From 78a753d4d80dcbe5b62d16af3019eb2739d2db3d Mon Sep 17 00:00:00 2001 From: Yuqi Date: Wed, 2 Jan 2019 09:41:41 +0800 Subject: [PATCH 01/54] =?UTF-8?q?=E4=BA=A7=E5=93=81=E7=AE=A1=E7=90=86?= =?UTF-8?q?=E6=80=9D=E7=BB=B4=E6=A8=A1=E5=BC=8F=E9=80=82=E5=90=88=E6=AF=8F?= =?UTF-8?q?=E4=B8=80=E4=B8=AA=E4=BA=BA=20(#4884)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Update product-management-mental-models-for-everyone.md * Update product-management-mental-models-for-everyone.md * 注意破折号的使用 * Update product-management-mental-models-for-everyone.md * Update product-management-mental-models-for-everyone.md --- ...t-management-mental-models-for-everyone.md | 238 +++++++++--------- 1 file changed, 119 insertions(+), 119 deletions(-) diff --git a/TODO1/product-management-mental-models-for-everyone.md b/TODO1/product-management-mental-models-for-everyone.md index c46dcf5b25c..54f28e42299 100644 --- a/TODO1/product-management-mental-models-for-everyone.md +++ b/TODO1/product-management-mental-models-for-everyone.md @@ -2,280 +2,280 @@ > * 原文作者:[Brandon Chu](https://blackboxofpm.com/@brandonmchu?source=post_header_lockup) > * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) > * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/product-management-mental-models-for-everyone.md](https://github.com/xitu/gold-miner/blob/master/TODO1/product-management-mental-models-for-everyone.md) -> * 译者: -> * 校对者: +> * 译者:[EmilyQiRabbit](https://github.com/EmilyQiRabbit) +> * 校对者:[zhmhhu](https://github.com/zhmhhu) -# Product Management Mental Models for Everyone +# 产品管理思维模式适合每一个人 ![](https://cdn-images-1.medium.com/max/1600/1*b61UVOBxXM0yEyzLME0tuw.gif) -Mental models are simple expressions of complex processes or relationships. These models are accumulated over time by an individual and used to make faster and better decisions. +思维模式是对复杂的过程和关系的简单表达。这些模式会随着时间的推移由个人逐渐积累,并能让人作出更快更好的决策。 -Here’s an example: _the_ **_Pareto Principle_** _states that roughly 80% of all outputs comes from 20% of the effort._ +一个例子是:帕累托原理(Pareto Principle)指出,大约 80% 的产出来自 20% 的努力。 -In the context of product management, the model suggests that instead of trying to create 100% of the customer opportunity, you may want to look for how to do 20% of the effort and solve 80% of the opportunity. Product teams make this trade off all the time, and the results often looks like feature launches where 20% of customers with more complicated use cases aren’t supported. +在产品管理的语境中,模型建议你应该更希望寻找如何付出 20% 的努力解决 80% 机会的方法,而不是去试图创造 100% 的客户机会。产品团队在一直权衡这一点,结果通常看起来像是特性发布后,20% 的有着更复杂用例的用户没有被支持。 -Mental models are powerful, but their utility is limited to the contexts they were extrapolated from. To combat this, you shouldn’t rely on one or even a few mental models, you should instead be continuously building a _latticework_ of mental models that you can draw from to make better decisions. +尽管思维模式非常强大,但它们的效用取决于它们所处的背景条件。为了防止这种情况发生,你应该不仅仅依赖一个或几个模型,而是应该持续的建造一个思维模型的**框架**,你可以从中吸取教训,做出更好的决定。 -This concept was popularized by Charlie Munger, the famed Berkshire Hathaway vice chairman, in a [speech](https://old.ycombinator.com/munger.html) where he reflected on how to gain wisdom: +查理·芒格(Charlie Munger)普及了这个概念,他是著名的伯克希尔哈撒韦公司副董事长,他在一次[演讲](https://old.ycombinator.com/munger.html)中提到了如何获取智慧: -> What is elementary, worldly wisdom? Well, the first rule is that you can’t really know anything if you just remember isolated facts and try and bang ’em back. 
If the facts don’t hang together on a latticework of theory, you don’t have them in a usable form. +> 什么是基础的、世间通用的智慧呢?第一条规则是,如果你只是记住孤立的事实,然后进行尝试和重复,你就什么都不知道。如果从这些事实没有总结出一致的理论体系,它们就没有可利用的形式。 -> You’ve got to have models in your head. And you’ve got to array your experience — both vicarious and direct — on this latticework of models. You may have noticed students who just try to remember and pound back what is remembered. Well, they fail in school and in life. You’ve got to hang experience on a latticework of models in your head. +> 你的脑海中已经有很多模型了。你必须要整理你的经验 — 间接的或者直接的都要 — 整理到这个模型框架中。你可能已经注意到有些学生只是试图记住并重复已经记住的东西。好吧,这样他们在学习和生活中都会失败的。你必须要将经验和脑海中的思维模式连接起来。 > -> What are the models? Well, the first rule is that you’ve got to have multiple models — because if you just have one or two that you’re using, the nature of human psychology is such that you’ll torture reality so that it fits your models, or at least you’ll think it does. You become the equivalent of a chiropractor who, of course, is the great boob in medicine. +> 什么是模型呢?第一个规则就是你必须拥有很多模型 — 因为如果你只有一个或两个使用着的思维模型,人类心理自然就会让你将现实世界折射得符合你的模型,或者至少你认为是符合的。你会成为像脊椎按摩师那样的人,毫无疑问,他是医学界的笑话。 > -> It’s like the old saying, “To the man with only a hammer, every problem looks like a nail.” And of course, that’s the way the chiropractor goes about practicing medicine. But that’s a perfectly disastrous way to think and a perfectly disastrous way to operate in the world. So you’ve got to have multiple models. +> 就像老话说的,“对于一个只有一把锤子的人来说,每一个问题都像一颗钉子。”当然,这也就是脊椎指压治疗者对医学的处理方式。但这是一种世界上的极其灾难性的思维和操作方式。所以你必须有多个模型。 -**This post outlines some of the most useful mental models that I’ve accumulated in my career in Product Management.** As I learn new models, I’ll continually update the post. +**这篇博客概述了一些在我职业生涯中积累下的最有用的思维模型**。如果我又获悉了新的模型,我将会持续更新博客。 -This is also **not** a post for just product managers, it’s for everyone that works on products. Product thinking is not sacred to the role of a PM, in fact, it’s even _more useful_ in the hands of the builders than PMs. +这篇博客也**不**仅仅适用于产品经理,而是所有为产品作出工作的人们。产品思维对于产品经理的作用并不是神圣的,事实上,它被构建者掌握可能更**有用**。 -#### The mental models we’ll cover are structured into the following categories: +#### 我们将会提及的思维模型被分为如下几类: -1. Figuring out Where to Invest -2. Designing and Scoping -3. Shipping and Iterating +1. 找到投资点 +2. 设计与范围 +3. 运输与迭代 * * * -**_Figuring out Where to Invest_ — **the next set of mental models are useful for deciding what your team should build, or “invest in”, next. +**找到投资点 —** 接下来的这一组思维模型对于决策团队应该构建或“投入”什么非常有用。 -### 1. Return on Investment +### 1. 投资回报 -A finance concept: for every dollar you invest, how much are you getting back? In product, think of the resources you have (time, money, people) as what you’re “investing”, and the return as impact to customers. +一个财务概念:你投资的每一元,获取到了多少回报?在产品中,把你拥有的资源(时间,财力,人员)当成是投资,把对客户的影响当成是回报。 ![](https://cdn-images-1.medium.com/max/800/1*WzqwU7lp6E5nRgART7XJxw.png) -#### How it’s useful +#### 如何应用 -The resources available to a product team are time, money, and [the number and skill of] people. When you’re comparing possible projects you could take on, you should always choose the one that _maximizes impact to customers for every unit of resources you have._ +对于一个产品团队来说,可用的资源包括时间,金钱以及人的数量和能力。当你在对比手里的项目时,你应当选择那个能够**使你手里的每一个资源对客户产生最大化影响力**的项目。 -### 2. Time value of shipping +### 2. 交付的时间价值 -Product shipped earlier is worth more to customers than product shipped at a later time. 
+对于客户而言,产品提前交付要比延后更加有价值。 ![](https://cdn-images-1.medium.com/max/800/1*JVAnPRwoPhnVSKWN2oQRDw.png) -#### How it’s useful +#### 如何应用 -When deciding between problems/opportunities to invest in, you can’t just compare the benefits of different features you could build (if you did, you would always choose the biggest feature). +当在困难或者机会中抉择如何投入的时候,你应该不仅仅权衡你构建的不同功能所能获取的收益(当然如果你这样做了,你一定会选择收益最大的那个功能)。 -Instead, to make good investment decisions, you also have to consider how quickly those features will ship, and place more value on features that will ship faster. +相反的,为了作出最好的投资决策,你也要考虑这些功能的交运速度,并对那些能够更快交运的功能多加关注。 -### 3. Time Horizon +### 3. 时间范围 -Related to the _Time Value of Shipping,_ the right investment decision changes based on the time period you are optimizing for. +和**运输的时间价值**相关,最佳决策会根据优化的时间段而变化。 ![](https://cdn-images-1.medium.com/max/800/1*lD889xYiJSidoYfrzDO5SA.gif) -Given a long enough time horizon, the cost of a 3 month vs. 9 month build is insignificant. +考虑到足够长的时间范围,3 个月与 9 个月构建的成本都是微不足道的。 -#### **How it’s useful** +#### 如何应用 -Choosing to ask _“How can we create the most impact in the next 3 months?”_ or _“How can we create the most impact in the next 3 years?”_ will result in dramatically different decisions for your team. +选择追问 **“你在接下来三个月中如何能够获取最大影响”** 或者 **“你在接下来三年中如何能够获取最大影响”** 所能够导致的结果将会非常不同。 -It follows then that aligning with your team and stakeholders about what time horizon to optimize for is often the first discussion to have. +接下来,通常情况下团队和股东的第一次讨论通常都是协调关于在哪个时间范围上进行优化。 -### 4. Expected Value +### 4. 期望值 -Predicting the future is imperfect. Instead, all decisions create probabilities of multiple future outcomes. The probability-weighted sum of these outcomes is the _expected value_ of a decision. +预测未来是不准确的。相反,所有的决策都会给未来带来多种可能的结果。这些结果的概率加权和就是决策的**期望值**。 ![](https://cdn-images-1.medium.com/max/800/1*_QclBIKqkgEehi61jVu7xQ.png) -#### How it’s useful +#### 如何应用 -When considering impact of a project, map out all possible outcomes and assign probabilities. Outcome variability typically includes the probability it takes longer than expected and the probability that it fails to solve the customer problem. +当考虑项目的影响时,找出所有的可能结果并分配概率。结果的可变范围通常包括它可能会需要比期望更长的时间,以及它可能并没有解决用户提出的问题。 -Once you lay out all the outcomes, do a probability-weighted sum of the value of the outcomes and you’ll have a better picture on the return you will get on the investment. +一旦你列出了所有的结果,并对所有结果的价值进行概率加权求和,你就能更好地了解投资的收益。 * * * -**_Designing and Scoping _— **the next set of mental models are useful for scoping and designing a product after you’ve chosen where to invest. +**设计和范围 —** 在选好了投资目标后,下一组思维模式对于确定产品的范围和设计非常有用。 -### 5. Working Backwards (Inversion) +### 5. 反向工作(反转) -Instead of starting at a problem and then exploring towards a solution, start at a perfect solution and work backwards to today in order to figure out where to start. +为了找出应该从哪里入手,应该从一个完美的解决方案开始,然后反向工作,而不是从一个有问题的部分开始,然后寻找解决方案。 ![](https://cdn-images-1.medium.com/max/800/1*v-dFL3r4rPFo6xPjr0VQ8w.png) -Note that working backwards isn’t universally better, it just creates a different perspective. +注意,反向工作并不总是更好的方法,它仅仅是创造出了一个不同的视角。 -#### How it’s useful +#### 如何应用 -Most teams tend to _work_ _forwards,_ which optimizes for what is practical at the cost of what’s ultimately impactful. +大多数团队更倾向于**正向工作**,它优化了实用性,但是代价却是最终的影响力。 -Working backwards helps you ensure that you focus on the most impactful, long term work for the customer because you’re always reverse-engineering from a perfect solution for them. 
+反向工作则帮助你确保你将专注于对用户最有影响力的长期工作上,因为你总是在一个完美的方案上逆向开展工程。 -Note that working backwards isn’t universally better, it just creates a different perspective. It’s healthy to plan using both perspectives. +注意,反向工作并不总是更好的方法,它仅仅是创造出了一个不同的视角。同时采用两种不同视角进行规划也是很不错的。 -### 6. Confidence determines Speed vs. Quality +### 6. 对产品的信心决定速度与质量 -The confidence you have in i) the importance of the problem your solving, and ii) the correctness of the solution you’re building, should determine how much you’re willing to trade off speed and quality in a product build. +你所拥有的信心 i) 对于解决问题非常重要,并且 ii) 你正在构建的解决方案的正确性,应该会决定你在产品构建中,多大程度上愿意去权衡速度和质量。 ![](https://cdn-images-1.medium.com/max/800/1*rqE-5eVKXLmkVLFux92d0g.png) -#### How it’s useful +#### 如何应用 -This mental model helps you to build a barometer to smartly trade off speed and quality. It’s easiest to explain this by looking at the extreme ends of the spectrum above. +这个思维模型可以帮助你建立一个晴雨表,用来更巧妙地权衡速度和质量。通过观察上面频谱的两端,解释这一点是很容易的。 -**On the right side:** you have confidence (validated through customers) that the problem you’re focused on is really important to customers, _and_ you know exactly what to build to solve it. In that case, you shouldn’t take any shortcuts because you know customers will need this important feature forever, so it better be really high quality (e.g. scalable, delightful). +**在最右侧**:对于用户来说,你正在专注的问题十分重要,对于这点你十分自信(经过用户验证,你的自信还会巩固),**并且**你十分肯定解决问题需要构建什么。在这种情况下,你不应该接受任何缺陷,因为你知道用户将来将会十分需要它,所以最好是高质量的(例如:可扩展性,用户友好)。 -**Now let’s look at the left side:** you haven’t even validated that the problem is important to customers. In this scenario, the longer you invest in building, the more you risk creating something for a problem that doesn’t even exist. Therefore, you should err on launching something _fast_ and getting customer validation that it’s worth actually building out well. For example, these are the types of situations where you may put up a landing page for a feature that doesn’t even exist to gauge customer interest. +**现在我们再看看左侧**:你甚至还没能确定这个问题对用户是否重要。在这种场景下,你投入项目构建的时间越长,就有大风险会生成某些之前不存在的问题。因此,你应该推出某些能**快速**迭代的产品来试错,并获得用户的认可,证实它确实值得最终高质量的构建。例如,在这些情况下,你可以为一个甚至是现在还不存在功能配置落地页面,来衡量客户的兴趣。 -### 7. Solve the Whole Customer Experience +### 7. 全面解决用户体验问题 -Customer experiences don’t end at the interface. What happens before and after using the product are just as important to design for. +用户体验不止于界面。在用户使用产品之前和之后会如何,也是很重要的设计点。 ![](https://cdn-images-1.medium.com/max/800/1*D_JfpzBPTU906raJzZVNPA.png) -#### How it’s useful +#### 如何应用 -When designing a product, we tend to over focus on the in-product experience (e.g. the user interface, in software). +在设计产品时,我们趋向于过度关注产品内的用户体验(比如软件内部的用户界面)。 -It’s just as important to design the marketing experience (how you acquire customers and set their expectations for the product before they use it), and the support/distress experience (how your company handles the product failing). +但是市场体验的设计(如何获取用户,以及如何在他们使用产品之前设定他们对产品的期望),还有产品支持/事故的用户体验(公司如何应对产品失误)也是同等重要的。 -Creating great distress experiences, in particular, are amazing opportunities to earn long term customer trust. For example, Amazon earns the most trust from you as a customer _when you have to return something._ +特别的,产生较大产品事故时的体验,是获取用户长期信任的大好机会。例如,**当作为客户的你不得不退回某些东西的时候**,亚马逊赢得了你很大部分的信任。 -### 8. Experiment, Feature, Platform +### 8. 实验,特征,平台 -There are three types of product development: Experiments, Features, and Platforms. Each have their own goal and optimal way to trade-off speed and quality. 
+产品开发有三种类型:实验,特征和平台。每个都有他们自己的目标,以及最佳方式来权衡速度和质量。 ![](https://cdn-images-1.medium.com/max/800/1*ilzmNU-5V1n8w4FLen4nVA.png) -#### How it’s useful +#### 如何应用 -By recognizing the type of product development your project is, you will define more appropriate goals for each type, and you will right-size the speed and quality trade off that you make. +认清项目的产品开发类型,你将能够为每个类别定义更合适的目标,并且为速度和质量与产品之间的权衡找到合适点。 -Experiments are meant to output _learning_, so that you can invest in new features or platforms with customer validation. If you optimize for learning, you will consider doing things that otherwise wouldn’t be palatable: for example using hacky code that you intend to throw away, and faking sophisticated software when it’s just humans doing it behind the scenes. +实验意味着要给出**经验**,因此你可以投入新的功能或者用户认可的平台。如果你是为了获取经验而作出的优化,你要考虑那些不经过实验就不合适的事情:比如使用你打算丢弃的 hacky 代码,以及伪造复杂的软件,它们只是人们仅在幕后会做的事情。 -In contrast to experiments, platforms are forever. Other people will build features on top of them, and as such making changes to the platform after it’s live is extremely disruptive. +和实验相反,平台则是永久性的。其他人会在它们之上建立新的特性,因此,在平台生效后对平台进行更改是非常具有破坏性的。 -Therefore, platform projects need to be very high quality (stability, performance, scalability, etc.) and they need to actually enable useful features to be built. A good rule of thumb when building platform is to build it with your first consumer, i.e. have another team simultaneously building a feature on your platform while you’re developing it — this way, you guarantee the platform actually enables useful features. +因此,平台项目需要高质量(稳定,性能良好,可扩展,等等),并且它们需要实际构建有用的功能。构建平台最好的法则就是和你的第一个客户一起构建它,即和一个在平台之上创建功能的团队同步开发 -- 这样,你就能确保平台实际上启用了可用的功能。 -### 9. Feedback Loops +### 9. 反馈循环 -Cause and effect in products are the result of systems connected by positive and negative feedback loops. +产品中的因果关系,是由正面和负面反馈回路所连接的系统产生的结果。 ![](https://cdn-images-1.medium.com/max/800/1*eIrnHqDy24SmYTM5VT9uBw.png) -#### How it’s useful +#### 如何应用 -Feedback loops help us remember that some of the biggest drivers of growth or decline for a product may be from other parts of the system. +反馈循环帮助我们记住,一个产品增长和下降的最大驱动因素可能来自系统的其他部分。 -For example, say you’re the payments team and your KPI is to grow total credit card payments processed. You have a positive feedback loop with the user acquisition team because as they grow users, you have more potential users that will pay with credit cards. However, you have a negative feedback loop with the cash payments team, who are trying to help users more easily to transact through cash. +例如,比如你是支付团队,你的 KPI 就是增加信用卡支付总额。你和用户获取的团队之间存在一个正向反馈循环,因为当他们增长了用户,你就有更多将会用信用卡支付的潜在用户。但是,你和现金支付团队之间有一个负向反馈循环,他们的目标是帮助用户更方便的使用现金交易。 -Knowing these feedback loops can help you change strategy (e.g. you may choose to work on general user acquisition as the best way to grow payment volume), or understand negative changes in your metrics (e.g. credit card payment volume is down, but it’s because the cash payments team is doing really well, not because the credit card products suck). +知道了这些反馈循环,可以帮助你调整方案(例如,你可以选择将获取一般用户作为增加支付量的最佳方式),或者了解指标的负面变化(例如,信用卡支付量下降,但是是因为现金支付团队做得很好,而不是信用卡产品糟糕)。 -### 10. Flywheel (recursive feedback loop) +### 10. 飞轮(递归反馈循环) -A state where a positive or negative feedback loop is feeding on itself and accelerating from it’s own momentum. +这是一个正向或逆向反馈回路能够以自身的动量为动力并加速的状态。 ![](https://cdn-images-1.medium.com/max/800/1*dQZTwGbDzYxyehti9NdUNg.png) -#### How it’s useful +#### 如何应用 -Flywheels are a related concept to feedback loops, but are important for managing platforms and marketplaces. 
For example, imagine you run Apple’s iOS app platform. You have two users: app developers, and app users. +飞轮和反馈循环是相关的概念,但是对于管理平台和交易市场非常重要。例如,假设你在运行苹果 iOS 应用平台。你有两种用户:应用开发者,和应用使用者。 -The flywheel is the phenomenon where more app users attract more app developers (because there is more opportunity to sell), which in turn attract more app users (because there are more apps to buy), which in turn attract more app developers, and so on. As long as you nurture the flywheel, not only will you grow, but you’ll grow at an accelerating rate. +飞轮就是更多应用用户吸引到更多应用开发者(因为有更多的机会可以售卖应用),然后反过来又会吸引更多的应用使用者(因为有更多的应用可以买),反过来又继续吸引更多的开发者,这样循环的现象。只要你培育好这个飞轮,就不仅是普通的增长,增长的速度也会提高。 -If you’re managing a flywheel, you have to do everything you can to keep it spinning in the positive direction, because it’s just as powerful the other way. For example, if there are so many apps on the platform that new apps can’t get discovered anymore, app developer growth will slow and break the flywheel — you need to solve that. +如果你正在管理一个飞轮,你就要倾尽全力来确保它向正向旋转,因为它逆向旋转的时候力量同样很大。例如,如果平台上的应用过多,新的应用无法被发现,应用开发者的增长速度就会下降,这将打破飞轮 -- 你需要解决类似这样的问题。 * * * -**_Building & Iterating _— **the next set of mental models are useful for when you’re building, operating, and iterating an existing product. +**构建和迭代 —** 当你在构建、操作和迭代已有产品的时候,下一个系列的思维模型就非常有用。 -### 11. Diminishing Returns +### 11. 收益递减 -When you focus on improving the same product area, the amount of customer value created over time will diminish for every unit of effort. +当你专注于改进相同的产品领域时,随着时间的推移,每一份努力创造的客户价值也将随之减少。 ![](https://cdn-images-1.medium.com/max/800/1*4Mk9GlI3Wze0M80vwdIpqA.png) -#### How it’s useful +#### 如何应用 -Assuming you are effectively iterating the product based on customer feedback and research, you will eventually hit a point where there’s just not that much you can do to make it better. It’s time for your team to move on and invest in something new. +假设你正在基于用户反馈和调研对产品进行有效的迭代,你最终都会达到那个无法使产品做得更好的程度。 那就是您的团队继续前进并投资新事物的时候了。 -### 12. Local Maxima +### 12. 本地最大 -Related to _diminishing returns_, the local maxima is the point where incremental improvements creates no customer value at all, forcing you to make a step change in product capabilities. +和收益递减相关,本地最大指的是增加优化但根本不产生用户价值,它迫使你在产品功能上作出改进。 ![](https://cdn-images-1.medium.com/max/800/1*G5jRnVstTrkTKcFVLozunw.png) -#### How it’s useful +#### 如何应用 -This mental model is tightly related to diminishing returns, with the addition of hitting a limit where it literally makes no material difference to keep improving something. _Iteration_ now serves no purpose, and and the only way to progress is to _innovate._ +这个思维模型与收益递减紧密相关,增加了一个限制,继续维持改进实际上将没有任何实质性的差异。**迭代**现在已经毫无意义,唯一继续的方法是**革新**。 -This concept was recently popularized by Eugene Wei’s viral post [Invisible Asymptotes](http://www.eugenewei.com/blog/2018/5/21/invisible-asymptotes), which covers an example like this that Amazon foresaw which led them to create Prime. +最近,由于 Eugene Wei 的病毒帖 [Invisible Asymptotes](http://www.eugenewei.com/blog/2018/5/21/invisible-asymptotes),这个概念很流行,帖子中包含这样一个例子:亚马逊的预见引领他们创造了 Prime。 -### 13. Version two is a lie +### 13. 第二版是谎言 -When building a product, don’t bank on a second version ever shipping. Make sure the first version is a complete product because it may be out there forever. +在构建产品时,请不要依赖版本二的产品。确保版本一是一个完整的产品,因为它可能永远存在。 ![](https://cdn-images-1.medium.com/max/800/1*m2032S9-aWtyxgLnKwBCdg.png) -When software was sold on shelves, teams had to live with version 1 forever. 
+当软件在货架上出售时,团队必须永远使用版本一。 -#### How it’s useful +#### 如何应用 -When you’re defining the first version of your product, you will accumulate all sorts of amazing features that you dream of adding on later in future versions. Recognize that these may never ship, because you never know what can happen: company strategy changes, your lead engineer quits, or the whole team gets reallocated to other projects. +当你定义产品的第一个版本时,你将聚集所有你希望在以后的版本中添加的令人惊叹的功能。你要认识到,一些功能可能永远也不会发布,因为你不知道将会发生什么:公司政策调整,你的工程师总监辞职,或者你的团队被重新分配到了其他项目中。 -To hedge against these scenarios, make sure that whatever you ship is a “complete product” which, if it was never improved again, would still be useful to customers for the foreseeable future. Don’t ship a feature that relies on future improvements in order to actually solve the problem well. +为了避开这样的场景,确保你发布的是一个“完整的产品”,就算它再也不升级,在可预见的未来里,它对于用户来说也依旧是可用的。不要发布一个依赖于未来改进才能解决问题的新功能。 -### 14. Freeroll +### 14. 免费锦标赛 -A situation where there is little to lose and lots of gain by shipping something fast. +指的是通过快速发布,损失很少而收益很大的场景。 ![](https://cdn-images-1.medium.com/max/800/1*eSuVg7xMVDoUCXtCEmrsrA.jpeg) -#### How it’s useful +#### 如何应用 -_Freerolls_ typically emerge in product when the current user experience is so bad that by making any reasonable change based on intuition is likely to make it much better. They are different than fixing bugs because bugs refer to something that’s not working as designed. +**免费锦标赛**通常在这样的产品中出现:当前的用户体验非常差,以至于作出任何一点基于直觉的合理的改变都很有可能带来很大改善。它们与修复 bug 不同,因为 bug 指的是没有按照设计工作的东西。 -If you’re in a situation where your team is thinking, _“Let’s just do something… we can’t really make it any worse”_, you likely have a freeroll in front of you. +如果你正在这样一种情境下:你的团队正在考虑,**“我们就这么做吧...反正也不能更糟了”**,也许你面前就正有一个免费锦标赛的机会。 -([r/CrappyDesign](https://www.reddit.com/r/CrappyDesign/) on Reddit is a treasure trove of such situations) +(Reddit 上的 [r/CrappyDesign](https://www.reddit.com/r/CrappyDesign/) 是一个上述情况的宝库) -### 15. Most value is created after version one +### 15. 大部分价值在第一个版本之后被创造出来 -You will learn the most about the customer after you launch the product, don’t waste the opportunity to build on those learnings. +在你发布了产品之后,你将会从用户那里学到很多,不要浪费了在这些经验之上构建产品的机会。 ![](https://cdn-images-1.medium.com/max/800/1*20d6A7nktJGqINckdVSelQ.png) -#### How it’s useful +#### 如何应用 -Everything is a hypothesis until customers are using the product at scale. While what your team invests in “pre-launch learning” — the customer interviews, prototype testing, quantitative analysis, beta testing, etc. — can give you a massive leg up on the probability of being right, there are always behaviours and edge cases that emerge once you ship the feature to 100% of customers. +在用户大规模地使用产品之前,一切都是假说。你的团队在发布前所投入的 — 用户访问,原型测试,定量分析,测试,等等 -- 都能大幅度支持你成功的可能性,一旦你将功能发布给所有的用户,总会出现产品行为(异常)和偏离的情况。 -As a percentage of customer insight learned, you will gain the majority of learning _after_ launch. To not investing accordingly by iterating the product (sometimes drastically), doesn’t make sense with that in mind. +学习一部分客户的审美价值后,你将会在产品发布**后**获取很多知识经验。如果不通过迭代产品(有时是大幅度的修改)进行相应的投入,那么学到了这些也是毫无意义的。 -### 16. Key Failure Indicator (KFI) +### 16. 失败的关键指标(KFI) -Pairing your Key Performance Indicators (KPIs) with metrics you _don’t_ want to see go in a certain direction, to ensure you’re focused on healthy growth. 
+将关键绩效指标(KPI)与你**不希望**看到的指标走向配对,以确保你专注于产品的健康成长。 ![](https://cdn-images-1.medium.com/max/800/1*ks8KCX4L9LVP88fiJzb_xQ.png) -#### How it’s useful +#### 如何应用 -Teams often choose KPIs that directly reflect the positive outcomes they’re looking for, without considering the negative ways that those outcomes could be achieved. Once they start optimizing for those KPIs, they actually create output that is net bad for the company. +团队经常选择能够直接反应他们正在寻求的正面结果的 KPI,而不考虑实现这些结果的负面影响。一旦他们开始为这些 KPI 优化,对于公司,他们实际上创造的是很糟糕的产出。 -A classic example is a team thinking they’re successful when doubling sign-up conversion on the landing page, only to observe (far too late) that the number of total customers isn’t growing because the conversion rate dropped by 60% due to the same change. +一个经典的例子是,当登陆页面上注册转换翻倍了的时候,一个团队认为他们取得了成功,但是却发现(为时已晚)总用户量并没有增长,因为由于相同的变化,转换率下降了 60%。 -KFIs keep your team’s performance in check, and make sure that you only create net-healthy outputs for the company. +KFI 能控制团队绩效,并确保您只为公司创建净健康产出。 -**Examples of popular KPI <> KFI pairings are:** +**流行的 KPI <> KFI 配对的例子包括:** -1. Grow revenue while maintaining gross margin -2. Grow adoption of feature A without taking away adoption of feature B -3. Grow adoption of feature A without increasing support load +1. 维持毛利率的同时增加收入 +2. 在不取消采用功能 B 的情况下,逐步采用功能 A +3. 在不增加支持负载的情况下,逐步采用功能 A * * * -### A latticework, not a checklist +### 使用框架,而不是清单 -It may be unsatisfactory to many readers, but as far as I can tell there is no methodology for using these mental models. If you try and use them as a checklist — going through each and seeing if they apply them — you will end up doing to mental gymnastics that will confuse and frustrate you and those around you. +对于读者来说,它可能并不让人很满意,但是据我所知,并没有使用这些思维模型的方法论。如果你试图将他们作为一个清单来使用 — 逐个尝试每一种方法来看看它们是否适用 —— 你最终会发现你只是在做思维训练,这会让你和周围的人都感到困惑和沮丧。 -Instead, they simply become part of your latticework, helping you make better decisions about product, and giving you the language to communicate the why behind complex decisions to your team. +相反,它们成为了你的思维框架的一部分,帮助你做出更好的产品决策,并为您提供一种语言,以便为您的团队传达复杂决策背后的原因。 -As you accumulate more models, ideally through experience, the better you will get. 
+当你累积了更多的模型,同时模型随着经验的累计变得更理想,你将会更加优秀。 > 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 From 5aa7790eee72b8cad51cf48043ce225ecfc72bd4 Mon Sep 17 00:00:00 2001 From: Tom Huang Date: Wed, 2 Jan 2019 09:55:39 +0800 Subject: [PATCH 02/54] =?UTF-8?q?=E5=80=BC=E7=B1=BB=E5=9E=8B=E5=AF=BC?= =?UTF-8?q?=E5=90=91=E7=BC=96=E7=A8=8B=20(#4909)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * 值类型导向编程(#4901) * value-oriented-programming: 根据review反馈修改文案 * value-oriented-programming: 标点符号修改 * Update value-oriented-programming.md --- TODO1/value-oriented-programming.md | 110 ++++++++++++++-------------- 1 file changed, 54 insertions(+), 56 deletions(-) diff --git a/TODO1/value-oriented-programming.md b/TODO1/value-oriented-programming.md index bfe8b01d832..589387ffc9c 100644 --- a/TODO1/value-oriented-programming.md +++ b/TODO1/value-oriented-programming.md @@ -2,14 +2,14 @@ > * 原文作者:[MattDiephouse](https://matt.diephouse.com) > * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) > * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/value-oriented-programming.md](https://github.com/xitu/gold-miner/blob/master/TODO1/value-oriented-programming.md) -> * 译者: -> * 校对者: +> * 译者:[nanjingboy](https://github.com/nanjingboy) +> * 校对者:[Bruce-pac](https://github.com/Bruce-pac) -# Value-Oriented Programming +# 值类型导向编程 -At WWDC 2015, in a very influential session titled [_Protocol-Oriented Programming in Swift_](https://developer.apple.com/videos/play/wwdc2015/408/), Dave Abrahams explained how Swift’s protocols can be used to overcome some shortcomings of classes. He suggested this rule: “Don’t start with a class. Start with a protocol”. +在 2015 WWDC 大会上,在一个具有影响力的会议([面向协议的 Swift 编程](https://developer.apple.com/videos/play/wwdc2015/408/))中,Dave Abrahams 解释了如何用 Swift 的协议来解决类的一些缺点。他提出了这条规则:“不要从类开始,从协议开始”。 -To illustrate the point, Dave described a protocol-oriented approach to a primitive drawing app. The example worked from a few of primitive shapes: +为了说明这一点,Dave 通过面向协议的方法描述了一个基本绘图应用。该示例使用了一些基本形状: ``` protocol Drawable {} @@ -28,21 +28,21 @@ struct Diagram: Drawable { } ``` -These are value types. That eliminates many of the problems of an object-oriented approach: +这些是值类型。它解决了面向对象方法中的许多问题: -1. Instances aren’t shared implicitly - - The reference semantics of objects add complexity when passing objects around. Changing a property of an object in one place can affect other code that has access to that object. Concurrency requires locking, which adds tons of complexity. - -2. No problems from inheritance - - Reusing code via inheritance is fragile. Inheritance also couples interfaces to implementations, which makes reuse more difficult. This is its own topic, but even OO programmers will tell you to prefer “composition over inheritance”. - -3. No imprecise type relationships - - With subclasses, it’s difficult to precisely identify types. e.g. with `NSObject.isEqual()`, you must be careful to only compare against compatible types. Protocols work with generics to precisely identify types. +1. 实例不能隐式共享 -To handle the actual drawing, a `Renderer` protocol was added that describes the primitive drawing operations: + 对象的引用在对象传递时增加了复杂性。在一个地方改变对象的属性可能会影响有权访问该对象的其他代码。并发需要锁定,这增加了大量的复杂性。 + +2. 无继承问题 + + 通过继承来重用代码的方式是脆弱的。继承还将接口与实现耦合在一起,这使得代码重用变得更加困难。这是它的特性,但即使是使用面向对象的程序员也会告诉你他更喜欢“组合而不是继承”。 + +3. 
明确的类型关系 + + 对于子类,很难精确识别其类型。比如 `NSObject.isEqual()`,你必须小心且只能与兼容类型比较。协议和泛型协同工作可以精确识别类型。 + +为了处理实际的绘图操作,我们可以添加一个描述基本绘图操作的 `Renderer` 协议: ``` protocol Renderer { @@ -52,7 +52,7 @@ protocol Renderer { } ``` -Each type could then `draw` with a `Renderer`. +然后每种类型都可以使用 `Renderer` 的 `draw` 方法进行绘制。 ``` protocol Drawable { @@ -83,7 +83,7 @@ extension Diagram : Drawable { } ``` -This made it possible to define different renderers that worked easily with the given types. A main selling point was the ability to define a test renderer, which let you verify drawing by comparing strings: +这使得定义根据给定类型并能为此轻松工作的各种渲染器变的可能。一个最主要的卖点是定义测试渲染器的能力,它允许你通过比较字符串来验证绘制: ``` struct TestRenderer : Renderer { @@ -96,7 +96,7 @@ struct TestRenderer : Renderer { } ``` -But you could also easily extend platform-specific types to make them work as renderers: +你也可以轻松扩展平台特定的类型,使其成为渲染器: ``` extension CGContext : Renderer { @@ -118,7 +118,7 @@ extension CGContext : Renderer { } ``` -Lastly, Dave showed that you can extended the protocol to provide conveniences: +最后,Dave 表明你可以通过扩展协议来提供方便: ``` extension Renderer { @@ -128,11 +128,9 @@ extension Renderer { } ``` -I think that approach is pretty compelling. It’s much more testable. It also allows us to interpret the data differently by providing separate renderers. And value types neatly sidestep a number of problems that an object-oriented version would have. +我认为这种方法非常棒,它具有更好的可测试性。它还允许我们通过提供不同的渲染器,从而使用不同的方式解释数据。并且值类型巧妙地回避了面对对象版本中可能遇到的许多问题。 -But I think there’s a better way to write this code. - -Despite the improvements, logic and side effects are still tightly coupled in the protocol-oriented version. `Polygon.draw` does 2 things: it converts the polygon into a number of lines and then renders those lines. So when it comes time to test the logic, we need to use `TestRenderer`—which, despite what the WWDC talk implies, is a mock. +虽然有所改进,但逻辑和副作用仍然在面向协议的版本中强度耦合。`Polygon.draw` 做了两件事:它将多边形转换为多条线,然后渲染这些线。因此,当需要测试这些逻辑时,我们需要使用 `TestRenderer` — 尽管 WWDC 暗示它只是一个模拟。 ``` extension Polygon : Drawable { @@ -145,7 +143,7 @@ extension Polygon : Drawable { } ``` -We can separate logic and effects here by turning them into separate steps. Instead of the `Renderer` protocol, with `move`, `line`, and `arc`, let’s declare value types that represent the underlying operations. +我们可以将逻辑和效果拆分成不同的步骤来区分它们。使用 `move`、`line` 和 `arc` 来替代 `Renderer` 协议,让我们声明代表这些底层操作的值类型。 ``` enum Path: Hashable { @@ -168,7 +166,7 @@ enum Path: Hashable { } ``` -Now, instead of calling those methods, `Drawable`s can return a set of `Path`s that are used to draw them: +现在,`Drawable` 可以通过返回一组用于绘制的 `path` 来替代方法调用: ``` protocol Drawable { @@ -198,7 +196,7 @@ extension Diagram : Drawable { } ``` -And now `CGContext` to be extended to draw those paths: +现在 `CGContext` 通过扩展来绘制这些路径: ``` extension CGContext { @@ -230,7 +228,7 @@ extension CGContext { } ``` -And we can add our convenience method for creating circles: +我们可以添加用来创建 circle 的便捷方法: ``` extension Path { @@ -240,11 +238,11 @@ extension Path { } ``` -This works just the same as before and requires roughly the same amount of code. But we’ve introduced a boundary that lets us separate two parts of the system. That boundary lets us: +这与之前的运行效果一样,并需要大致相同数量的代码。但我们引入了一个边界,让我们将系统的两个部分分开。这个边界让我们: + +1. 没有模拟测试 -1. Test without a mock - - We don’t need `TestRenderer` anymore. We can verify that a `Drawable` will be drawn correctly testing the values return from its `paths` property. `Path` is `Equatable`, so this is a simple test. 
+ 我们不再需要 `TestRenderer` 了,我们可以通过测试从 `paths` 属性返回的值来验证 `Drawable` 是否可以正确绘制。`Path` 是 `可进行相等比较` 的,所以这是一个简单的测试。 ``` let polygon = Polygon(corners: [(x: 0, y: 0), (x: 6, y: 0), (x: 3, y: 6)]) @@ -256,28 +254,28 @@ let paths: Set = [ XCTAssertEqual(polygon.paths, paths) ``` -2. Insert more steps - - With the value-oriented approach, we can take our `Set` and transform it directly. Say you wanted to flip the result horizontally. You calculate the size and then return a new `Set` with flipped coordinates. - - In the protocol-oriented approach, it would be somewhat difficult to transform our drawing steps. To flip horizontally, you need to know the final width. Since that width isn’t known ahead of time, you’d need to write a `Renderer` that (1) saved all the calls to `move`, `line`, and `arc` and then (2) pass it another `Render` to render the flipped result. - - (This theoretical renderer is creating the same boundary we created with the value-oriented approach. Step 1 corresponds to our `.paths` method; step 2 corresponds to `draw(Set)`.) - -3. Easily inspect data while debugging - - Say you have a complex `Diagram` that isn’t drawing correctly. You drop into the debugger and find where the `Diagram` is drawn. How do you find the problem? - - If you’re using the protocol-oriented approach, you’ll need to create a `TestRenderer` (if it’s available outside your tests) or you’ll need to use a real renderer and actually render somewhere. Inspecting that data will be difficult. - - But if you’re using the value-oriented approach, you only need to call `paths` to inspect this information. Debuggers can display values much more easily than effects. - - -The boundary adds another semantic layer, which opens up additional possibilities for testing, transformation, and inspection. - -I’ve used this approach on a number of projects and found it immensely helpful. Even with a simple example like the one given here, values have a number of benefits. But those benefits become much more obvious and helpful when working in larger, more complex systems. - -If you’d like to see a real world example, check out [PersistDB](https://github.com/PersistX/PersistDB), the Swift persistence library I’ve been working on. The public API presents `Query`s, `Predicate`s, and `Expression`s. These are reduced to `SQL.Query`s, `SQL.Predicate`s, and `SQL.Expression`s. And each of those is reduced to a `SQL`, a value representing some actual SQL. +2. 插入更多步骤 + + 使用值类型导向方法,我们可以使用 `Set` 并直接对其进行转换。假设你想要水平翻转结果。你只要计算尺寸,然后返回一个新的 `Set` 翻转坐标即可。 + + 在面向协议的方法中,绘制转换步骤会有些困难。如果想要水平翻转,你需要知道最终宽度。由于预先不知道这个宽度,你需要实现一个 `Renderer`,(1)它保存了所有的方法调用(`move`、`line` 和 `arc`)。(2)然后将其传递给另一个 `Render` 来渲染翻转结果。 + + (这个假设的渲染器创建了我们通过值类型导向方法创建的渲染器相同的边界。步骤 1 对应于 `.paths` 方法;步骤 2 对应于 `draw(Set)`。) + +3. 在调试时轻松检查数据 + + 假设你有一个没有正确绘制的复杂 `Diagram`。你进入调试器并找到绘制 `Diagram` 的位置。你如何定位这个问题? 
+ + 如果你正在使用面向协议的方法,你需要创建一个 `TestRenderer`(如果它在测试之外可用),或者你需要使用真实的渲染器并实际渲染某一部分。数据检查将变得很困难。 + + 但如果你使用值类型导向方法,你只需要调用 `paths` 来检查这些信息。相对于渲染效果,调试器更容易显示数据值。 + + +边界增加了另一个语义,为测试、转换和检查带来了更多的可能性。 + +我已经在很多项目中使用了这种方法,并发现它非常有用。即使是像本文给出的简单例子,值类型也具有很多好处。但在更大、更复杂的系统中,这些好处将变得更加明显和有用。 + +如果你想看一个真实的例子,请查看 [PersistDB](https://github.com/PersistX/PersistDB)。我一直在研究的 Swift 持久存储库。公共 API 提供 `Query`、`Predicate` 和 `Expression`。它们是 `SQL.Query`、`SQL.Predicate` 及 `SQL.Expression` 的简化版。它们中的每一个都会被转换成一个 `SQL`(一个代表一些实际 SQL 的值)。 > 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 From 6506f480262a9137eb78c998deca2e69c42b45f8 Mon Sep 17 00:00:00 2001 From: Yuqi Date: Wed, 2 Jan 2019 10:03:34 +0800 Subject: [PATCH 03/54] =?UTF-8?q?=E4=B8=BA=E4=BB=80=E4=B9=88=E6=88=91?= =?UTF-8?q?=E6=94=BE=E5=BC=83=E4=BA=86=20React=20=E8=80=8C=E8=BD=AC?= =?UTF-8?q?=E5=90=91=20Vue=20(#4924)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Update why-you-should-leave-react-for-vue-and-never-use-it-again.md * Update why-you-should-leave-react-for-vue-and-never-use-it-again.md * Update why-you-should-leave-react-for-vue-and-never-use-it-again.md --- ...ve-react-for-vue-and-never-use-it-again.md | 120 +++++++++--------- 1 file changed, 60 insertions(+), 60 deletions(-) diff --git a/TODO1/why-you-should-leave-react-for-vue-and-never-use-it-again.md b/TODO1/why-you-should-leave-react-for-vue-and-never-use-it-again.md index 8c08fc2c279..be75c0250db 100644 --- a/TODO1/why-you-should-leave-react-for-vue-and-never-use-it-again.md +++ b/TODO1/why-you-should-leave-react-for-vue-and-never-use-it-again.md @@ -2,78 +2,78 @@ > * 原文作者:[Gwenael P](https://blog.sourcerer.io/@gwenael.pluchon?source=post_header_lockup) > * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) > * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/why-you-should-leave-react-for-vue-and-never-use-it-again.md](https://github.com/xitu/gold-miner/blob/master/TODO1/why-you-should-leave-react-for-vue-and-never-use-it-again.md) -> * 译者: -> * 校对者: +> * 译者:[EmilyQiRabbit](https://github.com/EmilyQiRabbit) +> * 校对者:[luochen1992](https://github.com/luochen1992),[Moonliujk](https://github.com/Moonliujk) -# Why I left React for Vue. +# 为什么我放弃了 React 而转向 Vue。 ![](https://cdn-images-1.medium.com/max/2000/1*QIg6vEjZmT5YMVKU5Rxr2A.png) -[Today’s random Sourcerer profile: https://sourcerer.io/posva] +[今日随机的开源者个人简介:https://sourcerer.io/posva] -Recently, Vue.js gained more stars that React on Github. The popularity of this framework is soaring these days, and as it is not backed by a company like Facebook (React) or Google (Angular), it is surprising to see it rising out of nowhere. +最近,在 Github 上 Vue.js 比 React 获得更多的 star。该框架受欢迎程度近期飙升,并且由于它并没有类似于 Facebook(React)或者 Google(Angular)这样的公司支持,看到它从不知名的地方崛起,着实让人惊讶。 -### Evolution of web development +### 网页研发的进化 -Back in the old good days, in the 90’s, when we wrote a website, it was pure HTML, with some poor CSS styling. What was good is that it was pretty easy. What was bad is that we were lacking a lot of features. 
+回顾过去的光辉岁月,在 90 年代时,我们写网页,就是纯 HTML,以及一些简单的 CSS 样式。好处就是非常简单。但缺点是许多功能的缺失。 -Then came PHP, and we were happy to write things like : +然后有了 PHP,能写像这样的代码,我们很开心了: ![](https://cdn-images-1.medium.com/max/800/1*0QbOoPYacDrJjETxbhHMmw.jpeg) -source : [https://www.webplanex.com/blog/php-good-bad-ugly-wonderful/](https://www.webplanex.com/blog/php-good-bad-ugly-wonderful/) +来源:[https://www.webplanex.com/blog/php-good-bad-ugly-wonderful/](https://www.webplanex.com/blog/php-good-bad-ugly-wonderful/) -This nowadays looks terrible, but at that time it was an amazing improvement. This is what it’s all about : using new languages, frameworks, and tools, that we are a fan, until the day a competitor does something much better. +这些在现在看来简直可怕,但是在那个时候,已经是很惊人的进步了。这是它的全部意义所在:使用新的语言,框架,还有工具,我们热衷于此,直到竞争对手做得远远更好的那一天。 -Before React became popular I used Ember. I then switched to React and I felt enlightened by its wonderful way of making us develop everything as web components, its virtual DOM and its efficiency in rendering. Not everything was perfect for me but it was a huge improvement in the way I was coding. +在 React 如此流行之前,我使用的是 Ember。然后我转到了 React,它将我们所需要的开发抽象为网页组件,它使用虚拟 DOM 并且高效渲染,这些非常棒的方法都让我觉得眼前一亮。虽然对于我来说并不是十全十美的,但是相比于之前我写代码的方式,它已经有了巨大的进步。 -**Then I decided to give Vue.js a try and I won’t go back to React.** +**之后,我决定尝试 Vue.js,再之后我将不会回头使用 React 了。** -React does not completely suck, but I found it cumbersome, hard to master, and at some point the code I was writing did not look logical to me at all. It was such a relief to discover Vue and how it solves some of its older brother’s problems. +虽然 React 不是糟糕透了,但我发现它很笨重,难以管理,并且有时候我写的代码对于我来说看上去简直毫无逻辑可言。发现了 Vue 并知道了它是如何解决了它老哥 React 的一些问题,对我来说真是一种解脱。 -Let me explain why. +让我来解释一下原因吧。 -### Performance +### 性能 -First, let’s talk about size. +首先,我们来讨论一下体积。 -As every web developer is working according to limited network bandwidth, It is very important to limit the size of our webpages. The smaller the web page, the better. This is even more important now than it was a few years ago, with the rise of mobile browsing. +由于所有 web 开发者的工作都需要考虑网络带宽,所以限制网页大小就很重要。网页越小越好。现在,随着移动端浏览量快速上升,这一点甚至比在之前几年要更加重要。 -It is really difficult to evaluate and compare the sizes of React and Vue. If you want to build a website with React, you will need React-dom. Also, they come with different sets of features. But Vue is famous by its lightweight size and you will probably result having way less dependancy weight to carry with Vue. 
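以两个框架当时常见的最小入口文件为例,可以比较直观地看出默认依赖上的差别。下面只是一个简化的示意,假设使用 React 16 的常见写法,`App` 代表任意一个已有的根组件,文件名也仅作举例:

```
// React 的最小入口(例如 index.js):需要同时引入 react 和 react-dom 两个包
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

ReactDOM.render(React.createElement(App), document.getElementById('root'));
```

对应的 Vue 2 入口通常只需要 vue 一个依赖:

```
// Vue 的最小入口(例如 main.js)
import Vue from 'vue';
import App from './App.vue';

new Vue({ render: h => h(App) }).$mount('#app');
```

当然,打包后的实际体积还取决于构建配置和业务代码,这个对比只说明默认依赖的数量。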
+事实上很难评估和比较 React 和 Vue 的体积大小。如果你想要使用 React 构建网站,你也将会使用 React-dom。同样,它们有一系列不同的功能。但是 Vue 以轻量闻名,同时你也可能会因为使用了 Vue 而减少依赖包的大小。 -On raw performance, here are some figures: +关于原生性能,这里有一些数据: ![](https://cdn-images-1.medium.com/max/800/1*8apjMq6HAKJzu5mkeryLmA.png) -source : [https://www.stefankrause.net/js-frameworks-benchmark6/webdriver-ts-results/table.html](https://www.stefankrause.net/js-frameworks-benchmark6/webdriver-ts-results/table.html) +数据来源:[https://www.stefankrause.net/js-frameworks-benchmark6/webdriver-ts-results/table.html](https://www.stefankrause.net/js-frameworks-benchmark6/webdriver-ts-results/table.html) ![](https://cdn-images-1.medium.com/max/800/1*LahiEV9jeiJDNj3AXcSvyg.png) -source : [https://www.stefankrause.net/js-frameworks-benchmark6/webdriver-ts-results/table.html](https://www.stefankrause.net/js-frameworks-benchmark6/webdriver-ts-results/table.html) +数据来源:[https://www.stefankrause.net/js-frameworks-benchmark6/webdriver-ts-results/table.html](https://www.stefankrause.net/js-frameworks-benchmark6/webdriver-ts-results/table.html) -As you can see, this benchmark details that a Vue.js web app will take less memory and run faster than one made with React. +如你所见,这个基准测试详细地说明了,相比于使用 React,使用 Vue.js 的网页应用程序占用的内存更少,运行速度也更快。 -Vue will provide you a faster rendering pipeline which will help you build complex webapps. You will feel less concerned about optimising code as your projects will render more efficiently, letting you spend time on features that matter for your project. Mobile performance is here as well and you will rarely have to adapt an algorithm to make it render smoothly on phones. +Vue 将会为你提供更快的渲染管线,帮助你构建复杂的网页应用。由于你的项目能被更高效的渲染,你就不用那么顾虑代码优化,这能够让你能腾出时间用于项目的更重要的功能上。移动端性能也是如此,你将不怎么需要调整算法来保证手机上的平滑渲染。 -> You don’t have to compromise between size and performance when choosing Vue.js over React. You have both of them. +> 当你放弃 React 而选择了 Vue.js,你就不需要在应用大小和性能之间折中。你将能兼顾应用大小和性能。 -### Learning curve +### 学习曲线 -Learning React was quite OK. It was good to see a library built entirely around web components. React core is pretty well done and stable, but I had a lot of problems dealing with the advanced router configuration. What’s the actual thing with all those router versions ? There is 4 until now ( + React-router-dom), and I ended up using v3. It is pretty easy to deal with version selection once you are used to the stack, but when you’re learning, it is a pain. +学习 React 是可以的。了解一个完全围绕网页组件而构建的库是很好的事情。React 的核心是完美且稳固的,但是在我处理高级路由配置的时候我遇到了很多问题。所有这些路由版本的实际情况是什么?目前已经到了第四版(+ React-router-dom),我最终使用的是第三版。只要你习惯了这个技术栈,选择版本其实很容易,但是学习的过程却很痛苦。 -#### Third party libraries +#### 第三方库 -Most of the recent frameworks share a common design philosophy : A simple core, without a lot of features, and you can enrich them by setting up other libraries on top of it. Building a stack can be really straighforward, with the condition that additional libraries can be integrated without difficulties, and in the same way for each one of them. It is very important for me that this step should be as straighforward as possible. +大多数近代框架都普遍遵从一个原理:内核简单,没有太多功能,你可以通过在它们之上配置其他的库,来丰富它们。构建一个技术栈可以非常简单,条件是可以毫不费力的集成其他库,并用相同的方式为每个库集成。对我来说至关重要是,这一步应该尽可能的简单明了。 -Both React and Vue have a tool that helps you to kickstart projects configured with additional tools. Available libraries can be pretty hard to master in the React ecosystem, as there are sometimes several libraries to solve the same problem. +React 和 Vue 都有工具,用来帮助你使用附加的工具开启项目配置。在 React 生态系统中,可用库很难掌握,因为有时候很多个库解决的是同一个问题。 -On this part, React and Vue did pretty well. 
+在这部分,React 和 Vue 都很出色。 -#### Code clarity +#### 代码清晰度 -In my opinion, React is pretty bad. JSX, the built-in syntax to write html code, is an abomination in terms of clarity. +我的观点是,React 糟糕透了。JSX,写 html 代码的内建语法,在清晰度方面是很让人头疼的。 -This is one of the common way to write a “if” condition in JSX : +这是一个使用 JSX 写 “if” 条件句的常规方法: ``` ( @@ -87,7 +87,7 @@ This is one of the common way to write a “if” condition in JSX : ); ``` -And this is in vue : +这则是 vue 的写法: ``` ``` -You’ll run into other problems. Trying to call methods from component templates will often result having no access to “this”, resulting in that you have to bind them manually : `
` . +你将会遇到其他问题。在组件模版中调用方法经常会遇到无法获取 “this” 的问题,结果是你需要手动绑定:`
`。 ![](https://cdn-images-1.medium.com/max/800/1*AmMOMOzb_rAfA7MOUPWSfA.gif) -At some point things are getting so illogic with React… +在某些时候,使用 React 让事情变得非常不合逻辑... -Assuming that you’re probably going to write a lot of conditionals in your app, the JSX way is terrible. That way of writing loops looks like a joke to me. Sure you can change the templating system, remove JSX from a React stack, or use JSX with Vue, but as it’s not the first thing you are going to do when learning a framework, it’s not the point. +假设你需要在应用中写很多条件判断语句,用 JSX 的方法就很不好。而用这个方法来写循环的话,对我来说简直像看笑话。当然你可以改变模版系统,把 JSX 从 React 技术栈中移除,或者和 Vue 一起使用 JSX,但是当你学习一个框架的时候,这不是首先要做的事情,这不是解决问题的重点。 -Another point, you won’t have to use **setState** or any equivalent with Vue. You will still have to define all the state properties in a “data” method, but if you forget, you will see a notice in the console. The rest is automatically handled internally, just change the value in your component as you do in a regular Javascript object. +另一方面,使用 Vue 你不需要使用 **setState** 或者其他类似的东西。你仍然需要在一个 “data” 方法中定义所有状态属性,如果你忘了,你将会在控制台看到提示。余下的部分将会自动的在内部被处理,你只需要像操作常规 Javascript 对象那样,在组件中修改属性的值。 -You are going to run into a lot of code errors with React. It will make your learning process slow even if the underlying principles are actually simple. +使用 React 你将会遇到很多代码错误。所以尽管潜在的规则其实非常简单,你的学习进程还是会非常慢。 -Concerning conciseness, a code written with Vue is way smaller than one written with other frameworks. This is actually the best part of the framework. Everything is simple, and you will find yourself writing complex features with only few understandable lines, while with other frameworks, it will take you 10%, 20%, sometimes 50% more lines. +考虑到简明性,使用 Vue 写的代码要比使用其他框架更加轻量。这其实是 Vue 框架最棒的部分。所有的东西都很简单,你将会发现你能够仅使用几行易懂的代码,就写出很复杂的功能,而使用其他框架,将会多使用 10%,20%,有时候会是 50% 更多的代码量。 -You don’t need to learn a lot either. Everything is pretty straightforward. Writing Vue.js code gets you pretty close to the conceivable minimal way of implementing your thoughts. +你也不需要额外学习什么。所有的内容都很简明直接。写 Vue.js 代码可以让你非常靠近实现你想法的最简路径。 -This ease of use makes Vue, a really good tool if you want to adapt and communicate. Either you want to change other parts of your stack, enroll more people in your team for an emergency situation, or do some experimentations on your products, it will definitely take less time, and thus money. +这样易用性使得 Vue 成为了一个很好的帮助你适应和交流的工具。不管是你想要修改你项目技术栈的其他部分,由于紧急情况为团队招募更多的人,还是在产品上施展实验,它都绝对能让你花费更少的时间和金钱。 -Time estimations are made pretty easy because implementing a feature does not require much more than what the developers estimate, leading to a small number of possible confusions, mistakes or oversights. And the small number of concepts to understand makes communicating with project managers easier. +时间预算也非常容易,因为实现一个功能的时间不需要比开发者估计的多很多,结果就是更少可能的引起的冲突,错误或疏忽。要理解的概念很少,这使得与项目经理的沟通变得更加容易。 -### Conclusion +### 总结 -Whether speaking on size, performance, simplicity, or a learning curve; embracing Vue.js definitely looks like a good bet nowadays, making you save both time and money. +不管是体积,性能,简易性,或者学习曲线哪个方面,拥抱 Vue.js 吧,这绝对是当前非常好的选择,让你能够节省时间和金钱。 -Its weight and performance also allows you to have a web project with 2 frameworks at the same time (Angular and Vue for instance), and this will allow you an easy transition to Vue. +它的重量和性能也让你能够有一个同时使用两个框架(比如 Angular 和 Vue)的网络项目,这将能让你轻松的转型到 Vue。 -Concerning the community and the popularity, even if Vue has more stargazers now, we can’t say that it has reached React’s popularity yet. 
But the fact that a framework became so popular without it being backed by a huge IT company is definitely good to see. Its market share has quickly grown from an unknown project to one of the biggest competitors in front-end development. +考虑到社区和用户量,现在即使是 Vue 也有了更多人给的 star,但我们也不能说它已经赶上了 React。但是事实上一个框架没有 IT 巨头公司在后面支持却如此流行,它也是绝对足够好而值得一看的。在前端开发的领域,它的市场占比已经很快的从一个不知名的项目成长为一个很强的竞争者。 -The number of modules built on top of Vue is soaring and if you don’t find a specific one to suit your needs, you will not spend a long time developing what you need. +建立在 Vue 基础上的模块正在激增,而如果你没有找到你个能够满足你需求的,你也不会花太长的时间去开发出你所需要的那个。 -This framework makes understanding, sharing, and editing easy. Not only will you feel comfortable digging into other’s code, but you will also be able to edit their implementations easily. In a matter of months, Vue made me feel way more confident when dealing with sub-projects and external contributions to projects. It made me save time, focus on what I really wanted to design. +这个框架让理解,分享和编辑都变得容易。你在研究其他人的代码的时候不仅会觉得很舒适,而且还能很容易的修改他们的实现方法。几个月的时间,Vue 让我在处理项目的子项目和外部贡献者的时候自信了很多。它为我节省了时间,让我能专注于我真正想要设计的事物。 -React was designed to be used with helpers such as setState, and you **will** forget to use them. You will struggle writing templates, and the way you write them will make your project hard to understand and maintain. +React 被设计为需要使用像 setState 这样的帮助方法,你**将会**忘记去用他们。你在写模版的时候会很痛苦,这样写将会让你的项目很难被理解,很难维护。 -Concerning the use of those frameworks in a large scale project, with React you will need to master other libraries and to train your team to use them. With all the related problems (X does not like this lib, Y don’t understand that). Vue stacks are simpler for the greater good of your team. +关于在大型项目中使用这些框架,如果使用 React 你将会需要管理其他库并且训练你的团队也去使用。这会导致很多连带的问题(X 不喜欢这个库,Y 不懂那个库)。Vue 技术栈则简单很多,对团队大有好处。 -> As a developer, I feel happy, confident and free. As a project manager, I can plan and communicate with my teams more easily. And as a freelancer, I save time and money. +> 作为开发者,我感到愉悦自信和自由。作为项目经理,我能和我的团队更加轻松的计划和交流。作为自由职业者,我节省了时间和金钱。 -There are still some needs that are not yet covered by Vue (especially if you want to build native applications). React performs pretty well in that field, but Evan You and the Vue team is already working hard on that. +Vue 依旧有很多没有覆盖到的需求(特别是如果你想要构建本地应用)。在这个领域 React 的性能很好,但是 Evan You 和 Vue 团队也已经在这方面作出努力了。 -> React is popular because of some good concepts and the way these are implemented. But looking back, it looks like a bunch of ideas in an ocean of mess. +> React 很流行,因为它的一些很好的观念以及观念实现的方法。但是回头看看,它却看起来像在一个混乱海洋里的一堆点子。 -Writing React code is about dealing with workarounds all day long (cf “code clarity” part), struggling on code that actually make sense, to finally hack it and produce a really unclear solution. This solution will be hard to read when you come back to it a few months later. You will work harder to release your project, and it will be hard to maintain, have errors and need a lot of training to be modified. +写 React 代码就是整天在寻找解决办法(可以比照“代码清晰度”那部分),在已经有意义的代码上挣扎,最后破解了它并产生了一个真的很不明确的方案。这个方案在你几个月后回头重新看它的时候将会非常难以阅读。为了发布项目你需要更努力的工作,并且它还会很难维护,会出错,并且需要很多的学习才能修改。 -These are negative aspects nobody wants in their projects. Why would you still run into these troubles? Community and third party libraries? So much pain that could be avoided for few points that are becoming less problematic everyday. 
+没人想要这些缺点在自己的项目里出现。为什么你还要继续面对这些问题呢?社区和第三方库?每天都变得不那么成问题的几点,却可以让你避免这么多痛苦。 -After years of dealing with frameworks that in some cases made my life easier, but in some others, complicates a lot the way of implementing a feature, Vue is a relief to me. Implementations are very close to how I plan to develop features, and while developing, there is nearly nothing particular to think about, apart of what you really want to implement. It looks very close to the native Javascript logic (no more **setState**, special ways to implement conditionnals or pieces of algorithms). You just code as you want. It is fast, safe, and it makes you happy :D. I am glad to see Vue being adopted more and more by frontend developers and companies, and I hope it will soon be the end of React. +这么多年一直和框架打交道,它们有时候让我的生活更轻松,有时候实现一个功能却复杂很多,这之后 Vue 对我来说是一种解脱。实现方法和我计划如何开发功能很接近,然后开发过程中,除了你真正想要实现的东西,几乎没有什么特别需要思考的。它和原生的 Javascript 逻辑非常相近(不会有 **setState**,实现条件语句的特别方式以及算法)。你只需要随心所欲的写代码。它快速,安全,让你愉快 :D。我很高兴看到 Vue 正在被更多的前端开发者和公司接纳,我希望它能够很快终结 React。 -_Disclaimer : This article is opinionated and shows my point of view at the moment. As technologies evolve, they will be subject to change (for the better or the worse)._ +**免责声明:这篇文章仅代表我个人此刻的看法。随着科技的进步,它们也将会改变(更好或者更坏)。** -[EDIT] Changed the title, according to a suggestion of [James Y Rauhut](https://medium.com/@seejamescode?source=post_header_lockup). +[编辑] 根据 [James Y Rauhut](https://medium.com/@seejamescode?source=post_header_lockup) 的意见,修改了题目。 -[EDIT] Changed the paragraph speaking about framework size comparison. As pointed out, it is really difficult to evaluate and will always end up creating arguments between people and their architectures, based on their needs. +[编辑] 修改了谈论关于比较框架大小的段落。正如文章指出的,评估很困难,并且基于需求不同,也经常会在人和框架之间引起争论。 > 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 From db21d22f44a328e7a8d13c8a847951dccdce3052 Mon Sep 17 00:00:00 2001 From: Hopsken Date: Wed, 2 Jan 2019 10:14:25 +0800 Subject: [PATCH 04/54] =?UTF-8?q?=E8=89=B2=E5=BD=A9=E6=97=A0=E9=9A=9C?= =?UTF-8?q?=E7=A2=8D=E6=80=A7=E4=BA=A7=E5=93=81=E8=AE=BE=E8=AE=A1=E6=8C=87?= =?UTF-8?q?=E5=8D=97=20(#4930)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * finish translation * Update a-guide-to-color-accessibility-in-product-design.md * Update a-guide-to-color-accessibility-in-product-design.md * 根据校对意见修改 * Update a-guide-to-color-accessibility-in-product-design.md --- ...o-color-accessibility-in-product-design.md | 67 +++++++++---------- 1 file changed, 31 insertions(+), 36 deletions(-) diff --git a/TODO1/a-guide-to-color-accessibility-in-product-design.md b/TODO1/a-guide-to-color-accessibility-in-product-design.md index d20d7b0d8a2..2f8d7828df1 100644 --- a/TODO1/a-guide-to-color-accessibility-in-product-design.md +++ b/TODO1/a-guide-to-color-accessibility-in-product-design.md @@ -2,81 +2,76 @@ > * 原文作者:[InVision](https://medium.com/@InVisionApp?source=post_header_lockup) > * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) > * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/a-guide-to-color-accessibility-in-product-design.md](https://github.com/xitu/gold-miner/blob/master/TODO1/a-guide-to-color-accessibility-in-product-design.md) -> * 译者: -> * 校对者: +> * 译者:[Hopsken](https://hopsken.com) +> * 校对者:[Ivocin](https://github.com/Ivocin) -# A guide to color accessibility in product design +# 色彩无障碍性产品设计指南 -## There’s a lot of talk about accessible design, but have you ever 
thought about color accessibility? +## 关于无障碍设计的讨论有很多,但你是否想过色彩的无障碍设计? -Recently, a client brought in a project with very specific, complex implementations of an accessible color system. This opened my eyes not only to how important this subject is, but also how much there is to learn. +最近,有一个客户带来了一个项目,该项目具有非常具体、复杂的无障碍色彩体系。这让我意识到这个课题是如此重要,其内容又是如此丰富。 ![](https://cdn-images-1.medium.com/max/800/1*U3GwUaniqzo5nZYd2LkaUA.png) -This story is by [Justin Reyna](https://twitter.com/justinreyreyna) +图片:[Justin Reyna](https://twitter.com/justinreyreyna) -Let’s learn how to go color accessible using the design principles you already know. +让我们来学习如何使用你已经知道的设计原则来进行色彩无障碍设计。 -### Why’s accessibility so important? +### 为什么无障碍性如此重要? -[Accessibility](https://invisionapp.com/inside-design/accessibility-for-developers/) in digital product design is the practice of crafting experiences for all people, including those of us with visual, speech, auditory, physical, or cognitive disabilities. As designers, developers, and general tech people, we have the power to create a web we’re all proud of: an inclusive web made for and consumable by all people. +数字产品的[无障碍设计](https://invisionapp.com/inside-design/accessibility-for-developers/)旨在为所有人提供精致的使用体验,这些人包括有视觉、语言、听觉、身体或者认知障碍的人。作为设计师、开发者以及所有科技行业从业人员,我们有能力去创造一个我们所有人都为之骄傲的网络 — 一个为所有人创造,服务于所有人,不排斥任何群体的网络。 -Also, not creating accessible products is just rude, so don’t be rude. +而且,做出不具备无障碍性的产品是种很粗鲁的行为。所以,请保持礼貌。 -[Color accessibility](https://invisionapp.com/inside-design/guide-web-content-accessibility/) enables people with visual impairments or color vision deficiencies to interact with digital experiences in the same way as their non-visually-impaired counterparts. In 2017, [The World Health Organization](http://www.who.int/en/news-room/fact-sheets/detail/blindness-and-visual-impairment) estimated that roughly 217 million people live with some form of moderate to severe vision impairment. That statistic alone is reason enough to design for accessibility. +[色彩无障碍设计](https://invisionapp.com/inside-design/guide-web-content-accessibility/)使得有视力障碍或者色觉缺陷的人能够获得与正常人同样的数字体验。2017 年,[WHO(世界卫生组织)](http://www.who.int/en/news-room/fact-sheets/detail/blindness-and-visual-impairment)估计,大约有 2.17 亿人患有某种形式的中度至重度视力障碍。仅凭这个数据,我们就有足够的理由去做无障碍设计。 -> _“Not creating accessible products is just rude, so don’t be rude.”_ +> **“做出不具备无障碍性的产品是种很粗鲁的行为。所以,请保持礼貌。”** -Apart from accessibility being an ethical best practice, there are also potential legal implications for not complying with regulatory requirements around accessibility. In 2017, plaintiffs filed at least [814 federal lawsuits](https://www.adatitleiii.com/2018/01/2017-website-accessibility-lawsuit-recap-a-tough-year-for-businesses/) about allegedly inaccessible websites, including a number of class actions. Various organizations have sought to establish accessibility standards, most notably the United States Access Board (Section 508) and the World Wide Web Consortium (W3C). Here’s an overview of these standards: +无障碍设计不仅仅只是道德上的最佳实践,如果不服从关于无障碍性的监管要求,还会有潜在的法律隐患。在 2017 年,联邦法院收到过至少 [814 条](https://www.adatitleiii.com/2018/01/2017-website-accessibility-lawsuit-recap-a-tough-year-for-businesses/)关于网站涉嫌未提供无障碍访问的诉讼,包括为数不少的集体诉讼。各个组织都在努力建立无障碍性标准,其中最著名的是美国无障碍委员会(United States Access Board,Section 508)和 W3C 组织(World Wide Web Consortium)。以下是这些规范的概述: -* **Section 508:** 508 compliance refers to Section 508 of the Rehabilitation Act of 1973. 
You can read the in-depth ordinance [here](https://www.section508.gov/manage/laws-and-policies), but to summarize, Section 508 requires that your site needs to be accessible if you are a federal agency or create sites on behalf of a federal agency (like contractors). -* **W3C:** The World Wide Web Consortium (W3C) is an international, voluntary community that was established in 1994 and develops open standards for the web. The W3C outlines their guidelines for web accessibility within [WCAG 2.1](https://www.w3.org/TR/WCAG21/), which is essentially the gold standard for web accessibility best practices. +* **Section 508**:508 号法令援引自 1973 年康复法案(Rehabilitation Act of 1973)的第 508 节。你可以在[这里](https://www.section508.gov/manage/laws-and-policies)找到详细的说明。总而言之,根据 508 法令,如果你隶属于任何联邦机构,或者为联邦机构构建网站(例如:承包商),那么你的网站必须具有无障碍性。 +* **W3C**:W3C 组织是一个国际性的自发性组织,于 1994 年建立,为互联网提供开发性规范。在 [WCAG 2.1](https://www.w3.org/TR/WCAG21/) 中,W3C 概述了它们关于互联网无障碍性的指导方针。这基本上就是互联网无障碍设计的金科玉律。 -### Ensuring your product is color-accessible +### 确保你的产品具备色彩无障碍性 -Accounting for accessibility early on in a product’s life cycle is best — it reduces the time and money you’ll spend to make your products accessible retroactively. Color accessibility requires a little up-front work when selecting your product’s color palette, but ensuring your colors are accessible will pay dividends down the road. +最好是在产品生命周期的早期就考虑无障碍性 —— 这在以后可以帮您省下不少时间和金钱。为了保证色彩无障碍性,在你为产品选择主题色彩时就要考虑好,随着产品发展下去,你会发现这么做的好处。 -Here are some quick tips to ensure you’re creating color-accessible products. +这里给出一些小技巧来帮助你打造色彩无障碍性产品。 -#### Add enough contrast +#### 提供足够的对比度 -To meet [W3C’s minimum AA rating](https://www.w3.org/TR/UNDERSTANDING-WCAG20/visual-audio-contrast-contrast.html), your background-to-text contrast ratio should be at least 4.5:1. So, when designing things like buttons, cards, or navigation elements, be sure to check the contrast ratio of your color combinations. +为了达到 [W3C 标准 AA 评级](https://www.w3.org/TR/UNDERSTANDING-WCAG20/visual-audio-contrast-contrast.html)最低限度,背景与文字的对比度至少为 4.5:1。因此,在设计按钮、卡片或者导航元素之类时,记得检查色彩组合的对比度是否符合要求。 ![](https://cdn-images-1.medium.com/max/800/1*PZXhnoxM0Sza0AJWp8G1BA.png) -There are lots of tools to help you test the accessibility of color combinations, but the ones I’ve found most helpful are [Colorable](https://colorable.jxnblk.com/ffffff/6b757b) and [Colorsafe](http://colorsafe.co/). I like Colorable because it has sliders that allow you to adjust Hue, Saturation, and Brightness in real time to see how it affects the accessibility rating of a particular color combination. +有很多工具可以帮助你检查色彩组合的无障碍性,我个人认为最好用的是 [Colorable](https://colorable.jxnblk.com/ffffff/6b757b) 和 [Colorsafe](http://colorsafe.co/)。我之所以喜欢 Colorable 是因为你可以通过使用滑动条来调整色相、饱和度和明度,它会实时显示出你的调整将如何影响特定颜色组合的无障碍性评分。 -#### Don’t rely solely on color +#### 不要单纯依赖颜色 -You can also ensure accessibility by making sure you don’t rely on color to relay crucial system information. So, for things like error states, success states, or system warnings, be sure to incorporate messaging or iconography that clearly calls out what’s going on. +为了保证无障碍性,确保你没有完全依赖颜色来展示系统不同层级的关键信息。因此,对于错误状态、成功状态或者系统警告等,诸如此类,确保同时使用文字或者图标来清晰地展示发生了什么。 ![](https://cdn-images-1.medium.com/max/800/1*gmsRDSNDAzUqs-SG-D5P4Q.png) -Also, when displaying things like graphs or charts, giving users the option to add texture or patterns ensures that those who are colorblind can distinguish between them without having to worry about color affecting their perception of the data. 
[Trello](https://www.trello.com/) does a great job of this with their [Colorblind-Friendly Mode](https://twitter.com/trello/status/543420024166174721?lang=en). +除此以外,当展示图片、表格之类时,允许用户选择是否加入纹理或图案。确保色盲用户能够准确地分辨出它们,而不用担心颜色会影响他们对数据的理解。[Trello](https://www.trello.com/) 在这上面做得很棒,它特别提供了[色盲友好模式](https://twitter.com/trello/status/543420024166174721?lang=en)。 ![](https://cdn-images-1.medium.com/max/800/1*D6PDBf8Y7YNof6Fkh9X5gQ.png) -### Focus state contrast +### 聚焦(Focus)状态对比度 -Focus states help people to navigate your site with a keyboard by giving them a visual indicator around elements. They’re helpful for people with visual impairments, people with motor disabilities, and people who just like to navigate with a keyboard. +当使用键盘浏览站点时,聚焦状态可以通过在元素周围显示视觉引导来帮助人们在页面上导航。这对有视觉缺陷、运动障碍,以及单纯喜欢用键盘导航的人群会很有帮助。 -All browsers have a default focus state color, but if you plan on overriding that within your product, it’s crucial to ensure you’re providing enough color contrast. This ensures those with visual impairments or color deficiencies can navigate with focus states. +所有浏览器都有一个默认的聚焦状态颜色,但是如果你打算在你的产品上覆盖掉它,那么请务必确保你有提供足够的色彩对比度。这使得有视力障碍或色觉缺陷的人群可以通过聚焦状态在页面内导航。 -#### Document and socialize color system +#### 文档化和推广色彩系统 -Lastly, the most important aspect of creating an accessible color system is giving your team the ability to reference it when needed, so everyone is clear about proper usage. This not only reduces confusion and churn, but also ensures that accessibility is always a priority for your team. In my experience, explicitly calling out the accessibility rating of a specific color combination within a UI Kit or Design System is most effective, especially when socializing that across the team with a tool (like [InVision Craft](https://www.invisionapp.com/craft) or [InVision DSM](https://support.invisionapp.com/hc/en-us/articles/115005685166-Introduction-to-Design-System-Manager)). Here’s an example of how to document background to text color combinations and the accessibility rating of each combination. +最后,创建色彩无障碍系统过程中最关键的一步就是,要让你的团队能够在需要的时候能够查阅它,这样每个人都清楚恰当的用法。这不仅可以减少混乱和滥用,也可以保证在你的团队中无障碍设计永远是个优先事项。根据我的经验,明确地在 UI 套件或设计系统中显示出特定颜色组合的可访问性评级是最有效的,尤其是在通过某个工具(如:[InVision Craft](https://www.invisionapp.com/craft) 或 [InVision DSM](https://support.invisionapp.com/hc/en-us/articles/115005685166-Introduction-to-Design-System-Manager))进行团队间合作时。这里有一个关于如何文档化背景文字颜色组合及其可访问性评级的例子。 ![](https://cdn-images-1.medium.com/max/800/1*N_9UOR4mnJyxJq4Cg071LQ.png) -### Let’s get accessible +### 让我们行动起来 -These are just a few tips to make your product more accessible, but keep in mind, these only relate to color accessibility. To understand accessibility guidelines in detail, I recommend familiarizing yourself with [WCAG 2.1](https://www.w3.org/TR/WCAG21/). While these guidelines can be a bit daunting, there are _tons_ of resources out there to help you along the way, and when in doubt, don’t hesitate to reach out to designers in your area (or via the internet) for help. 
+这只是一些提高产品无障碍性的小建议。另外,别忘了这只是关于色彩无障碍性的建议。要想详细地了解无障碍设计原则,推荐先熟悉 [WCAG 2.1](https://www.w3.org/TR/WCAG21/) 规范。虽然这些规范看上去有些吓人,但网上有**大量的**的资源可以帮到你。如果遇到困难,不要犹豫,向你身边的(或者网上的)设计师们寻求帮助。 -**Originally published at [_invisionapp.com_](https://www.invisionapp.com/inside-design/color-accessibility-product-design).** - -* [Accessibility](https://medium.com/tag/accessibility?source=post) -* [UX Design](https://medium.com/tag/ux-design?source=post) -* [Desig](https://medium.com/tag/design?source=post) > 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 From 250d63eefd55391070cb8ffd3ac37358db121049 Mon Sep 17 00:00:00 2001 From: Starrier <1342878298@qq.com> Date: Wed, 2 Jan 2019 10:22:34 +0800 Subject: [PATCH 05/54] =?UTF-8?q?=E6=95=B0=E6=8D=AE=E6=B5=81=E7=9A=84?= =?UTF-8?q?=E4=B8=8D=E5=90=8C=E5=BA=94=E7=94=A8=E5=9C=BA=E6=99=AF=20?= =?UTF-8?q?=E2=80=94=E2=80=94=20Java=20(#4910)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Starrier:data-stream.md has been done! * update * Update java-data-streaming.md * Update java-data-streaming.md --- TODO1/java-data-streaming.md | 82 ++++++++++++++++++------------------ 1 file changed, 41 insertions(+), 41 deletions(-) diff --git a/TODO1/java-data-streaming.md b/TODO1/java-data-streaming.md index 6758225b584..4851a469583 100644 --- a/TODO1/java-data-streaming.md +++ b/TODO1/java-data-streaming.md @@ -2,76 +2,76 @@ > * 原文作者:[jenkov.com](http://jenkov.com) > * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) > * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/java-data-streaming.md](https://github.com/xitu/gold-miner/blob/master/TODO1/java-data-streaming.md) -> * 译者: -> * 校对者: +> * 译者:[Starrier](https://github.com/Starriers) +> * 校对者:[DeadLion](https://github.com/DeadLion), [kezhenxu94](https://github.com/kezhenxu94) -# Data Streaming +# 数据流 -- [Data Streaming](#data-streaming) - - [Data Streaming Comes in Many Variations](#data-streaming-comes-in-many-variations) - - [Data Streams Decouple Producers and Consumers](#data-streams-decouple-producers-and-consumers) - - [Data Streaming as Data Sharing Mechanism](#data-streaming-as-data-sharing-mechanism) - - [Persistent Data Streams](#persistent-data-streams) - - [Data Streaming Use Cases](#data-streaming-use-cases) - - [Data Streaming For Event Driven Architecture](#data-streaming-for-event-driven-architecture) - - [Data Streaming For Smart Cities and Internet of Things](#data-streaming-for-smart-cities-and-internet-of-things) - - [Data Streaming For Regularly Sampled Data](#data-streaming-for-regularly-sampled-data) - - [Data Streaming For Data Points](#data-streaming-for-data-points) - - [Records, Messages, Events, Samples Etc.](#records-messages-events-samples-etc) +- [数据流](#数据流) + - [数据流可以有很多变量](#数据流可以有很多变量) + - [数据流可以解耦生产者和消费者](#数据流可以解耦生产者和消费者) + - [数据流作为数据共享机制](#据流作为数据共享机制) + - [持久化数据流](#持久化数据流) + - [数据流用例](#数据流的用例) + - [用于事件驱动架构的数据流](#用于事件驱动架构的数据流) + - [用于智能城市和物联网的数据流](#用于智能城市和物联网的数据流) + - [用于常规数据抽样的数据流](#用于常规数据抽样的数据流) + - [用于数据点的数据流](#用于数据点的数据流) + - [记录、消息、事件和抽样等](#记录消息事件和抽样等) -_Data Streaming_ is a data distribution technique where data producers write data records into an ordered data stream from which data consumers can read that data in the same order. 
Here is a simple data streaming diagram illustrating a data producer, a data stream and a data consumer: +**数据流**是一种数据分发技术,数据生产者将数据记录写入有序数据流,数据消费者可以从该数据流中以相同的顺序读取数据。这是一张用于说明数据生产者,数据流和数据消费者的简单数据流图: -![Data stream of records with a data producer and consumer.](http://tutorials.jenkov.com/images/data-streaming/data-streaming-introduction-1.png) +![数据生产者和消费者的数据流记录](http://tutorials.jenkov.com/images/data-streaming/data-streaming-introduction-1.png) -## Data Streaming Comes in Many Variations +## 数据流可以有很多变量 -On the surface, data streaming as a concept my look very simple. Data producers store records to a data stream which are read later by consumers. However, under the surface there are many details that affect what your data streaming system will look like, how it will behave, and what you can do with it. +从“表面”上看,数据流是一种很简单的概念。数据生产者将记录存储到数据流中,随后消费者可以从中读取。不过,透过这层表面,我们可以看到还是存在一些细节操作会影响数据流系统的“外观”,这会进而影响它的行为以及你可以进行的动作。 -Each data streaming product makes a certain set of assumptions about the use cases and processing techniques to support. These assumptions leads to certain design choices, which affect what types of stream processing behaviour you can implement with them. This data streaming tutorial examines many of these design choices, and discuss their consequences for you as a user of products based on these design choices. +每个数据流产品都会对用例和处理技术做一定的假设(用于技术支持)。这些假设会导致某些设计选择最后影响你可以用来实现数据流处理行为的类型。这个数据流教程将检查哪些设计选择,并基于这些设计选择讨论他们对用户产品造成的影响。 -## Data Streams Decouple Producers and Consumers +## 数据流可以解耦生产者和消费者 -Data streaming decouple data producers and data consumers from each other. When a data producer simply writes its data to a data stream, the producer does not need to know the consumers that read the data. Consumers can be added and removed independently of the producer. Consumers can also start and stop or pause and resume their consumption without the data producer needing to know about it. This decoupling simplifies the implementation of both data producers and consumers. +数据流将数据生产者和数据消费者相互解耦。当数据生产者将其数据简单写入数据流时,生产者不需要知道读取数据的消费者。消费者可以独立于生产者进行添加和删除。消费者可以在生产者不知情的情况下,启动/停止或暂停并恢复他们的消费。这种解耦简化了数据生产者和使用者的实现。 -## Data Streaming as Data Sharing Mechanism +## 据流作为数据共享机制 -Data streaming is a very useful mechanism to both store and share data in bigger distributed systems. As mentioned earlier, data producers just send the data to the data stream system. Producers do not need to know anything about the consumers. Consumers can be up, down, added and removed without affecting the producer. +数据流是在大型分布式系统中存储和共享数据的一种非常有用的机制。如前所述,数据生产者只需将数据发送至数据流系统。生产者不需要知道任何关于消费者的事情。消费者可以在不影响生产者的情况下,上线、下线、添加或者移除自己。 -Big companies like LinkedIn use data streaming extensively internally. Uber uses data streaming internally too. Many enterprise level companies are adopting, or have already adopted, data streaming internally. So has many startups. +像 LinkedIn 这样的大公司在内部广泛使用数据流。Uber 也在内部使用数据流。许多企业级公司正在采用或已经采用内部数据流。许多初创公司也是如此。 -## Persistent Data Streams +## 持久化数据流 -A data stream can be persistent, in which case it is sometimes referred to as a _log_ or a _journal_. A persistent data stream has the advantage that the data in the stream can survive a shutdown of the data streaming service, so no data records are lost. +数据流是可以持久化的,在这种情况下,它被称为 **log** 或 **journal**。持久化数据流的优点是数据流中的数据可以在数据流服务关闭后“存活”下来,因此数据记录不会被丢失。 -Persistent data streaming services can typically hold larger amounts of historic data than a data streaming service that only holds records in memory. 
Some data streaming services can even hold historic data all the way back to the first record written to the data stream. Others only hold e.g. a number of days of historic data. +相比于在内存中保存记录的数据流服务相比,持久化数据流服务通常可以保存更多的历史数据。有些数据流保存的历史数据甚至可以追溯到写入数据流的第一条记录。有些只保存部分历史数据。 -In the cases where a persistent data stream holds the full history of records, consumers can replay all these records and recreate their internal state based on these records. In case a consumer discovers a bug in its own code, it can correct that code and replay the data stream to recreate its internal database. +在持久化数据流保存完整历史记录的情况下,消费者可以重复处理所有记录,可以基于这些记录重建它们的内部状态。如果消费者在自己的代码中发现了 BUG,它就可以更正代码然后重现数据流来重建内部数据库。 -## Data Streaming Use Cases +## 数据流用例 -Data streaming is a quite versatile concept which can be used to support many different use cases. In this section I will cover some of the more commonly used use cases for data streaming. +数据流是一个非常通用的概念,它可以用于支持多种不同的用例。在本节中,我将介绍一些更常用的数据流用例。 -### Data Streaming For Event Driven Architecture +### 用于事件驱动架构的数据流 -Data streaming is often used to implement [event driven architecture](http://tutorials.jenkov.com/software-architecture/event-driven-architecture.html). The events are written by event producers as records to some data streaming system from which they can be read by event consumers. +数据流常用于[事件驱动架构](http://tutorials.jenkov.com/software-architecture/event-driven-architecture.html)。事件由事件生产者作为记录写入某些数据流系统, 事件消费者可以从中读取这些事件。 -### Data Streaming For Smart Cities and Internet of Things +### 用于智能城市和物联网的数据流 -Data streaming can also be used to stream data from sensors mounted around a _Smart City_, from sensors inside a _smart factory_ or from other _Internet of Things_ devices. Values, like temperature, pollution levels etc. can be sampled from devices regularly and written to a data stream. Data consumers can read the samples from the data stream when needed. +数据流也可以应用于传输在**智能城市**周围的传感器的数据,用于**智能工厂**内传感器或者来自其他**物联网**设备传感器的流数据。像温度,污染程度等这样的数值可以定期从设备中采样并写入数据流。数据消费者可以在需要时从数据流中读取样本。 -### Data Streaming For Regularly Sampled Data +### 用于常规数据抽样的数据流 -Sensors in a smart city, and Internet of Things devices, are just two examples of data sources which can be regularly sampled and made available via data streaming. But there are many other types of data which can be sampled regularly and streamed. For instance, currency exchange rates or stock prices can be sampled and streamed too. Poll numbers can be sampled and streamed regularly too. +智能城市中传感器和物联网设备只是数据源的两个例子,这些数据源可以定期采样并通过数据流提供。还有许多其他类型的数据可以定期采样并以流形式提供。例如,货币汇率或股票价格也可以抽样和流传输。民意数值也可以定期采样和流式传输。 -### Data Streaming For Data Points +### 用于数据点的数据流 -In the example of poll numbers, you could decide to stream each individual answer to the poll, rather than stream the regularly sampled totals. In some scenarios where totals are made up from individual data points (like polls) it can sometimes make more sense to stream the individual data points rater than the calculated totals. It depends on the concrete use case, and on other factors, like whether the individual data points are anonymous or contains private, personal information which should not be shared. +在民调支持率的事例中,你可以决定每个独立答案将要流向的民意投票流中,而不用流向定期抽样的总数。由独立数据点(如投票)组成总数有时会比计算总数来得更有意义。这取决于具体的用例和其他因素,例如单个数据点是匿名的还是包含不应该共享的私有的个人信息。 -## Records, Messages, Events, Samples Etc. +## 记录、消息、事件和抽样等。 -Data streaming records are sometimes referred to as messages, events, samples, objects and other terms. 
What term is used depends on the concrete use case of the data streaming, and how the producers and consumers process and react to the data. It will normally be reasonably clear from the use case what term it makes sense to refer to records by. +数据流记录有时被称为消息、事件、样本和其他术语。使用哪个术语取决于数据流的具体用例,以及生产者和消费者对数据的处理和响应方式。通常情况,从用例中可以比较清楚地知道用例引用记录的具体意义。 -It is worth noting, that the use case also influences what a given record represents. Not all data records are the same. An event is not the same as a sampled value, and cannot always be used in the same way. I will touch this in more detail later in this (and / or other) tutorials. +值得注意的是,用例也会影响给定记录所代表的内容。并非所有的数据记录都是相同的。事件与抽象值不一样,不能总是以相同的方式使用。在本教程(和/或者其他教程)中,我将更详细地讨论这一点。 > 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 From fe788784f17ce97c916e7659d269b966ff75ddca Mon Sep 17 00:00:00 2001 From: LeviDing Date: Wed, 2 Jan 2019 10:30:49 +0800 Subject: [PATCH 06/54] Create getting-the-most-from-the-new-multi-camera-api.md --- ...-the-most-from-the-new-multi-camera-api.md | 318 ++++++++++++++++++ 1 file changed, 318 insertions(+) create mode 100644 TODO1/getting-the-most-from-the-new-multi-camera-api.md diff --git a/TODO1/getting-the-most-from-the-new-multi-camera-api.md b/TODO1/getting-the-most-from-the-new-multi-camera-api.md new file mode 100644 index 00000000000..86e9e2cda99 --- /dev/null +++ b/TODO1/getting-the-most-from-the-new-multi-camera-api.md @@ -0,0 +1,318 @@ +> * 原文地址:[Getting the Most from the New Multi-Camera API](https://medium.com/androiddevelopers/getting-the-most-from-the-new-multi-camera-api-5155fb3d77d9) +> * 原文作者:[Oscar Wahltinez](https://medium.com/@owahltinez?source=post_header_lockup) +> * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) +> * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/getting-the-most-from-the-new-multi-camera-api.md](https://github.com/xitu/gold-miner/blob/master/TODO1/getting-the-most-from-the-new-multi-camera-api.md) +> * 译者: +> * 校对者: + +# Getting the Most from the New Multi-Camera API + +This blog post complements our [Android Developer Summit 2018 talk](https://youtu.be/u38wOv2a_dA), done in collaboration with Vinit Modi, the Android Camera PM, and Emilie Roberts, from the Partner Developer Relations team. Check out our previous blog posts in the series including [camera enumeration](https://medium.com/androiddevelopers/camera-enumeration-on-android-9a053b910cb5), [camera capture sessions and requests](https://medium.com/androiddevelopers/understanding-android-camera-capture-sessions-and-requests-4e54d9150295) and [using multiple camera streams simultaneously](https://medium.com/androiddevelopers/using-multiple-camera-streams-simultaneously-bf9488a29482). + +### Multi-camera use-cases + +Multi-camera was introduced with [Android Pie](https://developer.android.com/about/versions/pie/android-9.0#camera), and since launch a few months ago we are now seeing devices coming to market that support the API like the Google Pixel 3 and Huawei Mate 20 series. Many multi-camera use-cases are tightly coupled with a specific hardware configuration; in other words, not all use-cases will be compatible with every device — which makes multi-camera features a great candidate for [dynamic delivery](https://developer.android.com/studio/projects/dynamic-delivery) of modules. 
Some typical use-cases include: + +* Zoom: switching between cameras depending on crop region or desired focal length +* Depth: using multiple cameras to build a depth map +* Bokeh: using inferred depth information to simulate a DSLR-like narrow focus range + +### Logical and physical cameras + +To understand the multi-camera API, we must first understand the difference between logical and physical cameras; the concept is best illustrated with an example. For instance, we can think of a device with three back-facing cameras and no front-facing cameras as a reference. In this example, each of the three back cameras is considered a _physical camera_. A _logical camera_ is then a grouping of two or more of those physical cameras. The output of the logical camera can be a stream that comes from one of the underlying physical cameras, or a fused stream coming from multiple underlying physical cameras simultaneously; either way that is handled by the camera HAL. + +Many phone manufacturers also develop their first-party camera applications (which usually come pre-installed on their devices). To utilize all of the hardware’s capabilities, they sometimes made use of private or hidden APIs or received special treatment from the driver implementation that other applications did not have privileged access to. Some devices even implemented the concept of logical cameras by providing a fused stream of frames from the different physical cameras but, again, this was only available to certain privileged applications. Often, only one of the physical cameras would be exposed to the framework. The situation for third party developers prior to Android Pie is illustrated in the following diagram: + +![](https://cdn-images-1.medium.com/max/800/0*jHgc12zW0MnFXf8V) + +Camera capabilities typically only available to privileged applications + +Beginning in Android Pie, a few things have changed. For starters, [private APIs are no longer OK](https://developer.android.com/about/versions/pie/restrictions-non-sdk-interfaces) to use in Android apps. Secondly, with the inclusion of [multi-camera support](https://source.android.com/devices/camera/multi-camera) in the framework, Android has been [strongly recommending](https://source.android.com/compatibility/android-cdd#7_5_4_camera_api_behavior) that phone manufacturers expose a logical camera for all physical cameras facing the same direction. As a result, this is what third party developers should expect to see on devices running Android Pie and above: + +![](https://cdn-images-1.medium.com/max/800/0*xnN-9_1XtmuWq-Lx) + +Full developer access to all camera devices starting in Android P + +It is worth noting that what the logical camera provides is entirely dependent on the OEM implementation of the Camera HAL. For example, a device like Pixel 3 implements its logical camera in such a way that it will choose one of its physical cameras based on the requested focal length and crop region. 
+ +### The multi-camera API + +The new API consists in the addition of the following new constants, classes and methods: + +* `CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA` +* `CameraCharacteristics.getPhysicalCameraIds()` +* `CameraCharacteristics.getAvailablePhysicalCameraRequestKeys()` +* `CameraDevice.createCaptureSession(SessionConfiguration config)` +* `CameraCharactersitics.LOGICAL_MULTI_CAMERA_SENSOR_SYNC_TYPE` +* `OutputConfiguration` & `SessionConfiguration` + +Thanks to changes to the [Android CDD](https://source.android.com/compatibility/android-cdd#7_5_4_camera_api_behavior), the multi-camera API also comes with certain expectations from developers. Devices with dual cameras existed prior to Android Pie, but opening more than one camera simultaneously involved trial and error; multi-camera on Android now gives us a set of rules that tell us when we can open a pair of physical cameras as long as they are part of the same logical camera. + +As stated above, we can expect that, in most cases, new devices launching with Android Pie will expose all physical cameras (the exception being more exotic sensor types such as infrared) along with an easier to use logical camera. Also, and very crucially, we can expect that for every combination of streams that are guaranteed to work, one stream belonging to a logical camera can be replaced by **two** streams from the underlying physical cameras. Let’s cover that in more detail with an example. + +### Multiple streams simultaneously + +In our last blog post, we covered extensively the rules for [using multiple streams simultaneously](https://medium.com/androiddevelopers/using-multiple-camera-streams-simultaneously-bf9488a29482) in a single camera. The exact same rules apply for multiple cameras with a notable addition explained in [the documentation](https://developer.android.com/reference/android/hardware/camera2/CameraMetadata#REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA): + +> For each guaranteed stream combination, the logical camera supports replacing one logical [YUV_420_888](https://developer.android.com/reference/android/graphics/ImageFormat.html#YUV_420_888) or raw stream with two physical streams of the same size and format, each from a separate physical camera, given that the size and format are supported by both physical cameras. + +In other words, each stream of type YUV or RAW can be replaced with _two_ streams of identical type and size. So, for example, we could start with a camera stream of the following guaranteed configuration for single-camera devices: + +* Stream 1: YUV type, MAXIMUM size from logical camera `id = 0` + +Then, a device with multi-camera support will allow us to create a session replacing that logical YUV stream with two physical streams: + +* Stream 1: YUV type, MAXIMUM size from physical camera `id = 1` +* Stream 2: YUV type, MAXIMUM size from physical camera `id = 2` + +The trick is that we can replace a YUV or RAW stream with two equivalent streams if and only if those two cameras are part of a logical camera grouping — i.e. listed under [CameraCharacteristics.getPhysicalCameraIds()](https://developer.android.com/reference/android/hardware/camera2/CameraCharacteristics#getPhysicalCameraIds%28%29). + +Another thing to consider is that the guarantees provided by the framework are just the bare minimum required to get frames from more than one physical camera simultaneously. 
We can expect for additional streams to be supported in most devices, sometimes even letting us open multiple physical camera devices independently. Unfortunately, since it’s not a hard guarantee from the framework, doing that will require us to perform per-device testing and tuning via trial and error. + +### Creating a session with multiple physical cameras + +When we interact with physical cameras in a multi-camera enabled device, we should open a single [CameraDevice](https://developer.android.com/reference/android/hardware/camera2/CameraDevice) (the logical camera) and interact with it within a single session, which must be created using the API [CameraDevice.createCaptureSession(SessionConfiguration config)](https://developer.android.com/reference/android/hardware/camera2/CameraDevice#createCaptureSession%28android.hardware.camera2.params.SessionConfiguration%29) available since SDK level 28. Then, the [session configuration](https://developer.android.com/reference/android/hardware/camera2/params/SessionConfiguration) will have a number of [output configurations](https://developer.android.com/reference/android/hardware/camera2/params/OutputConfiguration), each of which will have a set of output targets and, optionally, a desired physical camera ID. + +![](https://cdn-images-1.medium.com/max/800/0*OY88erAolXSr5bA9) + +SessionConfiguration and OutputConfiguration model + +Later, when we dispatch a capture request, said request will have an output target associated with it. The framework will determine which physical (or logical) camera the request will be sent to based on what output target is attached to the request. If the output target corresponds to one of the output targets that was sent as an [output configuration](https://developer.android.com/reference/android/hardware/camera2/params/OutputConfiguration) along with a physical camera ID, then that physical camera will receive and process the request. + +### Using a pair of physical cameras + +One of the most important developer-facing additions to the camera APIs for multi-camera is the ability to identify logical cameras and finding the physical cameras behind them. Now that we understand that we can open physical cameras simultaneously (again, by opening the logical camera and as part of the same session) and the rules for combining streams are clear, we can define a function to help us identify potential pairs of physical cameras that can be used to replace one of the logical camera streams: + +``` +/** +* Helper class used to encapsulate a logical camera and two underlying +* physical cameras +*/ +data class DualCamera(val logicalId: String, val physicalId1: String, val physicalId2: String) + +fun findDualCameras(manager: CameraManager, facing: Int? 
= null): Array { + val dualCameras = ArrayList() + + // Iterate over all the available camera characteristics + manager.cameraIdList.map { + Pair(manager.getCameraCharacteristics(it), it) + }.filter { + // Filter by cameras facing the requested direction + facing == null || it.first.get(CameraCharacteristics.LENS_FACING) == facing + }.filter { + // Filter by logical cameras + it.first.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)!!.contains( + CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA) + }.forEach { + // All possible pairs from the list of physical cameras are valid results + // NOTE: There could be N physical cameras as part of a logical camera grouping + val physicalCameras = it.first.physicalCameraIds.toTypedArray() + for (idx1 in 0 until physicalCameras.size) { + for (idx2 in (idx1 + 1) until physicalCameras.size) { + dualCameras.add(DualCamera( + it.second, physicalCameras[idx1], physicalCameras[idx2])) + } + } + } + + return dualCameras.toTypedArray() +} +``` + +State handling of the physical cameras is controlled by the logical camera. So, to open our “dual camera” we just need to open the logical camera corresponding to the physical cameras that we are interested in: + +``` +fun openDualCamera(cameraManager: CameraManager, + dualCamera: DualCamera, + executor: Executor = AsyncTask.SERIAL_EXECUTOR, + callback: (CameraDevice) -> Unit) { + + cameraManager.openCamera( + dualCamera.logicalId, executor, object : CameraDevice.StateCallback() { + override fun onOpened(device: CameraDevice) = callback(device) + // Omitting for brevity... + override fun onError(device: CameraDevice, error: Int) = onDisconnected(device) + override fun onDisconnected(device: CameraDevice) = device.close() + }) +} +``` + +Up until this point, besides selecting which camera to open, nothing is different compared to what we have been doing to open any other camera in the past. Now it’s time to create a capture session using the new [session configuration](https://developer.android.com/reference/android/hardware/camera2/params/SessionConfiguration) API so we can tell the framework to associate certain targets with specific physical camera IDs: + +``` +/** + * Helper type definition that encapsulates 3 sets of output targets: + * + * 1. Logical camera + * 2. First physical camera + * 3. Second physical camera + */ +typealias DualCameraOutputs = + Triple?, MutableList?, MutableList?> + +fun createDualCameraSession(cameraManager: CameraManager, + dualCamera: DualCamera, + targets: DualCameraOutputs, + executor: Executor = AsyncTask.SERIAL_EXECUTOR, + callback: (CameraCaptureSession) -> Unit) { + + // Create 3 sets of output configurations: one for the logical camera, and + // one for each of the physical cameras. 
+ val outputConfigsLogical = targets.first?.map { OutputConfiguration(it) } + val outputConfigsPhysical1 = targets.second?.map { + OutputConfiguration(it).apply { setPhysicalCameraId(dualCamera.physicalId1) } } + val outputConfigsPhysical2 = targets.third?.map { + OutputConfiguration(it).apply { setPhysicalCameraId(dualCamera.physicalId2) } } + + // Put all the output configurations into a single flat array + val outputConfigsAll = arrayOf( + outputConfigsLogical, outputConfigsPhysical1, outputConfigsPhysical2) + .filterNotNull().flatMap { it } + + // Instantiate a session configuration that can be used to create a session + val sessionConfiguration = SessionConfiguration(SessionConfiguration.SESSION_REGULAR, + outputConfigsAll, executor, object : CameraCaptureSession.StateCallback() { + override fun onConfigured(session: CameraCaptureSession) = callback(session) + // Omitting for brevity... + override fun onConfigureFailed(session: CameraCaptureSession) = session.device.close() + }) + + // Open the logical camera using our previously defined function + openDualCamera(cameraManager, dualCamera, executor = executor) { + + // Finally create the session and return via callback + it.createCaptureSession(sessionConfiguration) + } +} +``` + +At this point, we can refer back to the [documentation](https://developer.android.com/reference/android/hardware/camera2/CameraDevice.html#createCaptureSession%28android.hardware.camera2.params.SessionConfiguration%29) or our [previous blog post](https://medium.com/androiddevelopers/using-multiple-camera-streams-simultaneously-bf9488a29482) to understand which combinations of streams are supported. We just need to remember that those are for multiple streams on a single logical camera, and that the compatibility extends to using the same configuration and replacing one of those streams with two streams from two physical cameras that are part of the same logical camera. + +With the [camera session](https://developer.android.com/reference/android/hardware/camera2/CameraCaptureSession) ready, all that is left to do is dispatching our desired [capture requests](https://developer.android.com/reference/android/hardware/camera2/CaptureRequest). Each target of the capture request will receive its data from its associated physical camera, if any, or fall back to the logical camera. + +### Zoom example use-case + +To tie all of that back to one of the initially discussed use-cases, let’s see how we could implement a feature in our camera app so that users can switch between the different physical cameras to experience a different field-of-view — effectively capturing a different “zoom level”. + +![](https://cdn-images-1.medium.com/max/800/0*WaZN9bicOXI4mpUp) + +Example of swapping cameras for zoom level use-case (from [Pixel 3 Ad](https://www.youtube.com/watch?v=gJtJFEH1Cis)) + +First, we must select the pair of physical cameras that we want to allow users to switch between. For maximum effect, we can search for the pair of cameras that provide the minimum and maximum focal length available, respectively. That way, we select one camera device able to focus on the shortest possible distance and another that can focus at the furthest possible point: + +``` +fun findShortLongCameraPair(manager: CameraManager, facing: Int? = null): DualCamera? 
{ + + return findDualCameras(manager, facing).map { + val characteristics1 = manager.getCameraCharacteristics(it.physicalId1) + val characteristics2 = manager.getCameraCharacteristics(it.physicalId2) + + // Query the focal lengths advertised by each physical camera + val focalLengths1 = characteristics1.get( + CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS) ?: floatArrayOf(0F) + val focalLengths2 = characteristics2.get( + CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS) ?: floatArrayOf(0F) + + // Compute the largest difference between min and max focal lengths between cameras + val focalLengthsDiff1 = focalLengths2.max()!! - focalLengths1.min()!! + val focalLengthsDiff2 = focalLengths1.max()!! - focalLengths2.min()!! + + // Return the pair of camera IDs and the difference between min and max focal lengths + if (focalLengthsDiff1 < focalLengthsDiff2) { + Pair(DualCamera(it.logicalId, it.physicalId1, it.physicalId2), focalLengthsDiff1) + } else { + Pair(DualCamera(it.logicalId, it.physicalId2, it.physicalId1), focalLengthsDiff2) + } + + // Return only the pair with the largest difference, or null if no pairs are found + }.sortedBy { it.second }.reversed().lastOrNull()?.first +} +``` + +A sensible architecture for this would be to have two [SurfaceViews](https://developer.android.com/reference/android/view/SurfaceView), one for each stream, that get swapped upon user interaction so that only one is visible at any given time. In the following code snippet, we demonstrate how to open the logical camera, configure the camera outputs, create a camera session and start two preview streams; leveraging the functions defined previously: + +``` +val cameraManager: CameraManager = ... + +// Get the two output targets from the activity / fragment +val surface1 = ... // from SurfaceView +val surface2 = ... // from SurfaceView + +val dualCamera = findShortLongCameraPair(manager)!! +val outputTargets = DualCameraOutputs( + null, mutableListOf(surface1), mutableListOf(surface2)) + +// Here we open the logical camera, configure the outputs and create a session +createDualCameraSession(manager, dualCamera, targets = outputTargets) { session -> + + // Create a single request which will have one target for each physical camera + // NOTE: Each target will only receive frames from its associated physical camera + val requestTemplate = CameraDevice.TEMPLATE_PREVIEW + val captureRequest = session.device.createCaptureRequest(requestTemplate).apply { + arrayOf(surface1, surface2).forEach { addTarget(it) } + }.build() + + // Set the sticky request for the session and we are done + session.setRepeatingRequest(captureRequest, null, null) +} +``` + +Now all we need to do is provide a UI for the user to switch between the two surfaces, like a button or double-tapping the `SurfaceView`; if we wanted to get fancy we could try performing some form of scene analysis and switch between the two streams automatically. + +### Lens distortion + +All lenses produce a certain amount of distortion. In Android, we can query the distortion created by lenses using [CameraCharacteristics.LENS_DISTORTION](https://developer.android.com/reference/android/hardware/camera2/CameraCharacteristics#LENS_DISTORTION) (which replaces the now-deprecated [CameraCharacteristics.LENS_RADIAL_DISTORTION](https://developer.android.com/reference/android/hardware/camera2/CameraCharacteristics#LENS_RADIAL_DISTORTION)). 
For logical cameras, it is reasonable to expect that the distortion will be minimal and our application can use the frames more-or-less as they come from the camera. However, for physical cameras, we should expect potentially very different lens configurations — especially on wide lenses. + +Some devices may implement automatic distortion correction via [CaptureRequest.DISTORTION_CORRECTION_MODE](https://developer.android.com/reference/android/hardware/camera2/CaptureRequest#DISTORTION_CORRECTION_MODE). It is good to know that distortion correction defaults to being on for most devices.The documentation has some more detailed information: + +> FAST/HIGH_QUALITY both mean camera device determined distortion correction will be applied. HIGH_QUALITY mode indicates that the camera device will use the highest-quality correction algorithms, even if it slows down capture rate. FAST means the camera device will not slow down capture rate when applying correction. FAST may be the same as OFF if any correction at all would slow down capture rate […] The correction only applies to processed outputs such as YUV, JPEG, or DEPTH16 […] This control will be on by default on devices that support this control. + +If we wanted to take a still shot from a physical using the highest possible quality, then we should try to set correction mode to HIGH_QUALITY if it’s available. Here’s how we should be setting up our capture request: + +``` +val cameraSession: CameraCaptureSession = ... + +// Use still capture template to build our capture request +val captureRequest = cameraSession.device.createCaptureRequest( + CameraDevice.TEMPLATE_STILL_CAPTURE) + +// Determine if this device supports distortion correction +val characteristics: CameraCharacteristics = ... +val supportsDistortionCorrection = characteristics.get( + CameraCharacteristics.DISTORTION_CORRECTION_AVAILABLE_MODES)?.contains( + CameraMetadata.DISTORTION_CORRECTION_MODE_HIGH_QUALITY) ?: false + +if (supportsDistortionCorrection) { + captureRequest.set( + CaptureRequest.DISTORTION_CORRECTION_MODE, + CameraMetadata.DISTORTION_CORRECTION_MODE_HIGH_QUALITY) +} + +// Add output target, set other capture request parameters... + +// Dispatch the capture request +cameraSession.capture(captureRequest.build(), ...) +``` + +Keep in mind that setting a capture request in this mode will have a potential impact on the frame rate that can be produced by the camera, which is why we are only setting the distortion correction in still image captures. + +### To be continued + +Phew! We covered a bunch of things related to the new multi-camera APIs: + +* Potential use-cases +* Logical vs physical cameras +* Overview of the multi-camera API +* Extended rules for opening multiple camera streams +* How to setup camera streams for a pair of physical cameras +* Example “zoom” use-case swapping cameras +* Correcting lens distortion + +Note that we have not covered frame synchronization and computing depth maps. 
That is a topic worthy of its own blog post 🙂 + +> 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 + + +--- + +> [掘金翻译计划](https://github.com/xitu/gold-miner) 是一个翻译优质互联网技术文章的社区,文章来源为 [掘金](https://juejin.im) 上的英文分享文章。内容覆盖 [Android](https://github.com/xitu/gold-miner#android)、[iOS](https://github.com/xitu/gold-miner#ios)、[前端](https://github.com/xitu/gold-miner#前端)、[后端](https://github.com/xitu/gold-miner#后端)、[区块链](https://github.com/xitu/gold-miner#区块链)、[产品](https://github.com/xitu/gold-miner#产品)、[设计](https://github.com/xitu/gold-miner#设计)、[人工智能](https://github.com/xitu/gold-miner#人工智能)等领域,想要查看更多优质译文请持续关注 [掘金翻译计划](https://github.com/xitu/gold-miner)、[官方微博](http://weibo.com/juejinfanyi)、[知乎专栏](https://zhuanlan.zhihu.com/juejinfanyi)。 From 551c9ad69ce211902193e3861639a8a55bd9d9bb Mon Sep 17 00:00:00 2001 From: LeviDing Date: Wed, 2 Jan 2019 10:32:29 +0800 Subject: [PATCH 07/54] Create reducing-dimensionality-from-dimensionality-reduction-techniques.md --- ...rom-dimensionality-reduction-techniques.md | 372 ++++++++++++++++++ 1 file changed, 372 insertions(+) create mode 100644 TODO1/reducing-dimensionality-from-dimensionality-reduction-techniques.md diff --git a/TODO1/reducing-dimensionality-from-dimensionality-reduction-techniques.md b/TODO1/reducing-dimensionality-from-dimensionality-reduction-techniques.md new file mode 100644 index 00000000000..4d9df963612 --- /dev/null +++ b/TODO1/reducing-dimensionality-from-dimensionality-reduction-techniques.md @@ -0,0 +1,372 @@ +> * 原文地址:[Reducing Dimensionality from Dimensionality Reduction Techniques](https://towardsdatascience.com/reducing-dimensionality-from-dimensionality-reduction-techniques-f658aec24dfe) +> * 原文作者:[Elior Cohen](https://towardsdatascience.com/@eliorcohen?source=post_header_lockup) +> * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) +> * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/reducing-dimensionality-from-dimensionality-reduction-techniques.md](https://github.com/xitu/gold-miner/blob/master/TODO1/reducing-dimensionality-from-dimensionality-reduction-techniques.md) +> * 译者: +> * 校对者: + +# Reducing Dimensionality from Dimensionality Reduction Techniques + +In this post I will do my best to demystify three dimensionality reduction techniques; PCA, t-SNE and Auto Encoders. My main motivation for doing so is that mostly these methods are treated as black boxes and therefore sometime are misused. Understanding them will give the reader the tools to decide which one to use, when and how. + +I’ll do so by going over the internals of each methods and code from scratch each method (excluding t-SNE) using TensorFlow. Why TensorFlow? Because it’s mostly used for deep learning, lets give it some other challenges :) +Code for this post can be found in [this notebook](https://github.com/eliorc/Medium/blob/master/PCA-tSNE-AE.ipynb). + +* * * + +### Motivation + +When dealing with real problems and real data we often deal with high dimensional data that can go up to millions. + +While in its original high dimensional structure the data represents itself best sometimes we might need to reduce its dimensionality. +The need to reduce dimensionality is often associated with visualizations (reducing to 2–3 dimensions so we can plot it) but that is not always the case. + +Sometimes we might value performance over precision so we could reduce 1,000 dimensional data to 10 dimensions so we can manipulate it faster (eg. calculate distances). 
+ +The need to reduce dimensionality at times is real and has many applications. + +Before we start, if you had to choose a dimensionality reduction technique for the following cases, which would you choose? + +1. Your system measures distance using the cosine similarity, but you need to visualize it to some non-technical board members which are probably not familiar with cosine similarity at all — how would you do that? + +2. You have the need to compress the data to as little dimensions as you can and the constraint you were given is to preserve approx. 80% of the data, how would you go about that? + +3. You have a database of some kind of data that has been collected through a lot of time, and data (of similar type) keeps coming in from time to time. + +You need to reduce the data you have and any new data as it comes, which method would you choose? + +My hope in this post to help you understand dimensionality reduction better, so you would feel comfortable with questions similar to those. + +Lets start with PCA. + +* * * + +### **PCA** + +PCA (**P**rincipal **C**omponent **A**nalysis) is probably the oldest trick in the book. + +PCA is well studied and there are numerous ways to get to the same solution, we will talk about two of them here, Eigen decomposition and Singular Value Decomposition (SVD) and then we will implement the SVD way in TensorFlow. + +From now on, X will be our data matrix, of shape (n, p) where n is the number of examples, and p are the dimensions. + +So given X, both methods will try to find, in their own way, a way to manipulate and decompose X in a manner that later on we could multiply the decomposed results to represent maximum information in less dimensions. I know I know, sounds horrible but I will spare you most of the math but keep the parts that contribute to the understanding of the method pros and cons. + +So Eigen decomposition and SVD are both ways to decompose matrices, lets see how they help us in PCA and how they are connected. + +Take a glance at the flow chart below and I will explain right after. + +![](https://cdn-images-1.medium.com/max/800/1*xnomew0zpnxftxutG8xoFw.png) + +Figure 1 PCA workflow + +So why should you care about this? Well there is something very fundamental about the two procedures that tells us a lot about PCA. + +As you can see both methods are pure linear algebra, that basically tells us that using PCA is looking at the real data, from a different angle — this is unique to PCA since the other methods start with random representation of lower dimensional data and try to get it to behave like the high dimensional data. + +Some other notable things are that all operations are linear and with SVD are super-super fast. + +Also given the same data PCA will always give the same answer (which is not true about the other two methods). + +Notice how in SVD we choose the r (r is the number of dimensions we want to reduce to) left most values of Σ to lower dimensionality? +Well there is something special about Σ. + +Σ is a diagonal matrix, there are p (number of dimensions) diagonal values (called singular values) and their magnitude indicates how significant they are to preserving the information. + +So we can choose to reduce dimensionality, to the number of dimensions that will preserve approx. given amount of percentage of the data and I will demonstrate that in the code (e.g. gives us the ability to reduce dimensionality with a constraint of losing a max of 15% of the data). 
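To make the idea concrete before the full class that follows, here is a tiny sketch with made-up singular values (purely for illustration, they are not taken from the iris data): normalize the singular values, accumulate them, and keep the smallest number of dimensions whose accumulated share reaches the requested threshold.

```
import numpy as np

# Made-up singular values for a 4-dimensional dataset (illustration only)
singular_values = np.array([20.0, 6.0, 3.0, 1.0])

# Normalize and accumulate: each entry is the share of information kept
# if we cut the decomposition off at that many dimensions
ladder = np.cumsum(singular_values / singular_values.sum())
print(ladder)  # approx. [0.667, 0.867, 0.967, 1.0]

# Keeping ~80% of the information therefore requires the first 2 dimensions
n_dimensions = int(np.searchsorted(ladder, 0.8) + 1)
print(n_dimensions)  # 2
```

This is the same "ladder" logic that the `reduce` method below implements through its `keep_info` argument.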
+ +As you will see, coding this in TensorFlow is pretty simple — what we are are going to code is a class that has `fit` method and a `reduce` method which we will supply the dimensions to. + +### CODE (PCA) + +Lets see how the `fit` method looks like, given `self.X` contains the data and `self.dtype=tf.float32` + +``` +def fit(self): + self.graph = tf.Graph() + with self.graph.as_default(): + self.X = tf.placeholder(self.dtype, shape=self.data.shape) + + # Perform SVD + singular_values, u, _ = tf.svd(self.X) + + # Create sigma matrix + sigma = tf.diag(singular_values) + + with tf.Session(graph=self.graph) as session: + self.u, self.singular_values, self.sigma = session.run([u, singular_values, sigma], + feed_dict={self.X: self.data}) +``` + +So the goal of `fit` is to create our Σ and U for later use. +We’ll start with the line `tf.svd` which gives us the singular values, which are the diagonal values of what was denoted as Σ in Figure 1, and the matrices U and V. + +Then `tf.diag` is TensorFlow’s way of converting a 1D vector, to a diagonal matrix, which in our case will result in Σ. + +At the end of the `fit` call we will have the singular values, Σ and U. + +Now lets lets implement `reduce`. + +``` +def reduce(self, n_dimensions=None, keep_info=None): + if keep_info: + # Normalize singular values + normalized_singular_values = self.singular_values / sum(self.singular_values) + + # Create the aggregated ladder of kept information per dimension + ladder = np.cumsum(normalized_singular_values) + + # Get the first index which is above the given information threshold + index = next(idx for idx, value in enumerate(ladder) if value >= keep_info) + 1 + n_dimensions = index + + with self.graph.as_default(): + # Cut out the relevant part from sigma + sigma = tf.slice(self.sigma, [0, 0], [self.data.shape[1], n_dimensions]) + + # PCA + pca = tf.matmul(self.u, sigma) + + with tf.Session(graph=self.graph) as session: + return session.run(pca, feed_dict={self.X: self.data}) +``` + +So as you can see `reduce` gets either `keep_info` or `n_dimensions` (I didn’t implement the input check where **_only one must be supplied_**). +If we supply `n_dimensions` it will simply reduce to that number, but if we supply `keep_info` which should be a float between 0 and 1, we will preserve that much information from the original data (0.9 — preserve 90% of the data). +In the first ‘if’, we normalize and check how many singular values are needed, basically figuring out `n_dimensions` out of `keep_info`. + +In the graph, we just slice the Σ (sigma) matrix for as much data as we need and perform the matrix multiplication. + +So lets try it out on the iris dataset, which is (150, 4) dataset of 3 species of iris flowers. + +``` +from sklearn import datasets +import matplotlib.pyplot as plt +import seaborn as sns + +tf_pca = TF_PCA(iris_dataset.data, iris_dataset.target) +tf_pca.fit() +pca = tf_pca.reduce(keep_info=0.9) # Results in 2 dimensions + +color_mapping = {0: sns.xkcd_rgb['bright purple'], 1: sns.xkcd_rgb['lime'], 2: sns.xkcd_rgb['ochre']} +colors = list(map(lambda x: color_mapping[x], tf_pca.target)) + +plt.scatter(pca[:, 0], pca[:, 1], c=colors) +``` + +![](https://cdn-images-1.medium.com/max/1000/1*-am5UfbZoJkUA4C8z5d0vQ.png) + +Figure 2 Iris dataset PCA 2 dimensional plot + +Not so bad huh? + +* * * + +### t-SNE + +t-SNE is a relatively (to PCA) new method, originated in 2008 ([original paper link](http://www.jmlr.org/papers/volume9/vandermaaten08a/vandermaaten08a.pdf)). 
+ +It is also more complicated to understand than PCA, so bear with me. +Our notation for t-SNE will be as follows, X will be the original data, P will be a matrix that holds affinities (~distances) between points in X in the high (original) dimensional space, and Q will be the matrix that holds affinities between data points the low dimensional space. If we have n data samples, both Q and P will be n by n matrices (distance from any point to any point including itself). + +Now t-SNE has its “special ways” (which we will get to shortly) to measure distances between things, a certain way to measure distance between data points in the high dimensional space, another way for data points in the low dimensional space and a third way for measuring the distance between P and Q. +Taken from the original paper, the similarity between one point x_j to another point x_i is given by “_p_j|i, that x_i would pick x_j as its neighbor if neighbors were picked in proportion to their probability density under a Gaussian centered at x__i”. + +“Whaaat?” don’t worry about it, as I said, t-SNE has its ways of measuring distance so we will take a look at the formulas for measuring distances (affinities) and pick out the insights we need from them to understand t-SNE’s behavior. + +High level speaking, this is how the algorithm works (notice that unlike PCA, it is an iterative algorithm). + +![](https://cdn-images-1.medium.com/max/800/1*XJdz_4UoWgo4L_2c9gCVOg.png) + +Figure 3 t-SNE workflow + +Lets go over this step by step. + +The algorithm accepts two inputs, one is the data itself, and the other is called the perplexity (Perp). + +Perplexity simply put is how you want to balance the focus between local (close points) and global structure of your data in the optimization process— the article suggests to keep this between 5 and 50. + +Higher perplexity means a data point will consider more points as its close neighbors and lower means less. + +Perplexity really affects how your visualizations will come up and be careful with it because it can create misleading phenomenons in the visualized low dimensional data — I strongly suggest reading this great post about [how to use t-SNE properly](http://distill.pub/2016/misread-tsne/) which covers the effects of different perplexities. + +Where does this perplexity comes in place? It is the used to figure out σ_i in equation (1) and since they have a monotonic connection it is found by binary search. + +So σ_i is basically figured out for us differently, using the perplexity we supply to the algorithm. + +Lets see what the equations tells us about t-SNE. + +A thing to know before we explore equations (1) and (2) is that p_ii is set to 0 and so does q_ii (even though the equations will not output zero if we apply them on two similar points, this is just a given). + +So looking at equations (1) and (2) I want you to notice, that if two points are close (in the high dimensional representation) the numerators will yield a value around 1 while if they are very far apart we would get an infinitesimal — this will help us understand the cost function later. + +Already now we can see a couple of things about t-SNE. + +One is that interpreting distance in t-SNE plots can be problematic, because of the way the affinities equations are built. 
+ +This means that distance between clusters and cluster sizes can be misleading and will be affected by the chosen perplexity too (again I will refer you to the great article you can find in the paragraph above to see visualizations of these phenomenons). + +Second thing is notice how in equation (1) we basically compute the euclidean distance between points? There is something very powerful in that, we can switch that distance measure with any distance measure of our liking, cosine distance, Manhattan distance or any kind of measurement you want (as long as it keeps the [space metric](https://en.wikipedia.org/wiki/Metric_space)) and keep the low dimensional affinities the same — this will result in plotting complex distances, in an euclidean way. + +For example, if you are a CTO and you have some data that you measure its distance by the cosine similarity and your CEO want you to present some kind of plot representing the data, I’m not so sure you’ll have the time to explain the board what is cosine similarity and how to interpret clusters, you can simply plot cosine similarity clusters, as euclidean distance clusters using t-SNE — and that’s pretty awesome I’d say. + +In code, you can achieve this in `scikit-learn` by supplying a distance matrix to the `TSNE` method. + +OK so now that we know that p_ij/q_ij value is bigger when x_i and x_j are close, and very small when they are large. + +Lets see how does that affect our cost function (which is called the [Kullback–Leibler divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence)) by plotting it and examining equation (3) without the summation part. + +![](https://cdn-images-1.medium.com/max/1000/1*9UPHwkkdnZmGuweNgKoE-w.png) + +Figure 4 t-SNE cost function without the summation part + +Its pretty hard to catch, but I did put the axis names there. +So as you can see, the cost function is asymmetric. + +It yields a great cost to points that are nearby in the high dimensional space (p axis) but are represented by far away points in the low dimensional space while a smaller cost for far apart points in the high dimensional space represented by near points in the low dimensional space. + +This indicates even more the problem of distance interpret ability in t-SNE plots. + +Lets t-SNE the iris dataset and see what happens with different perplexities + +``` +model = TSNE(learning_rate=100, n_components=2, random_state=0, perplexity=5) +tsne5 = model.fit_transform(iris_dataset.data) + +model = TSNE(learning_rate=100, n_components=2, random_state=0, perplexity=30) +tsne30 = model.fit_transform(iris_dataset.data) + +model = TSNE(learning_rate=100, n_components=2, random_state=0, perplexity=50) +tsne50 = model.fit_transform(iris_dataset.data) + +plt.figure(1) +plt.subplot(311) +plt.scatter(tsne5[:, 0], tsne5[:, 1], c=colors) + +plt.subplot(312) +plt.scatter(tsne30[:, 0], tsne30[:, 1], c=colors) + +plt.subplot(313) +plt.scatter(tsne50[:, 0], tsne50[:, 1], c=colors) + +plt.show() +``` + +![](https://cdn-images-1.medium.com/max/1000/1*15Rz_rhZ_GipJaE4WSa7bw.png) + +Figure 5 t-SNE on iris dataset, different perplexities + +As we understood from the math, you can see that given a good perplexity the data does cluster, but notice the sensibility to the hyperparameters (I couldn’t find clusters without supplying learning rate to the gradient descent). 
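As a small aside tying back to the earlier point about swapping the distance measure: below is a minimal sketch, not from the original notebook, of how one might feed `scikit-learn`'s `TSNE` a precomputed cosine distance matrix, reusing the `iris_dataset` and `colors` variables defined in the snippets above. The parameter values are only placeholders to tweak.

```
from sklearn.manifold import TSNE
from sklearn.metrics import pairwise_distances

# Cosine distances between all pairs of samples (150 x 150 matrix)
distances = pairwise_distances(iris_dataset.data, metric='cosine')

# Tell t-SNE to build the high dimensional affinities from our precomputed distances
model = TSNE(metric='precomputed', init='random', learning_rate=100,
             n_components=2, random_state=0, perplexity=30)
tsne_cosine = model.fit_transform(distances)

# Cosine-similarity structure, plotted as euclidean-looking 2D clusters
plt.scatter(tsne_cosine[:, 0], tsne_cosine[:, 1], c=colors)
plt.show()
```

Since only the distance matrix changes, the low dimensional affinities and the rest of the optimization stay exactly the same.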
+

Before we move on, I want to say that t-SNE is a very powerful method if you apply it correctly. Don’t take what you’ve learned here as purely negative; just be aware of how to use it.

Next up are Auto Encoders.

* * *

### Auto Encoders

While PCA and t-SNE are methods, Auto Encoders are a family of methods.
Auto Encoders are neural networks where the network aims to predict the input (the output is trained to be as similar as possible to the input) by using fewer hidden nodes (at the end of the encoder) than input nodes, encoding as much information as it can into the hidden nodes.

A basic auto encoder for our 4 dimensional iris dataset would look like Figure 6, where the lines connecting the input layer to the hidden layer are called the “encoder” and the lines between the hidden layer and the output layer the “decoder”.

![](https://cdn-images-1.medium.com/max/800/1*cZUlhHVpPzsLv5AwuLhtEg.png)

Figure 6 Basic auto encoder for the iris dataset

So why are Auto Encoders a family? Because the only constraint we have is that the input and output layers must be of the same dimension; inside, we can create any architecture we want in order to best encode our high dimensional data.

Auto Encoders start with some random low dimensional representation (z) and gradient descend towards their solution by changing the weights that connect the input layer to the hidden layer, and the hidden layer to the output layer.

By now we can already learn something important about Auto Encoders: because we control the inside of the network, we can engineer encoders that are able to pick up very complex relationships between features.

Another great plus of Auto Encoders is that, since by the end of the training we have the weights that lead to the hidden layer, we can train on certain input, and if later on we come across another data point we can reduce its dimensionality using those weights without re-training — but be careful with that, as this will only work if the data point is somewhat similar to the data we trained on.

Exploring the math of Auto Encoders could be simple in this case but not very useful, since the math will be different for every architecture and cost function we choose.

But if we take a moment and think about the way the weights of the Auto Encoder will be optimized, we understand that the cost function we define has a very important role.

Since the Auto Encoder will use the cost function to determine how good its predictions are, we can use that power to emphasize what we want to.
Whether we want euclidean distance or other measurements, we can reflect them in the encoded data through the cost function, using different distance methods, asymmetric functions and whatnot.

More power lies in the fact that, since this is essentially a neural network, we can even weight classes and samples as we train to give more significance to certain phenomena in the data.

This gives us great flexibility in the way we compress our data.

Auto Encoders are very powerful, and have shown some great results in comparison to other methods in some cases (just Google “PCA vs Auto Encoders”), so they are definitely a valid approach.
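As a small sketch of that last idea, here is one way a cost function could weight some features more heavily than others in TensorFlow. The weight values and tensor names below are invented for the illustration, and in the real network `x_reconstructed` would be the decoder output rather than a placeholder.

```
import tensorflow as tf

# Hypothetical per-feature weights: errors on the first two features count double
feature_weights = tf.constant([2.0, 2.0, 1.0, 1.0])

x_original = tf.placeholder(tf.float32, shape=(None, 4))       # network input
x_reconstructed = tf.placeholder(tf.float32, shape=(None, 4))  # stand-in for the decoder output

# Weighted squared error instead of a plain mean squared error
weighted_cost = tf.reduce_mean(
    feature_weights * tf.square(tf.subtract(x_reconstructed, x_original)))
```

Minimizing `weighted_cost` instead of a plain squared error nudges the encoder to preserve the emphasized features more faithfully.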
+

Let’s build a basic Auto Encoder for the iris dataset in TensorFlow and plot it.

### CODE (Auto Encoder)

Again, we’ll split the code into `fit` and `reduce`.

```
def fit(self, n_dimensions):
    graph = tf.Graph()
    with graph.as_default():

        # Input variable
        X = tf.placeholder(self.dtype, shape=(None, self.features.shape[1]))

        # Network variables
        encoder_weights = tf.Variable(tf.random_normal(shape=(self.features.shape[1], n_dimensions)))
        encoder_bias = tf.Variable(tf.zeros(shape=[n_dimensions]))

        decoder_weights = tf.Variable(tf.random_normal(shape=(n_dimensions, self.features.shape[1])))
        decoder_bias = tf.Variable(tf.zeros(shape=[self.features.shape[1]]))

        # Encoder part
        encoding = tf.nn.sigmoid(tf.add(tf.matmul(X, encoder_weights), encoder_bias))

        # Decoder part
        predicted_x = tf.nn.sigmoid(tf.add(tf.matmul(encoding, decoder_weights), decoder_bias))

        # Define the cost function and optimizer to minimize squared error
        cost = tf.reduce_mean(tf.pow(tf.subtract(predicted_x, X), 2))
        optimizer = tf.train.AdamOptimizer().minimize(cost)

    with tf.Session(graph=graph) as session:
        # Initialize global variables
        session.run(tf.global_variables_initializer())

        for batch_x in batch_generator(self.features):
            self.encoder['weights'], self.encoder['bias'], _ = session.run([encoder_weights, encoder_bias, optimizer],
                                                                           feed_dict={X: batch_x})
```

Nothing special here; the code is pretty self explanatory, and we save our encoder’s weights and biases so that we can reduce the data in the `reduce` method, which comes next.

```
def reduce(self):
    return np.add(np.matmul(self.features, self.encoder['weights']), self.encoder['bias'])
```

Boom, it’s that simple :)

Let’s see how it did (batch size 50, 1000 epochs).

![](https://cdn-images-1.medium.com/max/1000/1*2kgAE0D1NcQsRvt76Og2yw.png)

Figure 7 Simple Auto Encoder output on iris dataset

We could continue to play with the batch size, the number of epochs and different optimizers, even without changing the architecture, and we would get varying results — this is just what came off the bat.

Notice that I just chose some arbitrary values for the hyperparameters; in a real scenario we would measure how well we are doing with cross validation or test data and find the best setting.

### Final Words

Posts like this usually end with some kind of comparison chart, pros and cons, etc.
But that is the exact opposite of what I was trying to achieve.

My goal was to expose the inner workings of the methods so the reader is able to figure out and understand the positives and negatives of each one.
I hope you enjoyed the read and learned something new.

Scroll back up to the beginning of the post, to those three questions. Do you feel any more comfortable with them now?
+ +> 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 + + +--- + +> [掘金翻译计划](https://github.com/xitu/gold-miner) 是一个翻译优质互联网技术文章的社区,文章来源为 [掘金](https://juejin.im) 上的英文分享文章。内容覆盖 [Android](https://github.com/xitu/gold-miner#android)、[iOS](https://github.com/xitu/gold-miner#ios)、[前端](https://github.com/xitu/gold-miner#前端)、[后端](https://github.com/xitu/gold-miner#后端)、[区块链](https://github.com/xitu/gold-miner#区块链)、[产品](https://github.com/xitu/gold-miner#产品)、[设计](https://github.com/xitu/gold-miner#设计)、[人工智能](https://github.com/xitu/gold-miner#人工智能)等领域,想要查看更多优质译文请持续关注 [掘金翻译计划](https://github.com/xitu/gold-miner)、[官方微博](http://weibo.com/juejinfanyi)、[知乎专栏](https://zhuanlan.zhihu.com/juejinfanyi)。 From cc5f04579b2f1613286ba1a6620dd3f56b14bb28 Mon Sep 17 00:00:00 2001 From: LeviDing Date: Wed, 2 Jan 2019 11:30:48 +0800 Subject: [PATCH 08/54] Update front-end.md --- front-end.md | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/front-end.md b/front-end.md index 9a1dd275c41..126733a7184 100644 --- a/front-end.md +++ b/front-end.md @@ -1,3 +1,10 @@ +* [为什么我放弃了 React 而转向 Vue](https://juejin.im/post/5c2c27096fb9a049f66c3672) ([EmilyQiRabbit](https://github.com/EmilyQiRabbit) 翻译) +* [创建并发布一个小而美的 npm 包](https://juejin.im/post/5c26c1b65188252dcb312ad6) ([calpa](https://github.com/calpa) 翻译) +* [2019 年你应该要知道的 11 个 React UI 组件库](https://juejin.im/post/5c260f13e51d45473a5c07a4) ([ElizurHz](https://github.com/ElizurHz) 翻译) +* [5 款工具助力 React 快速开发](https://juejin.im/post/5c242e3f51882573d90678ad) ([Ivocin](https://github.com/Ivocin) 翻译) +* [React 路由和 React 组件的爱恨情仇](https://juejin.im/post/5c2217abe51d4570f1453cad) ([Augustwuli](https://github.com/Augustwuli) 翻译) +* [误解 ES6 模块,升级 Babel 的一个解决方案(泪奔)](https://juejin.im/post/5c223f4ce51d452626296b5d) ([Starriers](https://github.com/Starriers) 翻译) +* [继承 JavaScript 类中的静态属性](https://juejin.im/post/5c2217fc6fb9a049b348039d) ([Augustwuli](https://github.com/Augustwuli) 翻译) * [用 Flask 和 Vue.js 开发一个单页面应用](https://juejin.im/post/5c1f7289f265da612e28a214) ([Mcskiller](https://github.com/Mcskiller) 翻译) * [用 React 和 Node.js 实现受保护的路由和权限验证](https://juejin.im/post/5c1cdaaa6fb9a049aa6f0f8b) ([ElizurHz](https://github.com/ElizurHz) 翻译) * [理解 React Render Props 和 HOC](https://juejin.im/post/5c1f8ded6fb9a049b506ce94) ([wuzhengyan2015](https://github.com/wuzhengyan2015) 翻译) From a539011ec16b90a118e045c822190f2e9c7bb2a6 Mon Sep 17 00:00:00 2001 From: LeviDing Date: Wed, 2 Jan 2019 11:31:14 +0800 Subject: [PATCH 09/54] =?UTF-8?q?=E6=9B=B4=E6=96=B0=2012=20=E6=9C=88?= =?UTF-8?q?=E4=BB=BD=E5=89=8D=E7=AB=AF=E5=88=86=E7=B1=BB=E6=96=87=E7=AB=A0?= =?UTF-8?q?=E7=BF=BB=E8=AF=91=E6=A0=A1=E5=AF=B9=E7=A7=AF=E5=88=86?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- integrals.md | 56 ++++++++++++++++++++++++++++++++++++++++------------ 1 file changed, 43 insertions(+), 13 deletions(-) diff --git a/integrals.md b/integrals.md index 9ec39e40a61..674b6c748d9 100644 --- a/integrals.md +++ b/integrals.md @@ -3806,10 +3806,11 @@ |[AI 能解决你的 UX 设计问题吗?](https://juejin.im/post/5992aa306fb9a03c445df727)|校对|1| |[REST API 已死,GraphQL 长存](https://juejin.im/post/5991667b518825485d28dfb1)|校对|2| -## 译者:[calpa](https://github.com/calpa) 历史贡献积分:30 当前积分:30 年度积分:25 +## 译者:[calpa](https://github.com/calpa) 历史贡献积分:34.5 当前积分:34.5 年度积分:29.5 |文章|类型|积分| |------|-------|-------| +|[创建并发布一个小而美的 npm 包](https://juejin.im/post/5c26c1b65188252dcb312ad6)|翻译|4.5| |推荐优秀英文文章两篇|奖励|2| |[Rust 开发完整的 Web 
应用程序](https://juejin.im/post/5bd66dee6fb9a05cdb1081ca)|校对|2| |[设计师的决策树](https://juejin.im/post/5befd61ee51d4557fe34e944)|校对|1| @@ -4370,10 +4371,11 @@ |[JavaScript 如何工作:在 V8 引擎里 5 个优化代码的技巧](https://juejin.im/post/5a102e656fb9a044fd1158c6)|校对|2| |[Vue Report 2017](https://juejin.im/post/5a138fae5188254d28732899)|翻译|4| -## 译者:[caoyi0905](https://github.com/caoyi0905) 历史贡献积分:32 当前积分:17 年度积分:6.5 +## 译者:[caoyi0905](https://github.com/caoyi0905) 历史贡献积分:33 当前积分:18 年度积分:7.5 |文章|类型|积分| |------|-------|-------| +|[误解 ES6 模块,升级 Babel 的一个解决方案(泪奔)](https://juejin.im/post/5c223f4ce51d452626296b5d)|校对|1| |[被污染的 npm 包:event-stream](https://juejin.im/post/5c1b02dcf265da6166246c25)|校对|1.5| |2018 年 12 月兑 GitHub 贴纸 1 包|减去积分|5| |[The JavaScript Tutorial 翻译](https://github.com/xitu/javascript-tutorial-en)|翻译校对|2| @@ -5414,10 +5416,11 @@ |[使用 MVI 开发响应式 APP - 第三部分 - 状态减少(state reducer)](https://juejin.im/post/5a955c50f265da4e853d856a)|翻译|4| |[二十年后比特币会变成什么样?- 第二部分](https://juejin.im/post/5a955721f265da4e826377b6)|翻译|6| -## 译者:[Starriers](https://github.com/Starriers) 历史贡献积分:426 当前积分:396 年度积分:426 +## 译者:[Starriers](https://github.com/Starriers) 历史贡献积分:429.5 当前积分:399.5 年度积分:429.5 |文章|类型|积分| |------|-------|-------| +|[误解 ES6 模块,升级 Babel 的一个解决方案(泪奔)](https://juejin.im/post/5c223f4ce51d452626296b5d)|翻译|3.5| |[通过集成学习提高机器学习效果](https://juejin.im/post/5c0909d951882548e93806e0)|翻译|5| |[如何使用 Dask Dataframes 在 Python 中运行并行数据分析](https://juejin.im/post/5c1feeaf5188257f9242b65c)|翻译|4| |[理解编译器 — 从人类的角度(版本 2)](https://juejin.im/post/5c10b2f6e51d452ad958631f)|翻译|5| @@ -5750,10 +5753,11 @@ |[让 Apache Cassandra 尾部延迟减小 10 倍(已开源)](https://juejin.im/post/5ac31083f265da239a5fff0c)|翻译|4| |[让我们来简化 UserDefaults 的使用](https://juejin.im/post/5abde324f265da23826e1723)|校对|0.5| -## 译者:[EmilyQiRabbit](https://github.com/EmilyQiRabbit) 历史贡献积分:67 当前积分:47 年度积分:67 +## 译者:[EmilyQiRabbit](https://github.com/EmilyQiRabbit) 历史贡献积分:73 当前积分:53 年度积分:73 |文章|类型|积分| |------|-------|-------| +|[为什么我放弃了 React 而转向 Vue](https://juejin.im/post/5c2c27096fb9a049f66c3672)|翻译|6| |[使用 GRAPHQL 构建项目的回顾](https://juejin.im/post/5c18ba5bf265da61715e44ed)|翻译|4| |[Medium 的 GraphQL 服务设计](https://juejin.im/post/5c00dad3f265da617006db4e)|翻译|3| |2018 年 12 月兑掘金鼠标垫和 GitHub 贴纸各 1 份|减去积分|20| @@ -5845,10 +5849,11 @@ |[使用 Swift 实现原型动画](https://juejin.im/post/5ae28a9b6fb9a07aaa10fa1e)|校对|2| |[不使用 fastlane 实现持续交付的 5 种选项](https://juejin.im/post/5acf47cb6fb9a028c523944c)|翻译|5| -## 译者:[luochen1992](https://github.com/luochen1992) 历史贡献积分:55 当前积分:10 年度积分:55 +## 译者:[luochen1992](https://github.com/luochen1992) 历史贡献积分:57 当前积分:12 年度积分:57 |文章|类型|积分| |------|-------|-------| +|[为什么我放弃了 React 而转向 Vue](https://juejin.im/post/5c2c27096fb9a049f66c3672)|校对|2| |[TensorFlow 官方文档翻译](https://github.com/xitu/tensorflow-docs)|翻译校对|1| |[如何通过树莓派的深度学习轻松检测对象](https://juejin.im/post/5b1ba938518825137661af46)|校对|2| |[使用 Span 来修改文本样式的优质体验](https://juejin.im/post/5b24c20851882574ea3a0d86)|校对|1| @@ -6600,10 +6605,12 @@ |[The JavaScript Tutorial 翻译](https://github.com/xitu/javascript-tutorial-en)|翻译校对|3| |[The JavaScript Tutorial 翻译](https://github.com/xitu/javascript-tutorial-en)|翻译校对|1| -## 译者:[Moonliujk](https://github.com/Moonliujk) 历史贡献积分:62 当前积分:7 年度积分:62 +## 译者:[Moonliujk](https://github.com/Moonliujk) 历史贡献积分:65 当前积分:10 年度积分:65 |文章|类型|积分| |------|-------|-------| +|[5 款工具助力 React 快速开发](https://juejin.im/post/5c242e3f51882573d90678ad)|校对|1.5| +|[为什么我放弃了 React 而转向 Vue](https://juejin.im/post/5c2c27096fb9a049f66c3672)|校对|1.5| |[怎么做:React Native 网页应用,一场开心的挣扎](https://juejin.im/post/5c13219d6fb9a049e82b65c3)|校对|2| 
|[理解 React Render Props 和 HOC](https://juejin.im/post/5c1f8ded6fb9a049b506ce94)|校对|1| |2018 年 12 月兑树莓派套餐 1 个|减去积分|55| @@ -6671,10 +6678,11 @@ |[在 Sketch 中使用一个设计体系创作: 第二部分 [教程]](https://juejin.im/post/5b5d2a456fb9a04fc80b8f4b)|翻译|2.5| |[在 Sketch 中使用一个设计体系创作:第一部分 [教程]](https://juejin.im/post/5b591a655188257bca290b24)|校对|0.5| -## 译者:[Park-ma](https://github.com/Park-ma) 历史贡献积分:42 当前积分:42 年度积分:42 +## 译者:[Park-ma](https://github.com/Park-ma) 历史贡献积分:43.5 当前积分:43.5 年度积分:43.5 |文章|类型|积分| |------|-------|-------| +|[创建并发布一个小而美的 npm 包](https://juejin.im/post/5c26c1b65188252dcb312ad6)|校对|1.5| |[iOS 12 占有率超过 50%,超过了 iOS 11](https://juejin.im/post/5bf64ad851882579117f74ae)|校对|0.5| |[TensorFlow 官方文档翻译](https://github.com/xitu/tensorflow-docs)|翻译校对|1.5| |[用 Flask 输出视频流](https://juejin.im/post/5bea86fc518825158c531e9c)|校对|2| @@ -7120,10 +7128,12 @@ |------|-------|-------| |推荐优秀英文文章|奖励|1| -## 译者:[Augustwuli](https://github.com/Augustwuli) 历史贡献积分:24 当前积分:24 年度积分:24 +## 译者:[Augustwuli](https://github.com/Augustwuli) 历史贡献积分:28.5 当前积分:28.5 年度积分:28.5 |文章|类型|积分| |------|-------|-------| +|[继承 JavaScript 类中的静态属性](https://juejin.im/post/5c2217fc6fb9a049b348039d)|翻译|2| +|[React 路由和 React 组件的爱恨情仇](https://juejin.im/post/5c2217abe51d4570f1453cad)|翻译|2.5| |[如何停止使用 console.log() 并开始使用浏览器调试代码](https://juejin.im/post/5bd7cde4f265da0a96251de3)|翻译|5.5| |[作为自由开发者,7 个步骤让你获得更多的客户](https://juejin.im/post/5bd660c26fb9a05ce576e9b7)|校对|1.5| |[6 个最令人满意的和编程相关的工作(和参与这些工作的人们的类型)](https://juejin.im/post/5be271f0e51d450556196864)|翻译|5| @@ -7135,10 +7145,11 @@ |[以面试官的角度来看 React 工作面试](https://juejin.im/post/5bca74cfe51d450e9163351b)|校对|1.5| |[你需要知道的所有 Flexbox 排列方式](https://juejin.im/post/5bc728f2f265da0aef4e3f6d)|校对|3.5| -## 译者:[Ivocin](https://github.com/Ivocin) 历史贡献积分:32.5 当前积分:32.5 年度积分:32.5 +## 译者:[Ivocin](https://github.com/Ivocin) 历史贡献积分:37 当前积分:37 年度积分:37 |文章|类型|积分| |------|-------|-------| +|[5 款工具助力 React 快速开发](https://juejin.im/post/5c242e3f51882573d90678ad)|翻译|4.5| |[理解 React Render Props 和 HOC](https://juejin.im/post/5c1f8ded6fb9a049b506ce94)|校对|1.5| |[写给 React 开发者的自定义元素指南](https://juejin.im/post/5c0873a8e51d451de96890dc)|校对|3| |[Google 工程师提升网页性能的新策略:空闲到紧急](https://juejin.im/post/5bdec712e51d4505525b0fba)|翻译|12| @@ -7220,10 +7231,11 @@ |推荐优秀英文文章两篇|奖励|2| |[从现有的代码库创建 Swift 包管理器](https://juejin.im/post/5bec2b735188253b6e5c132a)|翻译|4| -## 译者:[Mcskiller](https://github.com/Mcskiller) 历史贡献积分:15 当前积分:15 年度积分:15 +## 译者:[Mcskiller](https://github.com/Mcskiller) 历史贡献积分:15.5 当前积分:15.5 年度积分:15.5 |文章|类型|积分| |------|-------|-------| +|[继承 JavaScript 类中的静态属性](https://juejin.im/post/5c2217fc6fb9a049b348039d)|校对|0.5| |[程序构建系列教程简介](https://juejin.im/post/5c0dd214518825444758453a)|校对|2.5| |[使用 Capacitor 和 Vue.js 构建移动应用](https://juejin.im/post/5c0f0a9e518825428c5704d8)|校对|1.5| |[用 Flask 和 Vue.js 开发一个单页面应用](https://juejin.im/post/5c1f7289f265da612e28a214)|翻译|5.5| @@ -7314,10 +7326,12 @@ |[三人研发小组的高效研发尝试](https://juejin.im/post/5c19d1846fb9a049f06a33fc)|校对|2| |[你不知道的 console 命令](https://juejin.im/post/5bf64218e51d45194266acb7)|翻译|6| -## 译者:[RicardoCao-Biker ](https://github.com/RicardoCao-Biker ) 历史贡献积分:2 当前积分:2 年度积分:2 +## 译者:[RicardoCao-Biker ](https://github.com/RicardoCao-Biker ) 历史贡献积分:3.5 当前积分:3.5 年度积分:3.5 |文章|类型|积分| |------|-------|-------| +|[继承 JavaScript 类中的静态属性](https://juejin.im/post/5c2217fc6fb9a049b348039d)|校对|0.5| +|[React 路由和 React 组件的爱恨情仇](https://juejin.im/post/5c2217abe51d4570f1453cad)|校对|1| |[你不知道的 console 命令](https://juejin.im/post/5bf64218e51d45194266acb7)|校对|2| ## 译者:[tonghuashuo](https://github.com/tonghuashuo) 历史贡献积分:6 
当前积分:6 年度积分:6 @@ -7345,16 +7359,20 @@ |------|-------|-------| |[TensorFlow 官方文档翻译](https://github.com/xitu/tensorflow-docs)|翻译校对|18| -## 译者:[ElizurHz](https://github.com/ElizurHz) 历史贡献积分:4 当前积分:4 年度积分:4 +## 译者:[ElizurHz](https://github.com/ElizurHz) 历史贡献积分:10 当前积分:10 年度积分:10 |文章|类型|积分| |------|-------|-------| +|[5 款工具助力 React 快速开发](https://juejin.im/post/5c242e3f51882573d90678ad)|校对|1.5| +|[2019 年你应该要知道的 11 个 React UI 组件库](https://juejin.im/post/5c260f13e51d45473a5c07a4)|翻译|3| +|[创建并发布一个小而美的 npm 包](https://juejin.im/post/5c26c1b65188252dcb312ad6)|校对|1.5| |[用 React 和 Node.js 实现受保护的路由和权限验证](https://juejin.im/post/5c1cdaaa6fb9a049aa6f0f8b)|翻译|4| -## 译者:[wuzhengyan2015](https://github.com/wuzhengyan2015) 历史贡献积分:5.5 当前积分:5.5 年度积分:5.5 +## 译者:[wuzhengyan2015](https://github.com/wuzhengyan2015) 历史贡献积分:6.5 当前积分:6.5 年度积分:6.5 |文章|类型|积分| |------|-------|-------| +|[2019 年你应该要知道的 11 个 React UI 组件库](https://juejin.im/post/5c260f13e51d45473a5c07a4)|校对|1| |[柯里化与函数组合](https://juejin.im/post/5c1a0d516fb9a049d05daee9)|校对|1.5| |[理解 React Render Props 和 HOC](https://juejin.im/post/5c1f8ded6fb9a049b506ce94)|翻译|4| @@ -7398,3 +7416,15 @@ |文章|类型|积分| |------|-------|-------| |[以太坊入门指南](https://juejin.im/post/5c1080fbe51d452b307969a3)|校对|1.5| + +## 译者:[xiaxiayang](https://github.com/xiaxiayang) 历史贡献积分:1 当前积分:1 年度积分:1 + +|文章|类型|积分| +|------|-------|-------| +|[2019 年你应该要知道的 11 个 React UI 组件库](https://juejin.im/post/5c260f13e51d45473a5c07a4)|校对|1| + +## 译者:[SinanJS](https://github.com/SinanJS) 历史贡献积分:1.5 当前积分:1.5 年度积分:1.5 + +|文章|类型|积分| +|------|-------|-------| +|[误解 ES6 模块,升级 Babel 的一个解决方案(泪奔)](https://juejin.im/post/5c223f4ce51d452626296b5d)|校对|1.5| From d0e0cff9f46ec7bca60de0d043bef05a26264959 Mon Sep 17 00:00:00 2001 From: LeviDing Date: Wed, 2 Jan 2019 13:10:45 +0800 Subject: [PATCH 10/54] Update android.md --- android.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/android.md b/android.md index 9c8ff9c7fe0..f7c4307db54 100644 --- a/android.md +++ b/android.md @@ -1,7 +1,9 @@ +* [Android 中的 MVP:如何使 Presenter 层系统化](https://juejin.im/post/5c203323f265da6110370dec) ([Moosphan](https://github.com/Moosphan) 翻译) +* [MDC-102 Flutter:Material 结构和布局(Flutter)](https://juejin.im/post/5c24504d518825124e2767fc) ([DevMcryYu](https://github.com/DevMcryYu) 翻译) +* [MDC-101 Flutter:Material Components(MDC)基础(Flutter)](https://juejin.im/post/5c1758e6e51d451a77161ab5) ([DevMcryYu](https://github.com/DevMcryYu) 翻译) * [使用自定义文件模板加快你的应用开发速度](https://juejin.im/post/5c204bcdf265da611b585bcd) ([nanjingboy](https://github.com/nanjingboy) 翻译) * [当 Kotlin 中的监听器包含多个方法时,如何让它 “巧夺天工”?](https://juejin.im/post/5c1e43646fb9a04a102f45ab) ([Moosphan](https://github.com/Moosphan) 翻译) * [了解 Android 的矢量图片格式:`VectorDrawable`](https://juejin.im/post/5c1a21ff5188252eb759600e) ([HarderChen](https://github.com/HarderChen) 翻译) -* [MDC-101 Flutter:Material Components(MDC)基础(Flutter)](https://juejin.im/post/5c1758e6e51d451a77161ab5) ([DevMcryYu](https://github.com/DevMcryYu) 翻译) * [Android 内核控制流完整性](https://juejin.im/post/5c1740dcf265da614a3a66c1) ([nanjingboy](https://github.com/nanjingboy) 翻译) * [同时使用多个相机流](https://juejin.im/post/5c1071ece51d4570b57af8c8) ([zx-Zhu](https://github.com/zx-Zhu) 翻译) * [Kotlin 协程高级使用技巧](https://juejin.im/post/5c0f11986fb9a049be5d53eb) ([nanjingboy](https://github.com/nanjingboy) 翻译) From af00f260eb6e2282622df7dda2090c454a7d7906 Mon Sep 17 00:00:00 2001 From: LeviDing Date: Wed, 2 Jan 2019 13:52:08 +0800 Subject: [PATCH 11/54] Update backend.md --- backend.md | 2 ++ 1 file changed, 2 insertions(+) diff 
--git a/backend.md b/backend.md index 1d05048f065..1bde2e77ce1 100644 --- a/backend.md +++ b/backend.md @@ -1,3 +1,5 @@ +* [数据流的不同应用场景 — Java](https://juejin.im/post/5c2c285fe51d4522ec5a2795) ([Starriers](https://github.com/Starriers) 翻译) +* [无容器下的云计算](https://juejin.im/post/5c24800a518825673b02dcfe) ([TrWestdoor](https://github.com/TrWestdoor) 翻译) * [如何在六个月或更短的时间内成为 DevOps 工程师,第四部分:打包](https://juejin.im/post/5c19d6255188252ea66b33b3) ([Raoul1996](https://github.com/Raoul1996) 翻译) * [使用 NodeJS 创建一个 GraphQL 服务器](https://juejin.im/post/5c015a5af265da612577d89a) ([Raoul1996](https://github.com/Raoul1996) 翻译) * [Medium 的 GraphQL 服务设计](https://juejin.im/post/5c00dad3f265da617006db4e) ([EmilyQiRabbit](https://github.com/EmilyQiRabbit) 翻译) From d109b1b743ba6560dba140e8ecc106279ddabb87 Mon Sep 17 00:00:00 2001 From: LeviDing Date: Wed, 2 Jan 2019 13:57:42 +0800 Subject: [PATCH 12/54] Update design.md --- design.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/design.md b/design.md index 4e63a334539..0b50f92ddd1 100644 --- a/design.md +++ b/design.md @@ -1,3 +1,5 @@ +* [一份关于色彩无障碍性产品设计的指南](https://juejin.im/post/5c2c233d6fb9a049bd4266b7) ([Hopsken](https://github.com/Hopsken) 翻译) +* [快速原型设计的新手指南](https://juejin.im/user/585b9407da2f6000657a5c0c) ([rydensun](https://github.com/rydensun) 翻译) * [我是如何在谷歌找到 UX 设计的工作的](https://juejin.im/post/5bea544ff265da6112048e3c) ([rydensun](https://github.com/rydensun) 翻译) * [设计师的决策树](https://juejin.im/post/5befd61ee51d4557fe34e944) ([zhmhhu](https://github.com/zhmhhu) 翻译) * [动效设计可以很简单](https://juejin.im/post/5bd11a176fb9a05d101423c0) ([rydensun](https://github.com/rydensun) 翻译) From 9fae6cbbb50dde9684121ed93ac6c1896c7b0555 Mon Sep 17 00:00:00 2001 From: LeviDing Date: Wed, 2 Jan 2019 14:04:51 +0800 Subject: [PATCH 13/54] Update product.md --- product.md | 1 + 1 file changed, 1 insertion(+) diff --git a/product.md b/product.md index 4d44ccfe3cd..748d5235e71 100644 --- a/product.md +++ b/product.md @@ -1,3 +1,4 @@ +* [产品管理思维模式适合每一个人](https://juejin.im/post/5c2c266ae51d4511fb7db0c7) ([EmilyQiRabbit](https://github.com/EmilyQiRabbit) 翻译) * [苹果公司如何颠覆瑞士制表业](https://juejin.im/post/5bdc1f3c6fb9a049a9792211) ([noturnot](https://github.com/noturnot) 翻译) * [如何让你的设计系统被广泛采用](https://juejin.im/post/5bb6118af265da0af609c581) ([rydensun](https://github.com/rydensun) 翻译) * [如果界面产品设计师设计实体产品](https://juejin.im/post/5baf9697e51d456f087ba2a8) ([ssshooter](https://github.com/ssshooter) 翻译) From f90412b2e1bf0b64288f03c106a27e334b43df8a Mon Sep 17 00:00:00 2001 From: LeviDing Date: Wed, 2 Jan 2019 14:06:47 +0800 Subject: [PATCH 14/54] Update ios.md --- ios.md | 1 + 1 file changed, 1 insertion(+) diff --git a/ios.md b/ios.md index ac53276d590..bf6b75e5f5c 100644 --- a/ios.md +++ b/ios.md @@ -1,3 +1,4 @@ +* [值类型导向编程](https://juejin.im/post/5c2c3f8d518825480635db8b) ([nanjingboy](https://github.com/nanjingboy) 翻译) * [使用 Swift 的 iOS 设计模式(第二部分)](https://juejin.im/post/5c1786576fb9a049f06a2c4a) ([iWeslie](https://github.com/iWeslie) 翻译) * [使用 Swift 的 iOS 设计模式(第一部分)](https://juejin.im/post/5c05d4ee5188250ab14e62d6) ([iWeslie](https://github.com/iWeslie) 翻译) * [使用 Kotlin 将你的 iOS 应用程序转换为 Android](https://juejin.im/post/5c03f64ce51d454af013d076) ([iWeslie](https://github.com/iWeslie) 翻译) From fc213b752113f0694e940be3f9edeb0f7d3db58e Mon Sep 17 00:00:00 2001 From: LeviDing Date: Wed, 2 Jan 2019 14:07:27 +0800 Subject: [PATCH 15/54] =?UTF-8?q?=E6=9B=B4=E6=96=B0=2012=20=E6=9C=88?= =?UTF-8?q?=E4=BB=BD=E5=85=B6=E4=BB=96=E7=9A=84=E5=90=84=E5=88=86=E7=B1=BB?= 
=?UTF-8?q?=E6=96=87=E7=AB=A0=E7=BF=BB=E8=AF=91=E6=A0=A1=E5=AF=B9=E7=A7=AF?= =?UTF-8?q?=E5=88=86?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- integrals.md | 51 +++++++++++++++++++++++++++++++++++++-------------- 1 file changed, 37 insertions(+), 14 deletions(-) diff --git a/integrals.md b/integrals.md index 674b6c748d9..313831177ac 100644 --- a/integrals.md +++ b/integrals.md @@ -862,10 +862,11 @@ |[JavaScript 姿势提升简略](http://gold.xitu.io/entry/5722c838128fe100601dc3a8)|校对|1| |[ECMAScript 6 里面的私有变量](http://gold.xitu.io/entry/572c0b2d2e958a00667a081d)|校对|1| -## 译者:[DeadLion](https://github.com/DeadLion) 历史贡献积分:84 当前积分:14 +## 译者:[DeadLion](https://github.com/DeadLion) 历史贡献积分:85.5 当前积分:15.5 年度积分:1.5 |文章|类型|积分| |------|-------|-------| +|[数据流的不同应用场景 — Java](https://juejin.im/post/5c2c285fe51d4522ec5a2795)|校对|1.5| |[2017 年 9 月兑 树莓派 1 个]()|减去积分|40| |[搭建账户系统](https://juejin.im/post/59b2708b5188257e8a30842f)|校对|2| |[GraphQL vs. REST](https://juejin.im/post/59793f625188253ded721c70)|校对|2| @@ -5416,10 +5417,11 @@ |[使用 MVI 开发响应式 APP - 第三部分 - 状态减少(state reducer)](https://juejin.im/post/5a955c50f265da4e853d856a)|翻译|4| |[二十年后比特币会变成什么样?- 第二部分](https://juejin.im/post/5a955721f265da4e826377b6)|翻译|6| -## 译者:[Starriers](https://github.com/Starriers) 历史贡献积分:429.5 当前积分:399.5 年度积分:429.5 +## 译者:[Starriers](https://github.com/Starriers) 历史贡献积分:433 当前积分:403 年度积分:433 |文章|类型|积分| |------|-------|-------| +|[数据流的不同应用场景 — Java](https://juejin.im/post/5c2c285fe51d4522ec5a2795)|翻译|3.5| |[误解 ES6 模块,升级 Babel 的一个解决方案(泪奔)](https://juejin.im/post/5c223f4ce51d452626296b5d)|翻译|3.5| |[通过集成学习提高机器学习效果](https://juejin.im/post/5c0909d951882548e93806e0)|翻译|5| |[如何使用 Dask Dataframes 在 Python 中运行并行数据分析](https://juejin.im/post/5c1feeaf5188257f9242b65c)|翻译|4| @@ -5515,10 +5517,11 @@ |[json — JavaScript 对象表示法](https://juejin.im/post/5a9432ae5188257a5c6092b0)|校对|1| |[嵌套三元表达式棒极了(软件编写)(第十四部分)](https://juejin.im/post/5a7d6769f265da4e7e10ad82)|校对|1| -## 译者:[zhmhhu](https://github.com/zhmhhu) 历史贡献积分:102 当前积分:2 年度积分:102 +## 译者:[zhmhhu](https://github.com/zhmhhu) 历史贡献积分:105.5 当前积分:5.5 年度积分:105.5 |文章|类型|积分| |------|-------|-------| +|[产品管理思维模式适合每一个人](https://juejin.im/post/5c2c266ae51d4511fb7db0c7)|校对|3.5| |[支持向量机(SVM) 教程](http://5a77c24cf265da4e747f92e8/)|翻译|11| |推荐优秀英文文章一篇|奖励|1| |2018 年 12 月兑树莓派套餐 1 个|减去积分|55| @@ -5540,10 +5543,11 @@ |[在 V8 引擎中设置原型(prototypes)](https://juejin.im/post/5a9921e76fb9a028bd4bc3c4)|校对|1| |[json — JavaScript 对象表示法](https://juejin.im/post/5a9432ae5188257a5c6092b0)|校对|1| -## 译者:[rydensun](https://github.com/rydensun) 历史贡献积分:74 当前积分:64 年度积分:74 +## 译者:[rydensun](https://github.com/rydensun) 历史贡献积分:80 当前积分:70 年度积分:80 |文章|类型|积分| |------|-------|-------| +|[快速原型设计的新手指南](https://juejin.im/user/585b9407da2f6000657a5c0c)|翻译|6| |[我是如何在谷歌找到 UX 设计的工作的](https://juejin.im/post/5bea544ff265da6112048e3c)|翻译|4| |[作为自由开发者,7 个步骤让你获得更多的客户](https://juejin.im/post/5bd660c26fb9a05ce576e9b7)|翻译|5.5| |推荐优秀英文文章|奖励|0.5| @@ -5753,10 +5757,11 @@ |[让 Apache Cassandra 尾部延迟减小 10 倍(已开源)](https://juejin.im/post/5ac31083f265da239a5fff0c)|翻译|4| |[让我们来简化 UserDefaults 的使用](https://juejin.im/post/5abde324f265da23826e1723)|校对|0.5| -## 译者:[EmilyQiRabbit](https://github.com/EmilyQiRabbit) 历史贡献积分:73 当前积分:53 年度积分:73 +## 译者:[EmilyQiRabbit](https://github.com/EmilyQiRabbit) 历史贡献积分:82 当前积分:62 年度积分:82 |文章|类型|积分| |------|-------|-------| +|[产品管理思维模式适合每一个人](https://juejin.im/post/5c2c266ae51d4511fb7db0c7)|翻译|9| |[为什么我放弃了 React 而转向 Vue](https://juejin.im/post/5c2c27096fb9a049f66c3672)|翻译|6| |[使用 GRAPHQL 
构建项目的回顾](https://juejin.im/post/5c18ba5bf265da61715e44ed)|翻译|4| |[Medium 的 GraphQL 服务设计](https://juejin.im/post/5c00dad3f265da617006db4e)|翻译|3| @@ -5913,10 +5918,11 @@ |[React & Redux 顶级开发伴侣](https://juejin.im/post/5acae8dc6fb9a028c06b1c4c)|校对|1| |[拖放库中 React 性能的优化](https://juejin.im/post/5ac31b096fb9a028bc2dedfc)|校对|3| -## 译者:[Hopsken](https://github.com/Hopsken) 历史贡献积分:45 当前积分:39 年度积分:45 +## 译者:[Hopsken](https://github.com/Hopsken) 历史贡献积分:48 当前积分:42 年度积分:48 |文章|类型|积分| |------|-------|-------| +|[一份关于色彩无障碍性产品设计的指南](https://juejin.im/post/5c2c233d6fb9a049bd4266b7)|翻译|3| |[揭开 React Hooks 的神秘面纱:数组解构融成魔法](https://juejin.im/post/5bebd1bbe51d4561ce39a23b)|校对|1.5| |[The JavaScript Tutorial 翻译](https://github.com/xitu/javascript-tutorial-en)|翻译校对|6| |[什么是模块化 CSS?](https://juejin.im/post/5bb6c5195188255c9e02e6f3)|校对|2.5| @@ -6071,10 +6077,11 @@ |[在 Google I/O 2018 观看 Flutter 的正确姿势](https://juejin.im/post/5aebd7166fb9a07ab4587b3f)|翻译|1.5| |[TensorFlow 官方文档翻译](https://github.com/xitu/tensorflow-docs)|翻译校对|7| -## 译者:[kezhenxu94](https://github.com/kezhenxu94) 历史贡献积分:43.5 当前积分:43.5 年度积分:43.5 +## 译者:[kezhenxu94](https://github.com/kezhenxu94) 历史贡献积分:45 当前积分:45 年度积分:45 |文章|类型|积分| |------|-------|-------| +|[数据流的不同应用场景 — Java](https://juejin.im/post/5c2c285fe51d4522ec5a2795)|校对|1.5| |[如何使用 JavaScript ES6 有条件地构造对象](https://juejin.im/post/5bb47db76fb9a05d071953ea)|校对|0.5| |[深度学习中所需的线性代数知识](https://juejin.im/post/5b19d99ae51d4506d81a7a2f)|校对|1.5| |[一个简单的 ES6 Promises 指南](https://juejin.im/post/5b0eb3b1f265da08f31e770a)|校对|1| @@ -6405,10 +6412,11 @@ |[用不到 200 行的 GO 语言编写您自己的区块链](https://juejin.im/post/5ad95b056fb9a07aa349cd41)|校对|2| |[GAN 的 Keras 实现:构建图像去模糊应用](https://juejin.im/post/5ad6e358f265da237b229bb2)|校对|1| -## 译者:[Moosphan](https://github.com/Moosphan) 历史贡献积分:8.5 当前积分:8.5 年度积分:8.5 +## 译者:[Moosphan](https://github.com/Moosphan) 历史贡献积分:12.5 当前积分:12.5 年度积分:12.5 |文章|类型|积分| |------|-------|-------| +|[Android 中的 MVP:如何使 Presenter 层系统化](https://juejin.im/post/5c203323f265da6110370dec)|翻译|4| |[当 Kotlin 中的监听器包含多个方法时,如何让它 “巧夺天工”?](https://juejin.im/post/5c1e43646fb9a04a102f45ab)|翻译|3.5| |推荐优秀英文文章两篇|奖励|2| |[带你领略 ConstraintLayout 1.1 的新功能](https://juejin.im/post/5b013e6f51882542c760dc7b)|翻译|3| @@ -6870,10 +6878,11 @@ |[TensorFlow 官方文档翻译](https://github.com/xitu/tensorflow-docs)|翻译校对|8| |[用 Scikit-Learn 实现 SVM 和 Kernel SVM](https://juejin.im/post/5b7fd39af265da43831fa136)|校对|2| -## 译者:[TrWestdoor](https://github.com/TrWestdoor) 历史贡献积分:17.5 当前积分:17.5 年度积分:17.5 +## 译者:[TrWestdoor](https://github.com/TrWestdoor) 历史贡献积分:23 当前积分:23 年度积分:23 |文章|类型|积分| |------|-------|-------| +|[无容器下的云计算](https://juejin.im/post/5c24800a518825673b02dcfe)|翻译|5.5| |[通过集成学习提高机器学习效果](https://juejin.im/post/5c0909d951882548e93806e0)|校对|1.5| |[支持向量机(SVM) 教程](http://5a77c24cf265da4e747f92e8/)|校对|3.5| |[我无法想象没有 Git 别名的的场景](https://juejin.im/post/5c207bd4e51d452b7b032cf6)|校对|1.5| @@ -7145,10 +7154,12 @@ |[以面试官的角度来看 React 工作面试](https://juejin.im/post/5bca74cfe51d450e9163351b)|校对|1.5| |[你需要知道的所有 Flexbox 排列方式](https://juejin.im/post/5bc728f2f265da0aef4e3f6d)|校对|3.5| -## 译者:[Ivocin](https://github.com/Ivocin) 历史贡献积分:37 当前积分:37 年度积分:37 +## 译者:[Ivocin](https://github.com/Ivocin) 历史贡献积分:40.5 当前积分:40.5 年度积分:40.5 |文章|类型|积分| |------|-------|-------| +|[一份关于色彩无障碍性产品设计的指南](https://juejin.im/post/5c2c233d6fb9a049bd4266b7)|校对|1| +|[快速原型设计的新手指南](https://juejin.im/user/585b9407da2f6000657a5c0c)|校对|2.5| |[5 款工具助力 React 快速开发](https://juejin.im/post/5c242e3f51882573d90678ad)|翻译|4.5| |[理解 React Render Props 和 
HOC](https://juejin.im/post/5c1f8ded6fb9a049b506ce94)|校对|1.5| |[写给 React 开发者的自定义元素指南](https://juejin.im/post/5c0873a8e51d451de96890dc)|校对|3| @@ -7249,10 +7260,12 @@ |------|-------|-------| |修订文章 https://github.com/xitu/gold-miner/pull/4753|奖励|2| -## 译者:[nanjingboy](https://github.com/nanjingboy) 历史贡献积分:40.5 当前积分:40.5 年度积分:40.5 +## 译者:[nanjingboy](https://github.com/nanjingboy) 历史贡献积分:44.5 当前积分:44.5 年度积分:44.5 |文章|类型|积分| |------|-------|-------| +|[值类型导向编程](https://juejin.im/post/5c2c3f8d518825480635db8b)|校对|1| +|[值类型导向编程](https://juejin.im/post/5c2c3f8d518825480635db8b)|翻译|3| |[时间序列异常检测算法](https://juejin.im/post/5c19f4cb518825678a7bad4c)|校对|1.5| |[Kotlin 协程高级使用技巧](https://juejin.im/post/5c0f11986fb9a049be5d53eb)|翻译|2.5| |[同时使用多个相机流](https://juejin.im/post/5c1071ece51d4570b57af8c8)|校对|1.5| @@ -7310,10 +7323,11 @@ |------|-------|-------| |[使用 Swift 的 iOS 设计模式(第一部分)](https://juejin.im/post/5c05d4ee5188250ab14e62d6)|校对|3.5| -## 译者:[DevMcryYu](https://github.com/DevMcryYu) 历史贡献积分:12 当前积分:12 年度积分:12 +## 译者:[DevMcryYu](https://github.com/DevMcryYu) 历史贡献积分:20 当前积分:20 年度积分:20 |文章|类型|积分| |------|-------|-------| +|[MDC-102 Flutter:Material 结构和布局(Flutter)](https://juejin.im/post/5c24504d518825124e2767fc)|翻译|8| |[使用 Flutter,Material Theming 和官方材质组件(MDC)构建美观,灵活的用户界面](https://juejin.im/post/5c07d8a7518825778a56b80f)|翻译|3.5| |[MDC-101 Flutter:Material Components(MDC)基础(Flutter)](https://juejin.im/post/5c1758e6e51d451a77161ab5)|翻译|7| |[Google Colab 免费 GPU 使用教程](https://juejin.im/post/5c05e1bc518825689f1b4948)|校对|1.5| @@ -7388,18 +7402,21 @@ |------|-------|-------| |[使用自定义文件模板加快你的应用开发速度](https://juejin.im/post/5c204bcdf265da611b585bcd)|校对|1.5| -## 译者:[Qiuk17](https://github.com/Qiuk17) 历史贡献积分:5.5 当前积分:5.5 年度积分:5.5 +## 译者:[Qiuk17](https://github.com/Qiuk17) 历史贡献积分:7 当前积分:7 年度积分:7 |文章|类型|积分| |------|-------|-------| +|[Android 中的 MVP:如何使 Presenter 层系统化](https://juejin.im/post/5c203323f265da6110370dec)|校对|1.5| |[以太坊入门指南](https://juejin.im/post/5c1080fbe51d452b307969a3)|校对|1.5| |[了解 Android 的矢量图片格式:`VectorDrawable`](https://juejin.im/post/5c1a21ff5188252eb759600e)|校对|2.5| |[当 Kotlin 中的监听器包含多个方法时,如何让它 “巧夺天工”?](https://juejin.im/post/5c1e43646fb9a04a102f45ab)|校对|1.5| -## 译者:[gs666](https://github.com/gs666) 历史贡献积分:9 当前积分:9 年度积分:9 +## 译者:[gs666](https://github.com/gs666) 历史贡献积分:12.5 当前积分:12.5 年度积分:12.5 |文章|类型|积分| |------|-------|-------| +|[Android 中的 MVP:如何使 Presenter 层系统化](https://juejin.im/post/5c203323f265da6110370dec)|校对|1.5| +|[MDC-102 Flutter:Material 结构和布局(Flutter)](https://juejin.im/post/5c24504d518825124e2767fc)|校对|2| |[以太坊入门指南](https://juejin.im/post/5c1080fbe51d452b307969a3)|翻译|4| |[同时使用多个相机流](https://juejin.im/post/5c1071ece51d4570b57af8c8)|校对|2| |[Android 内核控制流完整性](https://juejin.im/post/5c1740dcf265da614a3a66c1)|校对|1.5| @@ -7428,3 +7445,9 @@ |文章|类型|积分| |------|-------|-------| |[误解 ES6 模块,升级 Babel 的一个解决方案(泪奔)](https://juejin.im/post/5c223f4ce51d452626296b5d)|校对|1.5| + +## 译者:[tonylua](https://github.com/tonylua) 历史贡献积分:2.5 当前积分:2.5 年度积分:2.5 + +|文章|类型|积分| +|------|-------|-------| +|[无容器下的云计算](https://juejin.im/post/5c24800a518825673b02dcfe)|校对|2.5| From b49ee465b2d7ac8341718ba869bf13fd6c0f038d Mon Sep 17 00:00:00 2001 From: LeviDing Date: Wed, 2 Jan 2019 14:25:28 +0800 Subject: [PATCH 16/54] =?UTF-8?q?=E6=9B=B4=E6=96=B0=2012=20=E6=9C=88?= =?UTF-8?q?=E4=BB=BD=E7=BF=BB=E8=AF=91=E8=AE=A1=E5=88=92=E6=95=B0=E6=8D=AE?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md 
b/README.md index e5f3540a3a1..868306b114c 100644 --- a/README.md +++ b/README.md @@ -5,9 +5,9 @@ [![](https://img.shields.io/badge/weibo-%E6%8E%98%E9%87%91%E7%BF%BB%E8%AF%91%E8%AE%A1%E5%88%92-brightgreen.svg)](http://weibo.com/juejinfanyi) [![](https://img.shields.io/badge/%E7%9F%A5%E4%B9%8E%E4%B8%93%E6%A0%8F-%E6%8E%98%E9%87%91%E7%BF%BB%E8%AF%91%E8%AE%A1%E5%88%92-blue.svg)](https://zhuanlan.zhihu.com/juejinfanyi) -[掘金翻译计划](https://juejin.im/tag/%E6%8E%98%E9%87%91%E7%BF%BB%E8%AF%91%E8%AE%A1%E5%88%92) 是一个翻译优质互联网技术文章的社区,文章来源为 [掘金](https://juejin.im) 上的英文分享文章。内容覆盖[人工智能](#ai--deep-learning--machine-learning)、[Android](#android)、[iOS](#ios)、[React](#react)、[前端](#前端)、[后端](#后端)、[产品](#产品)、[设计](#设计) 等领域,读者为热爱新技术的新锐开发者。 +[掘金翻译计划](https://juejin.im/tag/%E6%8E%98%E9%87%91%E7%BF%BB%E8%AF%91%E8%AE%A1%E5%88%92) 是一个翻译优质互联网技术文章的社区,文章来源为 [掘金](https://juejin.im) 上的英文分享文章。内容覆盖[区块链](#区块链)、[人工智能](#ai--deep-learning--machine-learning)、[Android](#android)、[iOS](#ios)、[前端](#前端)、[后端](#后端)、[设计](#设计)、[产品](#产品)和[其他](#其他) 等领域,以及各大型优质 [官方文档及手册](#官方文档及手册),读者为热爱新技术的新锐开发者。 -掘金翻译计划目前翻译完成 [1323](#近期文章列表) 篇文章,官方文档及手册 [13](#官方文档及手册) 个,共有近 [1000](https://github.com/xitu/gold-miner/wiki/%E8%AF%91%E8%80%85%E7%A7%AF%E5%88%86%E8%A1%A8) 名译者贡献翻译和校对。 +掘金翻译计划目前翻译完成 [1369](#近期文章列表) 篇文章,官方文档及手册 [13](#官方文档及手册) 个,共有 [1000](https://github.com/xitu/gold-miner/wiki/%E8%AF%91%E8%80%85%E7%A7%AF%E5%88%86%E8%A1%A8) 余名译者贡献翻译和校对。 > ## [🥇掘金翻译计划 — 区块链分舵](https://github.com/xitu/blockchain-miner) From 9bc7cafeb6e2c953ab7115fca0fc000586c64397 Mon Sep 17 00:00:00 2001 From: LeviDing Date: Wed, 2 Jan 2019 14:35:34 +0800 Subject: [PATCH 17/54] =?UTF-8?q?12=20=E6=9C=88=E4=BB=BD=E7=BF=BB=E8=AF=91?= =?UTF-8?q?=E8=AE=A1=E5=88=92=E6=96=87=E7=AB=A0=E5=B1=95=E7=A4=BA=E5=88=97?= =?UTF-8?q?=E8=A1=A8=E6=9B=B4=E6=96=B0?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- README.md | 51 +++++++++++++++++++++++++-------------------------- 1 file changed, 25 insertions(+), 26 deletions(-) diff --git a/README.md b/README.md index 868306b114c..d8634543702 100644 --- a/README.md +++ b/README.md @@ -50,59 +50,58 @@ ## 区块链 +* [美国证券法对 ICO 及相关 Fund 的最新动态](https://juejin.im/post/5c1e03ae6fb9a049fb43a536) ([newraina](https://github.com/newraina) 翻译) +* [以太坊入门指南](https://juejin.im/post/5c1080fbe51d452b307969a3) ([gs666](https://github.com/gs666) 翻译) +* [以太坊入门:互联网政府](https://juejin.im/post/5c03c68851882551236eaa82) ([newraina](https://github.com/newraina) 翻译) * [以太坊: 能帮我们把 Uber 换掉的非比特币加密货币](https://juejin.im/post/5bf3e32ee51d4532ff07a7de) ([noahziheng](https://github.com/noahziheng) 翻译) -* [如何区分支付型代币,实用型代币,证券化代币](https://juejin.im/post/5bf53b8f51882517172700c8) ([mingxing47](https://github.com/mingxing47) 翻译) -* [什么是以太坊?以太坊初学者手把手教程](https://juejin.im/post/5ba850a36fb9a05d0b14369f) ([cdpath](https://github.com/cdpath) 翻译) -* [ELI5:用简单的语言给小白解释什么是以太坊?](https://juejin.im/post/5bb070b16fb9a05ce02a8a26) ([mingxing47] * [所有区块链译文>>](https://github.com/xitu/gold-miner/blob/master/blockchain.md) ## 人工智能 -* [深度学习将会给我们所有人的生活一个教训:工作是为了机器准备的](https://juejin.im/post/5bd71fd6f265da0aa94a5bce) ([yuwhuawang](https://github.com/yuwhuawang) 翻译) -* [初创公司的数据科学:简介](https://juejin.im/post/5bd55b76f265da0ae472ce1b) ([tmpbook](https://github.com/tmpbook) 翻译) -* [在 Keras 中使用一维卷积神经网络处理时间序列数据](https://juejin.im/post/5beb7432f265da61524cf27c) ([haiyang-tju](https://github.com/haiyang-tju) 翻译) -* [使用 Python 的 Pandas 和 Seaborn 框架从 Kaggle 数据集中提取信息](https://juejin.im/post/5be8caf651882551cc25acf5) ([haiyang-tju](https://github.com/haiyang-tju) 翻译) +* [如何使用 Dask 
Dataframes 在 Python 中运行并行数据分析](https://juejin.im/post/5c1feeaf5188257f9242b65c) ([Starriers](https://github.com/Starriers) 翻译) +* [时间序列异常检测算法](https://juejin.im/post/5c19f4cb518825678a7bad4c) ([haiyang-tju](https://github.com/haiyang-tju) 翻译) +* [支持向量机(SVM) 教程](http://5a77c24cf265da4e747f92e8/) ([zhmhhu](https://github.com/zhmhhu) 翻译) +* [通过集成学习提高机器学习效果](https://juejin.im/post/5c0909d951882548e93806e0) ([Starriers](https://github.com/Starriers) 翻译) * [所有 AI 译文>>](https://github.com/xitu/gold-miner/blob/master/AI.md) ## Android -* [为用户提供安全可靠的体验](https://juejin.im/post/5bf66114e51d45229468d659) ([YueYongDev](https://github.com/YueYongDev) 翻译) -* [在 Android 上实现 Google Inbox 的样式动画](https://juejin.im/post/5bee3a45e51d451dca475a43) ([YueYongDev](https://github.com/YueYongDev) 翻译) -* [回答有关 Flutter App 开发的问题](https://juejin.im/post/5be98784518825170200254e) ([YueYongDev](https://github.com/YueYongDev) 翻译) -* [正确实现 linkedPurchaseToken 以避免重复订阅](https://juejin.im/post/5baf9a3e6fb9a05ce2741437) ([yuwhuawang](https://github.com/yuwhuawang) 翻译) +* [使用自定义文件模板加快你的应用开发速度](https://juejin.im/post/5c204bcdf265da611b585bcd) ([nanjingboy](https://github.com/nanjingboy) 翻译) +* [当 Kotlin 中的监听器包含多个方法时,如何让它 “巧夺天工”?](https://juejin.im/post/5c1e43646fb9a04a102f45ab) ([Moosphan](https://github.com/Moosphan) 翻译) +* [了解 Android 的矢量图片格式:`VectorDrawable`](https://juejin.im/post/5c1a21ff5188252eb759600e) ([HarderChen](https://github.com/HarderChen) 翻译) +* [MDC-101 Flutter:Material Components(MDC)基础(Flutter)](https://juejin.im/post/5c1758e6e51d451a77161ab5) ([DevMcryYu](https://github.com/DevMcryYu) 翻译) * [所有 Android 译文>>](https://github.com/xitu/gold-miner/blob/master/android.md) ## iOS +* [使用 Swift 的 iOS 设计模式(第一部分)](https://juejin.im/post/5c05d4ee5188250ab14e62d6) ([iWeslie](https://github.com/iWeslie) 翻译) +* [使用 Swift 的 iOS 设计模式(第二部分)](https://juejin.im/post/5c1786576fb9a049f06a2c4a) ([iWeslie](https://github.com/iWeslie) 翻译) +* [使用 Kotlin 将你的 iOS 应用程序转换为 Android](https://juejin.im/post/5c03f64ce51d454af013d076) ([iWeslie](https://github.com/iWeslie) 翻译) * [Swift 中的动态特性](https://juejin.im/post/5bfd087be51d457a013940e8) ([iWeslie](https://github.com/iWeslie) 翻译) -* [介绍适用于 iOS 的 AloeStackView](https://juejin.im/post/5bf22a05f265da61783106de) ([LoneyIsError](https://github.com/LoneyIsError) 翻译) -* [iOS 12 占有率超过 50%,超过了 iOS 11](https://juejin.im/post/5bf64ad851882579117f74ae) ([LoneyIsError](https://github.com/LoneyIsError) 翻译) -* [从现有的代码库创建 Swift 包管理器](https://juejin.im/post/5bec2b735188253b6e5c132a) ([iWeslie](https://github.com/iWeslie) 翻译) * [所有 iOS 译文>>](https://github.com/xitu/gold-miner/blob/master/ios.md) ## 前端 -* [你不知道的 console 命令](https://juejin.im/post/5bf64218e51d45194266acb7) ([Pomelo1213](https://github.com/Pomelo1213) 翻译) -* [理解 JavaScript 中的 undefined](https://juejin.im/post/5bf57e8ef265da612c5d8439) ([yanyixin](https://github.com/yanyixin) 翻译) -* [Javascript: call(), apply() 和 bind()](https://juejin.im/post/5bee3adef265da614c4c612e) ([YueYongDev](https://github.com/YueYongDev) 翻译) -* [关于 Angular 的变化检测,你需要知道的一切](https://juejin.im/post/5bf405f851882530d44b400a) ([tian-li](https://github.com/tian-li) 翻译) -* [我们是怎样把 Carousell 的移动 Web 体验搞快了 3 倍的?](https://juejin.im/post/5bee858ae51d45710c6a5500) ([noahziheng](https://github.com/noahziheng) 翻译) +* [创建并发布一个小而美的 npm 包](https://juejin.im/post/5c26c1b65188252dcb312ad6) ([calpa](https://github.com/calpa) 翻译) +* [React 路由和 React 组件的爱恨情仇](https://juejin.im/post/5c2217abe51d4570f1453cad) ([Augustwuli](https://github.com/Augustwuli) 翻译) +* [误解 ES6 模块,升级 Babel 
的一个解决方案(泪奔)](https://juejin.im/post/5c223f4ce51d452626296b5d) ([Starriers](https://github.com/Starriers) 翻译) +* [继承 JavaScript 类中的静态属性](https://juejin.im/post/5c2217fc6fb9a049b348039d) ([Augustwuli](https://github.com/Augustwuli) 翻译) * [所有前端译文>>](https://github.com/xitu/gold-miner/blob/master/front-end.md) ## 后端 +* [无容器下的云计算](https://juejin.im/post/5c24800a518825673b02dcfe) ([TrWestdoor](https://github.com/TrWestdoor) 翻译) +* [如何在六个月或更短的时间内成为 DevOps 工程师,第四部分:打包](https://juejin.im/post/5c19d6255188252ea66b33b3) ([Raoul1996](https://github.com/Raoul1996) 翻译) +* [使用 NodeJS 创建一个 GraphQL 服务器](https://juejin.im/post/5c015a5af265da612577d89a) ([Raoul1996](https://github.com/Raoul1996) 翻译) * [Medium 的 GraphQL 服务设计](https://juejin.im/post/5c00dad3f265da617006db4e) ([EmilyQiRabbit](https://github.com/EmilyQiRabbit) 翻译) -* [关于 HTTP/3 的一些心得](https://juejin.im/post/5bfb519ef265da610f636596) ([Starriers](https://github.com/Starriers) 翻译) -* [用 Flask 输出视频流](https://juejin.im/post/5bea86fc518825158c531e9c) ([BriFuture](https://github.com/BriFuture) 翻译) -* [Rust 开发完整的 Web 应用程序](https://juejin.im/post/5bd66dee6fb9a05cdb1081ca) ([Raoul1996](https://github.com/Raoul1996) 翻译) * [所有后端译文>>](https://github.com/xitu/gold-miner/blob/master/backend.md) ## 设计 +* [快速原型设计的新手指南](https://juejin.im/user/585b9407da2f6000657a5c0c) ([rydensun](https://github.com/rydensun) 翻译) * [我是如何在谷歌找到 UX 设计的工作的](https://juejin.im/post/5bea544ff265da6112048e3c) ([rydensun](https://github.com/rydensun) 翻译) * [设计师的决策树](https://juejin.im/post/5befd61ee51d4557fe34e944) ([zhmhhu](https://github.com/zhmhhu) 翻译) * [如何创建一个设计体系来赋能团队 —— 关注人,而非像素](https://juejin.im/post/5bac2a2fe51d450e942f4853) ([pmwangyang](https://github.com/pmwangyang) 翻译) -* [另外 5 种关于视觉和认知间差异的绘画练习](https://juejin.im/post/5baa5b45f265da0ab915cb7f) ([Ruixi](https://github.com/Ruixi) 翻译) * [所有设计译文>>](https://github.com/xitu/gold-miner/blob/master/design.md) ## 产品 @@ -115,10 +114,10 @@ ## 其他 +* [我无法想象没有 Git 别名的的场景](https://juejin.im/post/5c207bd4e51d452b7b032cf6) ([Starriers](https://github.com/Starriers) 翻译) +* [三人研发小组的高效研发尝试](https://juejin.im/post/5c19d1846fb9a049f06a33fc) ([yuwhuawang](https://github.com/yuwhuawang) 翻译) +* [理解编译器 — 从人类的角度(版本 2)](https://juejin.im/post/5c10b2f6e51d452ad958631f) ([Starriers](https://github.com/Starriers) 翻译) * [深度专注的工作 — 成为 10 倍效率的开发者的秘密武器](https://juejin.im/post/5bffb3f5f265da613a53bd4b) ([tmpbook](https://github.com/tmpbook) 翻译) -* [如何让高效的代码评审成为一种文化](https://juejin.im/post/5bfc9ff9e51d454b6c371f5d) ([CoolRice](https://github.com/CoolRice) 翻译) -* [在远程工作中领悟到的 10 件事](https://juejin.im/post/5bf7a79f51882511a8528cf0) ([Starriers](https://github.com/Starriers) 翻译) -* [强化学习中的好奇心与拖延症](https://juejin.im/post/5bff316651882548e937ef20) ([haiyang-tju](https://github.com/haiyang-tju) 翻译) * [所有其他分类译文>>](https://github.com/xitu/gold-miner/blob/master/others.md) # Copyright From 8f90befb0c759ebe6db7789e0e6cb13d6ea509e8 Mon Sep 17 00:00:00 2001 From: LeviDing Date: Wed, 2 Jan 2019 20:45:28 +0800 Subject: [PATCH 18/54] Create understanding-asynchronous-javascript-the-event-loop.md --- ...-asynchronous-javascript-the-event-loop.md | 338 ++++++++++++++++++ 1 file changed, 338 insertions(+) create mode 100644 TODO1/understanding-asynchronous-javascript-the-event-loop.md diff --git a/TODO1/understanding-asynchronous-javascript-the-event-loop.md b/TODO1/understanding-asynchronous-javascript-the-event-loop.md new file mode 100644 index 00000000000..2111104f79d --- /dev/null +++ b/TODO1/understanding-asynchronous-javascript-the-event-loop.md @@ -0,0 +1,338 @@ +> * 
原文地址:[Understanding Asynchronous JavaScript](https://blog.bitsrc.io/understanding-asynchronous-javascript-the-event-loop-74cd408419ff) +> * 原文作者:[Sukhjinder Arora](https://blog.bitsrc.io/@Sukhjinder?source=post_header_lockup) +> * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) +> * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/understanding-asynchronous-javascript-the-event-loop.md](https://github.com/xitu/gold-miner/blob/master/TODO1/understanding-asynchronous-javascript-the-event-loop.md) +> * 译者: +> * 校对者: + +# Understanding Asynchronous JavaScript + +Learn How JavaScript Works + +![](https://cdn-images-1.medium.com/max/2000/0*wO-kYdN93deiT0U9) + +Photo by [Sean Lim](https://unsplash.com/@sean1188?utm_source=medium&utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&utm_medium=referral) + +JavaScript is a single-threaded programming language which means only one thing can happen at a time. That is, the JavaScript engine can only process one statement at a time in a single thread. + +While the single-threaded languages simplify writing code because you don’t have to worry about the concurrency issues, this also means you can’t perform long operations such as network access without blocking the main thread. + +Imagine requesting some data from an API. Depending upon the situation the server might take some time to process the request while blocking the main thread making the web page unresponsive. + +That’s where asynchronous JavaScript comes into play. Using asynchronous JavaScript (such as callbacks, promises, and async/await), you can perform long network requests without blocking the main thread. + +While it’s not necessary that you learn all these concepts to be an awesome JavaScript developer, it’s helpful to know :) + +So without further ado, Let’s get started :) + +**Tip**: Using [**Bit**](https://github.com/teambit/bit) you can turn any JS code into an API you can share, use and sync across projects and apps to build faster and reuse more code. Give it a try. + +- [**Bit - Share and build with code components**: Bit helps you share, discover and use code components between projects and applications to build new features and...](https://bitsrc.io "https://bitsrc.io") + +* * * + +### How Does Synchronous JavaScript Work? + +Before we dive into asynchronous JavaScript, let’s first understand how the synchronous JavaScript code executes inside the JavaScript engine. For example: + +``` +const second = () => { + console.log('Hello there!'); +} + +const first = () => { + console.log('Hi there!'); + second(); + console.log('The End'); +} + +first(); +``` + +To understand how the above code executes inside the JavaScript engine, we have to understand the concept of the execution context and the call stack (also known as execution stack). + +#### Execution Context + +An Execution Context is an abstract concept of an environment where the JavaScript code is evaluated and executed. Whenever any code is run in JavaScript, it’s run inside an execution context. + +The function code executes inside the function execution context, and the global code executes inside the global execution context. Each function has its own execution context. + +#### Call Stack + +The call stack as its name implies is a stack with a LIFO (Last in, First out) structure, which is used to store all the execution context created during the code execution. + +JavaScript has a single call stack because it’s a single-threaded programming language. 
The call stack has a LIFO structure, which means that items can be added to or removed from the top of the stack only.

Let’s get back to the above code snippet and try to understand how the code executes inside the JavaScript engine.

```
const second = () => {
  console.log('Hello there!');
}

const first = () => {
  console.log('Hi there!');
  second();
  console.log('The End');
}

first();
```

![](https://cdn-images-1.medium.com/max/1000/1*DkG1a8f7rdl0GxM0ly4P7w.png)

Call Stack for the above code

#### So What’s Happening Here?

When this code is executed, a global execution context is created (represented by `main()`) and pushed to the top of the call stack. When a call to `first()` is encountered, it’s pushed to the top of the stack.

Next, `console.log('Hi there!')` is pushed to the top of the stack; when it finishes, it’s popped off the stack. After that, we call `second()`, so the `second()` function is pushed to the top of the stack.

`console.log('Hello there!')` is pushed to the top of the stack and popped off the stack when it finishes. The `second()` function finishes, so it’s popped off the stack.

`console.log('The End')` is pushed to the top of the stack and removed when it finishes. After that, the `first()` function completes, so it’s removed from the stack.

The program completes its execution at this point, so the global execution context (`main()`) is popped off the stack.

### How Does Asynchronous JavaScript Work?

Now that we have a basic idea about the call stack and how synchronous JavaScript works, let’s get back to asynchronous JavaScript.

#### What is Blocking?

Let’s suppose we are doing image processing or a network request in a synchronous way. For example:

```
const processImage = (image) => {
  /**
  * doing some operations on image
  **/
  console.log('Image processed');
}

const networkRequest = (url) => {
  /**
  * requesting network resource
  **/
  return someData;
}

const greeting = () => {
  console.log('Hello World');
}

processImage('logo.jpg');
networkRequest('www.somerandomurl.com');
greeting();
```

Image processing and network requests take time. So when the `processImage()` function is called, it’s going to take some time, depending on the size of the image.

When the `processImage()` function completes, it’s removed from the stack. After that, the `networkRequest()` function is called and pushed to the stack. Again, it’s going to take some time to finish execution.

Finally, when the `networkRequest()` function completes, the `greeting()` function is called. Since it contains only a `console.log` statement, and `console.log` statements are generally fast, it is executed and returns immediately.

So you see, we have to wait until the function (such as `processImage()` or `networkRequest()`) has finished. This means these functions are blocking the call stack, or main thread. So we can’t perform any other operation while the above code is executing, which is not ideal.

#### So what’s the solution?

The simplest solution is asynchronous callbacks. We use asynchronous callbacks to make our code non-blocking. For example:

```
const networkRequest = () => {
  setTimeout(() => {
    console.log('Async Code');
  }, 2000);
};

console.log('Hello World');

networkRequest();
```

Here I have used the `setTimeout` method to simulate a network request.
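As a side note, if this were a real network call rather than a simulated one, the same non-blocking idea might look like the sketch below, which uses the browser’s `fetch` API and Promises (Promises are covered later in this article). The URL is just a placeholder, not a working endpoint.

```
// The URL below is a placeholder, not a real endpoint.
fetch('https://api.example.com/data')
  .then(response => response.json())
  .then(data => console.log('Got the data:', data))
  .catch(error => console.log('Request failed:', error));

console.log('This line runs before the response arrives');
```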
Please keep in mind that `setTimeout` is not a part of the JavaScript engine; it’s a part of what are known as the web APIs (in browsers) and the C/C++ APIs (in Node.js).

To understand how this code is executed, we have to understand a few more concepts, such as the event loop and the callback queue (also known as the task queue or the message queue).

![](https://cdn-images-1.medium.com/max/800/1*O_H6XRaDX9FaC4Q9viiRAA.png)

An Overview of JavaScript Runtime Environment

The **event loop**, the **web APIs** and the **message queue**/**task queue** are not part of the JavaScript engine; they are part of the browser’s JavaScript runtime environment or the Node.js JavaScript runtime environment. In Node.js, the web APIs are replaced by the C/C++ APIs.

Now let’s get back to the above code and see how it’s executed in an asynchronous way.

```
const networkRequest = () => {
  setTimeout(() => {
    console.log('Async Code');
  }, 2000);
};

console.log('Hello World');

networkRequest();

console.log('The End');
```

![](https://cdn-images-1.medium.com/max/800/1*sOz5cj-_Jjv23njWg_-uGA.gif)

Event Loop

When the above code loads in the browser, `console.log('Hello World')` is pushed to the stack and popped off the stack after it’s finished. Next, a call to `networkRequest()` is encountered, so it’s pushed to the top of the stack.

Next, the `setTimeout()` function is called, so it’s pushed to the top of the stack. `setTimeout()` has two arguments: 1) a callback and 2) a time in milliseconds (ms).

The `setTimeout()` method starts a timer of `2s` in the web APIs environment. At this point, `setTimeout()` has finished and it’s popped off the stack. After that, `console.log('The End')` is pushed to the stack, executed and removed from the stack after its completion.

Meanwhile, the timer expires, and the callback is pushed to the **message queue**. But the callback is not immediately executed, and that’s where the event loop kicks in.

#### The Event Loop

The job of the event loop is to look into the call stack and determine if the call stack is empty or not. If the call stack is empty, it looks into the message queue to see if there’s any pending callback waiting to be executed.

In this case, the message queue contains one callback, and the call stack is empty at this point. So the event loop pushes the callback to the top of the stack.

After that, `console.log('Async Code')` is pushed to the top of the stack, executed and popped off the stack. At this point the callback has finished, so it’s removed from the stack and the program finally finishes.

#### DOM Events

The **message queue** also contains the callbacks from DOM events such as click events and keyboard events. For example:

```
document.querySelector('.btn').addEventListener('click',(event) => {
  console.log('Button Clicked');
});
```

In the case of DOM events, the event listener sits in the web APIs environment waiting for a certain event (a click event in this case) to happen, and when that event happens, the callback function is placed in the message queue, waiting to be executed.

Again, the event loop checks whether the call stack is empty and, if it is, pushes the event callback onto the stack, where it is executed.

We have now seen how asynchronous callbacks and DOM events are executed: both use the message queue to store the callbacks waiting to be executed.
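A quick way to see this behaviour for yourself is the snippet below: even with a delay of `0` ms, the callback has to wait in the message queue until the call stack is empty.

```
console.log('start');

setTimeout(() => console.log('callback'), 0); // waits in the message queue

// Keep the call stack busy for a moment
for (let i = 0; i < 100000000; i++) {}

console.log('end');

// Prints: start, end, callback
```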
+ +#### ES6 Job Queue/ Micro-Task queue + +ES6 introduced the concept of job queue/micro-task queue which is used by Promises in JavaScript. The difference between the message queue and the job queue is that the job queue has a higher priority than the message queue, which means that promise jobs inside the job queue/ micro-task queue will be executed before the callbacks inside the message queue. + +For example: + +``` +console.log('Script start'); + +setTimeout(() => { + console.log('setTimeout'); +}, 0); + +new Promise((resolve, reject) => { + resolve('Promise resolved'); + }).then(res => console.log(res)) + .catch(err => console.log(err)); + +console.log('Script End'); +``` + +Output: + +``` +Script start +Script End +Promise resolved +setTimeout +``` + +We can see that the promise is executed before the `setTimeout`, because promise response are stored inside the micro-task queue which has a higher priority than the message queue. + +Let’s take another example, this time with two promises and two setTimeout. For example: + +``` +console.log('Script start'); + +setTimeout(() => { + console.log('setTimeout 1'); +}, 0); + +setTimeout(() => { + console.log('setTimeout 2'); +}, 0); + +new Promise((resolve, reject) => { + resolve('Promise 1 resolved'); + }).then(res => console.log(res)) + .catch(err => console.log(err)); + +new Promise((resolve, reject) => { + resolve('Promise 2 resolved'); + }).then(res => console.log(res)) + .catch(err => console.log(err)); + +console.log('Script End'); +``` + +This prints: + +``` +Script start +Script End +Promise 1 resolved +Promise 2 resolved +setTimeout 1 +setTimeout 2 +``` + +We can see that the two promises are executed before the callbacks in the `setTimeout` because the event loop prioritizes the tasks in micro-task queue over the tasks in message queue/task queue. + +While the event loop is executing the tasks in the micro-task queue and in that time if another promise is resolved, it will be added to the end of the same micro-task queue, and it will be executed before the callbacks inside the message queue no matter for how much time the callback is waiting to be executed. + +For example: + +``` +console.log('Script start'); + +setTimeout(() => { + console.log('setTimeout'); +}, 0); + +new Promise((resolve, reject) => { + resolve('Promise 1 resolved'); + }).then(res => console.log(res)); + +new Promise((resolve, reject) => { + resolve('Promise 2 resolved'); + }).then(res => { + console.log(res); + return new Promise((resolve, reject) => { + resolve('Promise 3 resolved'); + }) + }).then(res => console.log(res)); + +console.log('Script End'); +``` + +This prints: + +``` +Script start +Script End +Promise 1 resolved +Promise 2 resolved +Promise 3 resolved +setTimeout +``` + +So all the tasks in micro-task queue will be executed before the tasks in message queue. That is, the event loop will first empty the micro-task queue before executing any callback in the message queue. + +### Conclusion + +So we have learned how asynchronous JavaScript works and other concepts such as call stack, event loop, message queue/task queue and job queue/micro-task queue which together make the JavaScript runtime environment. 
While it’s not necessary that you learn all these concepts to be an awesome JavaScript developer, but it’s helpful to know these concepts :) + +That’s it and if you found this article helpful, please click the clap 👏button, you can also follow me on [Medium](https://medium.com/@Sukhjinder) and [Twitter](https://twitter.com/sukhjinder_95), and if you have any doubt, feel free to comment! I’d be happy to help :) + +> 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 + + +--- + +> [掘金翻译计划](https://github.com/xitu/gold-miner) 是一个翻译优质互联网技术文章的社区,文章来源为 [掘金](https://juejin.im) 上的英文分享文章。内容覆盖 [Android](https://github.com/xitu/gold-miner#android)、[iOS](https://github.com/xitu/gold-miner#ios)、[前端](https://github.com/xitu/gold-miner#前端)、[后端](https://github.com/xitu/gold-miner#后端)、[区块链](https://github.com/xitu/gold-miner#区块链)、[产品](https://github.com/xitu/gold-miner#产品)、[设计](https://github.com/xitu/gold-miner#设计)、[人工智能](https://github.com/xitu/gold-miner#人工智能)等领域,想要查看更多优质译文请持续关注 [掘金翻译计划](https://github.com/xitu/gold-miner)、[官方微博](http://weibo.com/juejinfanyi)、[知乎专栏](https://zhuanlan.zhihu.com/juejinfanyi)。 From bb0ae68aaf1a86b9c6d39b69746f380c664c7da6 Mon Sep 17 00:00:00 2001 From: Rickon Date: Fri, 4 Jan 2019 19:22:06 +0800 Subject: [PATCH 19/54] =?UTF-8?q?Android=20=E4=B8=8A=E4=B8=80=E6=AC=A1?= =?UTF-8?q?=E7=BC=96=E5=86=99=EF=BC=8C=E5=88=B0=E5=A4=84=E6=B5=8B=E8=AF=95?= =?UTF-8?q?=20(#4916)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Android 上一次编写,到处测试 Android 上一次编写,到处测试 * 根据校对意见进一步修改译文 * Update write-once-run-everywhere-tests-on-android.md --- ...te-once-run-everywhere-tests-on-android.md | 38 +++++++++---------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/TODO1/write-once-run-everywhere-tests-on-android.md b/TODO1/write-once-run-everywhere-tests-on-android.md index 39ab80b60bd..84e2ac01e9f 100644 --- a/TODO1/write-once-run-everywhere-tests-on-android.md +++ b/TODO1/write-once-run-everywhere-tests-on-android.md @@ -2,18 +2,18 @@ > * 原文作者:[Jonathan Gerrish](https://medium.com/@jongerrish?source=post_header_lockup) > * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) > * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/write-once-run-everywhere-tests-on-android.md](https://github.com/xitu/gold-miner/blob/master/TODO1/write-once-run-everywhere-tests-on-android.md) -> * 译者: -> * 校对者: +> * 译者:[Rickon](https://github.com/gs666) +> * 校对者:[xiaxiayang](https://github.com/xiaxiayang) -# Write Once, Run Everywhere Tests on Android +# Android 上一次编写,随处测试 ![](https://cdn-images-1.medium.com/max/800/1*xNQHxXBX-1RQCPM3LYa3wA.png) -At Google I/O this year, we launched AndroidX Test, part of [Jetpack](https://developer.android.com/jetpack/). Today we’re happy to announce the release of [v1.0.0](https://developer.android.com/training/testing/release-notes) Final alongside Robolectric v4.0. As part of the 1.0.0 release, all of AndroidX Test is now [open source](https://github.com/android/android-test). +在今年的 Google I/O 大会上,我们推出了 AndroidX Test,作为 [Jetpack](https://developer.android.com/jetpack/) 的一部分。今天,我们很高兴地宣布 [v1.0.0](https://developer.android.com/training/testing/release-notes) 最终版本和 Robolectric v4.0 一起发布。作为 1.0.0 版本的一部分,所有 AndroidX Test 现在都是[开源的](https://github.com/android/android-test)。 -AndroidX Test provides common test APIs across test environments including instrumentation and Robolectric tests. 
It includes the existing Android JUnit 4 support, the Espresso view interaction library, and several new key testing APIs. These APIs are available for instrumentation tests on real and virtual devices. As of Robolectric 4.0, they are available for local JVM tests, too. +AndroidX Test 提供了跨测试环境的通用测试 APIs,包括仪器测试和 Robolectric 测试。它包括现有的 Android JUnit 4 支持,Espresso 视图交互库和几个新的密钥测试 APIs。这些 APIs 可用于在真实和虚拟设备上进行仪器测试。从 Robolectric 4.0 开始,它们也可用于本地 JVM 测试。 -Consider the following use case where we launch the login screen, enter a valid username and password, and make sure we’re taken to the home screen. +考虑以下使用情形,我们启动登录页面,输入正确的用户名和密码,并确保进入主屏幕。 ``` @RunWith(AndroidJUnit4::class) @@ -37,17 +37,17 @@ class LoginActivityTest { } ``` -Lets step through the test: +让我们逐步完成测试: -1. We use the new [ActivityScenario](https://developer.android.com/reference/androidx/test/core/app/ActivityScenario) API to launch the LoginActivity. This creates the activity and brings it to the resumed state, where it is visible to the user and ready for input. ActivityScenario handles all the synchronization with the system and provides support for common scenarios you should be testing such as how your app handles being destroyed and recreated by the system. +1. 我们使用新的 [ActivityScenario](https://developer.android.com/reference/androidx/test/core/app/ActivityScenario) API 来启动 LoginActivity。它将会创建一个 activity,并进入用户可见并能够输入的 resumed 状态。ActivityScenario 处理与系统的所有同步,并为你应测试的常见场景提供支持,例如你的应用如何处理被系统销毁和重建。 -2. We use the Espresso view interaction library to enter text into two text fields and click a button in the UI. Similar to ActivityScenario, Espresso handles multi-threading and synchronization for you and surfaces a readable and fluent API to author tests with. +2. 我们使用 Espresso 视图交互库将文本输入到两个文本字段中,然后点击 UI 中的按钮。与 ActivityScenario 类似,Espresso 为你处理多线程和同步,并提供可读且流畅的 API 以创建测试。 -3. We use the new [Intents.getIntents()](https://developer.android.com/reference/androidx/test/espresso/intent/Intents.html#getIntents%28%29) Espresso API that returns a list of captured intents. We then verify the captured intents using IntentSubject.assertThat(), part of the new Android Truth extensions. The Android Truth extension provides an expressive and readable API to validate states of fundamental Android framework objects. +3. 我们使用新的 [Intents.getIntents()](https://developer.android.com/reference/androidx/test/espresso/intent/Intents.html#getIntents%28%29) Espresso API 来返回捕获的意图列表。然后,我们使用 IntentSubject.assertThat() 验证捕获的意图,这是新的 Android Truth 扩展框架的一部分。Android Truth 扩展框架提供了一个富有表现力和可读性的 API 来验证基本 Android 框架对象的状态。 -This test can run on a local JVM using Robolectric or any physical or virtual device. +这个测试可以在使用 Robolectric 或任何真实或虚拟设备的本地 JVM 上运行。 -To run it on an Android device, place it in your “androidTest” source root along with the following dependencies: +要在 Android 设备上运行它,请将它与以下依赖项一起放在 “androidTest” 资源根目录中: ``` androidTestImplementation(“androidx.test:runner:1.1.0”) @@ -57,9 +57,9 @@ androidTestImplementation(“androidx.test.espresso:espresso-core:3.1.0”) androidTestImplementation(“androidx.test.ext:truth:1.0.0”) ``` -Running on a physical or virtual device gives you confidence that your code interacts with the Android system correctly. As you scale up the number of test cases, however, you start to sacrifice test execution time. You may decide to only run a few larger tests on a real device while running a large number of smaller unit tests on a simulator, such as Robolectric, which can run tests more quickly on a local JVM. 
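作为补充,下面是一个极简的示意(并非原文中的代码):它利用 `ActivityScenario.recreate()` 验证界面在被系统销毁并重建后仍保留用户输入。其中 `LoginActivity` 与 `R.id.user_name` 等名称均为假设,请以你自己工程中的类名和视图 ID 为准;同一个测试无需改动即可作为设备测试运行,或在本地 JVM 上借助 Robolectric 运行。

```
import androidx.test.core.app.ActivityScenario
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.typeText
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.espresso.matcher.ViewMatchers.withText
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class LoginRecreationTest {

    @Test
    fun userInputSurvivesRecreation() {
        // 启动 Activity(LoginActivity 为假设的类名)
        val scenario = ActivityScenario.launch(LoginActivity::class.java)

        // 在用户名输入框中输入文本(R.id.user_name 为假设的视图 ID)
        onView(withId(R.id.user_name)).perform(typeText("test_user"))

        // 模拟系统销毁并重建 Activity
        scenario.recreate()

        // 重建完成后,带 ID 的标准输入控件会通过实例状态恢复内容
        onView(withId(R.id.user_name)).check(matches(withText("test_user")))
    }
}
```

把这个测试放在 `androidTest` 还是 `test` 资源根目录,只取决于你想在真机或虚拟设备上运行它,还是在本地 JVM 上通过 Robolectric 运行它,测试代码本身保持不变。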
+在真实或虚拟设备上运行可让你确信你的代码可以正确地与 Android 系统进行交互。但是,随着测试用例数量的增加,你开始牺牲测试执行时间。你可能决定只在真机上运行一些较大的测试,同时在模拟器上运行大量较小的单元测试,比如 Robolectric,它可以在本地 JVM 上更快地运行测试。 -To run the tests on a local JVM using the Robolectric simulator place the test in the “test” source root, adding the following lines to your gradle.build: +要使用 Robolectric 模拟器在本地 JVM 上运行测试用例,请将测试用例放在 “test” 资源根目录中,将以下代码添加到 gradle.build: ``` testImplementation(“androidx.test:runner:1.1.0”) @@ -74,15 +74,15 @@ android { } ``` -The unification of testing apis between simulators and instrumentation opens up a lot of exciting possibilities! Project Nitrogen, which we also announced at Google I/O, will allow you to seamlessly move tests between runtime environments. This means that you will be able to take tests written against the new AndroidX Test APIs and run them on a local JVM, real or virtual device, or even a cloud based testing platform such as Firebase Test Lab. We are very excited by the opportunities this will provide developers to get fast, accurate, and actionable feedback on the quality of their applications. +模拟器和仪器之间测试 apis 的统一提供了许多令人兴奋的可能性!我们在 Google I / O 上发布的 Nitrogen 项目将允许你在运行时环境之间无缝地切换测试。这意味着你将能够采用针对新的 AndroidX Test APIs 编写的测试用例,并在本地 JVM、真实或虚拟设备、甚至基于云的测试平台(如 Firebase 测试实验室)上运行它们。我们非常高兴有机会为开发人员提供有关其应用程序质量的快速、准确和可操作的反馈。 -Finally, we are happy to announce that all AndroidX components are fully [open sourced](https://github.com/android/android-test) and we look forward to welcoming your contributions. +最后,我们很高兴的宣布所有的 AndroidX 组件是完全 [开源](https://github.com/android/android-test) 的,我们期待着你的贡献。 -### Read more +### 了解更多 -Documentation: [https://developer.android.com/testing](https://developer.android.com/testing) +文档:[https://developer.android.com/testing](https://developer.android.com/testing) -Release notes: +版本注释: * AndroidX Test: [https://developer.android.com/training/testing/release-notes](https://developer.android.com/training/testing/release-notes) * Robolectric: [https://github.com/robolectric/robolectric/releases/](https://github.com/robolectric/robolectric/releases/) From 7bbe49c4dde51195d33fbb9f5c10fbf30d9d7fe9 Mon Sep 17 00:00:00 2001 From: HeHuiQiang Date: Fri, 4 Jan 2019 19:57:09 +0800 Subject: [PATCH 20/54] =?UTF-8?q?=E7=90=86=E8=A7=A3=E5=BC=82=E6=AD=A5=20Ja?= =?UTF-8?q?vaScript=20(#4944)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * 理解异步 JavaScript * 根据校验者的建议修改了部分内容,将部分翻译不实用自己的理解描述 * 再次修改了部分排版错误 * 将 js 改成 JavaScript * 将照片来源进行英译 * 译文格式修改 * Update understanding-asynchronous-javascript-the-event-loop.md --- ...-asynchronous-javascript-the-event-loop.md | 271 ++++++++---------- 1 file changed, 122 insertions(+), 149 deletions(-) diff --git a/TODO1/understanding-asynchronous-javascript-the-event-loop.md b/TODO1/understanding-asynchronous-javascript-the-event-loop.md index 2111104f79d..d21d9cca6aa 100644 --- a/TODO1/understanding-asynchronous-javascript-the-event-loop.md +++ b/TODO1/understanding-asynchronous-javascript-the-event-loop.md @@ -2,239 +2,219 @@ > * 原文作者:[Sukhjinder Arora](https://blog.bitsrc.io/@Sukhjinder?source=post_header_lockup) > * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) > * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/understanding-asynchronous-javascript-the-event-loop.md](https://github.com/xitu/gold-miner/blob/master/TODO1/understanding-asynchronous-javascript-the-event-loop.md) -> * 译者: -> * 校对者: +> * 译者:[H246802](https://github.com/H246802) +> * 校对者:[ElizurHz](https://github.com/ElizurHz), [Yangfan2016](https://github.com/Yangfan2016) 
-# Understanding Asynchronous JavaScript +# 理解异步 JavaScript -Learn How JavaScript Works +学习 JavaScript 是怎么工作的 ![](https://cdn-images-1.medium.com/max/2000/0*wO-kYdN93deiT0U9) -Photo by [Sean Lim](https://unsplash.com/@sean1188?utm_source=medium&utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&utm_medium=referral) +照片来自 [Unsplash](https://unsplash.com?utm_source=medium&utm_medium=referral) 的作者 [Sean Lim](https://unsplash.com/@sean1188?utm_source=medium&utm_medium=referral) -JavaScript is a single-threaded programming language which means only one thing can happen at a time. That is, the JavaScript engine can only process one statement at a time in a single thread. +JavaScript 是一种单线程编程语言,这意味着同一时间只能完成一件事情。也就是说,JavaScript 引擎只能在单一线程中处理一次语句。 -While the single-threaded languages simplify writing code because you don’t have to worry about the concurrency issues, this also means you can’t perform long operations such as network access without blocking the main thread. +单线程语言简化了代码编写,因为你不必担心并发问题,但这也意味着你无法在不阻塞主线程的情况下执行网络请求等长时间操作。 -Imagine requesting some data from an API. Depending upon the situation the server might take some time to process the request while blocking the main thread making the web page unresponsive. +想象一下从 API 中请求一些数据。根据情况,服务器可能需要一些时间来处理请求,同时阻塞主线程,让网页无法响应。 -That’s where asynchronous JavaScript comes into play. Using asynchronous JavaScript (such as callbacks, promises, and async/await), you can perform long network requests without blocking the main thread. +这也就是异步 JavaScript 的美妙之处了。使用异步 JavaScript(例如回调,Promise 或者 async/await),你可以执行长时间网络请求同时不会阻塞主线程。 -While it’s not necessary that you learn all these concepts to be an awesome JavaScript developer, it’s helpful to know :) +虽然您没有必要将所有这些概念都学会成为一名出色的 JavaScript 开发人员,但了解这些对你会很有帮助 :) -So without further ado, Let’s get started :) +所以不用多说了,让我们开始吧! -**Tip**: Using [**Bit**](https://github.com/teambit/bit) you can turn any JS code into an API you can share, use and sync across projects and apps to build faster and reuse more code. Give it a try. +### 同步 JavaScript 如何工作? -- [**Bit - Share and build with code components**: Bit helps you share, discover and use code components between projects and applications to build new features and...](https://bitsrc.io "https://bitsrc.io") +在深入研究异步 JavaScript 之前,让我们首先了解同步 JavaScript 代码在 JavaScript 引擎中的执行情况。例如: -* * * - -### How Does Synchronous JavaScript Work? - -Before we dive into asynchronous JavaScript, let’s first understand how the synchronous JavaScript code executes inside the JavaScript engine. For example: - -``` +```JavaScript const second = () => { console.log('Hello there!'); } - const first = () => { console.log('Hi there!'); second(); console.log('The End'); } - first(); ``` -To understand how the above code executes inside the JavaScript engine, we have to understand the concept of the execution context and the call stack (also known as execution stack). +要理解上述代码在 JavaScript 引擎中的执行方式,我们必须理解执行上下文和调用栈(也称为执行栈)的概念。 -#### Execution Context +#### 执行上下文 -An Execution Context is an abstract concept of an environment where the JavaScript code is evaluated and executed. Whenever any code is run in JavaScript, it’s run inside an execution context. +执行上下文是评估和执行 JavaScript 代码的环境的抽象概念。每当在 JavaScript 中运行任何代码时,它都在执行上下文中运行。 -The function code executes inside the function execution context, and the global code executes inside the global execution context. Each function has its own execution context. 
+函数代码在函数执行上下文中执行,全局代码在全局执行上下文中执行。每个函数都有自己的执行上下文。 -#### Call Stack +#### 调用栈 -The call stack as its name implies is a stack with a LIFO (Last in, First out) structure, which is used to store all the execution context created during the code execution. +顾名思义,调用栈是一个具有 LIFO(后进先出)结构的栈,用于存储代码执行期间创建的所有执行上下文。 -JavaScript has a single call stack because it’s a single-threaded programming language. The call stack has a LIFO structure which means that the items can be added or removed from the top of the stack only. +JavaScript 有一个单独的调用栈,因为它是一种单线程编程语言。调用栈具有 LIFO 结构,这意味着只能从调用栈顶部添加或删除元素。 -Let’s get back to the above code snippet and try to understand how the code executes inside the JavaScript engine. +让我们回到上面的代码片段以便尝试理解代码在 JavaScript 引擎中的执行方式。 -``` +```JavaScript const second = () => { console.log('Hello there!'); } - const first = () => { console.log('Hi there!'); second(); console.log('The End'); } - first(); ``` -![](https://cdn-images-1.medium.com/max/1000/1*DkG1a8f7rdl0GxM0ly4P7w.png) +![image](https://cdn-images-1.medium.com/max/1240/1*DkG1a8f7rdl0GxM0ly4P7w.png) -Call Stack for the above code +

上述代码的调用栈工作情况

-#### So What’s Happening Here? +#### 这过程发生了什么呢? -When this code is executed, a global execution context is created (represented by `main()`) and pushed to the top of the call stack. When a call to `first()` is encountered, it’s pushed to the top of the stack. +当代码执行的时候,会创建一个全局执行上下文(由 `main()` 表示)并将其推到执行栈的顶部。当对 `first()` 函数调用时,它会被推送的栈的顶部。 -Next, `console.log('Hi there!')` is pushed to the top of the stack, when it finishes, it’s popped off from the stack. After it, we call `second()`, so the `second()` function is pushed to the top of the stack. +接下来,`console.log('Hi there!')` 被推到调用栈的顶部,当它执行完成后,它会从调用栈中弹出。在它之后,我们调用 `second()`,因此 `second()` 函数被推送到调用栈的顶部。 -`console.log('Hello there!')` is pushed to the top of the stack and popped off the stack when it finishes. The `second()` function finishes, so it’s popped off the stack. +`console.log('Hello there!')` 被推到调用栈顶部并在完成后从调用栈中弹出。`second()` 函数执行完成,接着它从调用栈中弹出。 -`console.log(‘The End’)` is pushed to the top of the stack and removed when it finishes. After it, the `first()` function completes, so it’s removed from the stack. +`console.log('The End')` 被推到调用栈顶部并在完成后被删除。之后,`first()` 函数执行完成,因此它从调用栈中删除。 -The program completes its execution at this point, so the global execution context(`main()`) is popped off from the stack. +程序此时完成其执行,因此从调用栈中弹出全局执行上下文(`main()`)。 -### How Does Asynchronous JavaScript Work? +### 异步 JavaScript 如何工作? -Now that we have a basic idea about the call stack, and how the synchronous JavaScript works, let’s get back to the asynchronous JavaScript. +现在我们已经了解了相关调用栈的基本概念,以及同步 JavaScript 的工作原理,现在让我们回到异步 JavaScript。 -#### What is Blocking? +#### 什么是阻塞? -Let’s suppose we are doing an image processing or a network request in a synchronous way. For example: +假设我们正在以同步方式进行图像处理或网络请求。例如: -``` +```JavaScript const processImage = (image) => { /** - * doing some operations on image + * 对图像进行一些操作 **/ console.log('Image processed'); } - const networkRequest = (url) => { /** - * requesting network resource + * 请求网络资源 **/ return someData; } - const greeting = () => { console.log('Hello World'); } - processImage(logo.jpg); networkRequest('www.somerandomurl.com'); greeting(); ``` -Doing image processing and network request takes time. So when `processImage()` function is called, it’s going to take some time depending on the size of the image. +进行图像处理和网络请求都需要时间。因此,当 `processImage()` 函数调用时需要一些时间,具体多少时间根据图像的大小决定。 -When the `processImage()` function completes, it’s removed from the stack. After that the `networkRequest()` function is called and pushed to the stack. Again it’s also going to take some time to finish execution. +当 `processImage()` 函数完成时,它将从调用栈中删除。之后调用 `networkRequest()` 函数并将其推送到执行栈。同样,它还需要一些时间才能完成执行。 -At last when the `networkRequest()` function completes, `greeting()` function is called and since it contains only a `console.log` statement and `console.log` statements are generally fast, so the `greeting()` function is immediately executed and returned. +最后,当 `networkRequest()` 函数完成时,调用 `greeting()` 函数,因为它只包含 `console.log` 语句,而 `console.log` 语句通常很快,所以 `greeting()` 函数会立即执行并返回。 -So you see, we have to wait until the function (such as `processImage()` or `networkRequest()`) has finished. This means these functions are blocking the call stack or main thread. So we can’t perform any other operation while the above code is executing which is not ideal. +所以你可以看到,我们必须等到函数(例如 `processImage()` 或 `networkRequest()`)完成。这也就意味着这些函数阻塞了调用栈或主线程。因此,在执行上述代码时,我们无法执行任何其他操作,这是不理想的。 -#### So what’s the solution? +#### 那么解决方案是什么? 
-The simplest solution is asynchronous callbacks. We use asynchronous callbacks to make our code non-blocking. For example: +最简单的解决办法是异步回调,我们通常使用异步回调来让代码无阻塞。例如: -``` +```JavaScript const networkRequest = () => { setTimeout(() => { console.log('Async Code'); }, 2000); }; - console.log('Hello World'); - networkRequest(); ``` -Here I have used `setTimeout` method to simulate the network request. Please keep in mind that the `setTimeout` is not a part of the JavaScript engine, it’s a part of something known as web APIs (in browsers) and C/C++ APIs (in node.js). +这里我使用了 `setTimeout` 方法来模拟网络请求。请记住,`setTimeout` 不是 JavaScript 引擎的一部分,它是 Web APIs(在浏览器中)和 C/C++ APIs(在 node.js 中)的一部分。 -To understand how this code is executed we have to understand a few more concepts such event loop and the callback queue (also known as task queue or the message queue). +要了解如何执行此代码,我们必须了解一些其他概念,例如事件循环和回调队列(也称为任务队列或消息队列)。 -![](https://cdn-images-1.medium.com/max/800/1*O_H6XRaDX9FaC4Q9viiRAA.png) +![image](https://cdn-images-1.medium.com/max/992/1*O_H6XRaDX9FaC4Q9viiRAA.png) -An Overview of JavaScript Runtime Environment +

JavaScript 运行时环境概述

-The **event loop**, the **web APIs** and the **message queue**/**task queue** are not part of the JavaScript engine, it’s a part of browser’s JavaScript runtime environment or Nodejs JavaScript runtime environment (in case of Nodejs). In Nodejs, the web APIs are replaced by the C/C++ APIs. +**事件循环**,**Web APIs** 和 **消息队列/任务队列** 不是 JavaScript 引擎的一部分,它是浏览器的 JavaScript 运行所处环境或 Nodejs JavaScript 运行所处环境中的一部分(在 Nodejs 的环境下)。在 Nodejs 中,Web APIs 被 C/C++ APIs 取代。 -Now let’s get back to the above code and see how it’s executed in an asynchronous way. +现在让我们回过头看看上面的代码,看看它是如何以异步方式执行的。 -``` +```JavaScript const networkRequest = () => { setTimeout(() => { console.log('Async Code'); }, 2000); }; - console.log('Hello World'); - networkRequest(); - console.log('The End'); ``` -![](https://cdn-images-1.medium.com/max/800/1*sOz5cj-_Jjv23njWg_-uGA.gif) - -Event Loop +![image](https://cdn-images-1.medium.com/max/992/1*sOz5cj-_Jjv23njWg_-uGA.gif)) -When the above code loads in the browser, the `console.log(‘Hello World’)` is pushed to the stack and popped off the stack after it’s finished. Next, a call to `networkRequest()` is encountered, so it’s pushed to the top of the stack. +

Event Loop(事件循环)

-Next `setTimeout()` function is called, so it’s pushed to the top of the stack. The `setTimeout()` has two arguments: 1) callback and 2) time in milliseconds (ms). +当上面的代码在浏览器中运行时,`console.log('Hello World')` 被推送到栈,在执行完成后从栈中弹出。紧接着,遇到 `networkRequest()` 的执行,因此将其推送到栈顶部。 -The `setTimeout()` method starts a timer of `2s` in the web APIs environment. At this point, the `setTimeout()` has finished and it’s popped off from the stack. After it, `console.log('The End')` is pushed to the stack, executed and removed from the stack after its completion. +接下来调用 `setTimeout()` 函数,因此将其推送到栈顶部。`setTimeout()` 有两个参数:1) 回调和 2) 以毫秒(ms)为单位的时间。 + +`setTimeout()` 方法在 Web APIs 环境中启动 `2s` 的计时器。此时,`setTimeout()` 已完成,并从调用栈中弹出。在它之后,`console.log('The End')` 被推送到栈,在执行完成后从调用栈中删除。 + + 同时,计时器已到期,现在回调函数被推送到**消息队列**。但回调函数并没有立即执行,而这就是形成了一个事件循环(Event Loop)。 -Meanwhile, the timer has expired, now the callback is pushed to the **message queue**. But the callback is not immediately executed, and that’s where the event loop kicks in. + #### 事件循环 -#### The Event Loop +事件循环的作用是查看调用栈并确定调用栈是否为空。如果调用栈为空,它会查看消息队列以查看是否有任何挂起的回调等待执行。 -The job of the Event loop is to look into the call stack and determine if the call stack is empty or not. If the call stack is empty, it looks into the message queue to see if there’s any pending callback waiting to be executed. +在这个例子中,消息队列包含一个回调,此时调用栈为空。因此,事件循环(Event Loop)将回调推送到调用栈顶部。 -In this case, the message queue contains one callback, and the call stack is empty at this point. So the Event loop pushes the callback to the top of the stack. +再之后,`console.log('Async Code')` 被推到栈顶部,执行并从调用栈中弹出。此时,回调函数已完成,因此将其从调用栈中删除,程序最终完成。 -After that the `console.log(‘Async Code’)` is pushed to the top of the stack, executed and popped off from the stack. At this point, the callback has finished so it’s removed from the stack and the program finally finishes. +#### DOM 事件 -#### DOM Events +**消息队列**还包含来自 DOM 事件的回调,例如点击事件和键盘事件。 -The **Message queue** also contains the callbacks from the DOM events such as click events and keyboard events. For example: +例如: -``` +```JavaScript document.querySelector('.btn').addEventListener('click',(event) => { console.log('Button Clicked'); }); ``` +在DOM事件的情况下,事件监听器位于 Web APIs 环境中等待某个事件(在这种情况下是点击事件)发生,并且当该事件发生时,则回调函数被放置在等待执行的消息队列中。 -In case of DOM events, the event listener sits in the web APIs environment waiting for a certain event (click event in this case) to happen, and when that event happens, then the callback function is placed in the message queue waiting to be executed. - -Again the event loop checks if the call stack is empty and pushes the event callback to the stack if it’s empty and the callback is executed. +事件循环再次检查调用栈是否为空,如果它为空并且执行了回调,则将事件回调推送到调用栈。 -We have learned how the asynchronous callbacks and DOM events are executed which uses the message queue to store all the callbacks waiting to be executed. +我们已经知道了如何执行异步回调和 DOM 事件,它们使用消息队列来存储等待执行的所有回调。 -#### ES6 Job Queue/ Micro-Task queue +#### ES6 工作队列/微任务队列(Job Queue/ Micro-Task queue) -ES6 introduced the concept of job queue/micro-task queue which is used by Promises in JavaScript. The difference between the message queue and the job queue is that the job queue has a higher priority than the message queue, which means that promise jobs inside the job queue/ micro-task queue will be executed before the callbacks inside the message queue. 
+ES6 引入了 Promises 在 JavaScript 中使用的工作队列/微任务队列的概念。消息队列和微任务队列之间的区别在于工作队列的优先级高于消息队列,这意味着 工作队列/微任务队列中的 promise 工作将在消息队列内的回调之前执行。 -For example: +例如: -``` +```JavaScript console.log('Script start'); - setTimeout(() => { console.log('setTimeout'); }, 0); - new Promise((resolve, reject) => { resolve('Promise resolved'); }).then(res => console.log(res)) .catch(err => console.log(err)); - console.log('Script End'); ``` -Output: +输出: ``` Script start @@ -243,92 +223,85 @@ Promise resolved setTimeout ``` -We can see that the promise is executed before the `setTimeout`, because promise response are stored inside the micro-task queue which has a higher priority than the message queue. +我们可以看到 promise 在 `setTimeout` 之前执行,因为 promise 响应存储在微任务队列中,其优先级高于消息队列。 -Let’s take another example, this time with two promises and two setTimeout. For example: +让我们再看一个例子,这次有两个 promise 和两个 setTimeout。例如: -``` +```JavaScript console.log('Script start'); - -setTimeout(() => { - console.log('setTimeout 1'); +setTimeout(() => { + console.log('setTimeout 1'); }, 0); - -setTimeout(() => { - console.log('setTimeout 2'); +setTimeout(() => { + console.log('setTimeout 2'); }, 0); - -new Promise((resolve, reject) => { - resolve('Promise 1 resolved'); - }).then(res => console.log(res)) +new Promise((resolve, reject) => { + resolve('Promise 1 resolved'); + }).then(res => console.log(res)) .catch(err => console.log(err)); - -new Promise((resolve, reject) => { - resolve('Promise 2 resolved'); - }).then(res => console.log(res)) +new Promise((resolve, reject) => { + resolve('Promise 2 resolved'); + }).then(res => console.log(res)) .catch(err => console.log(err)); - console.log('Script End'); ``` -This prints: +输出: ``` -Script start -Script End -Promise 1 resolved -Promise 2 resolved -setTimeout 1 +Script start +Script End +Promise 1 resolved +Promise 2 resolved +setTimeout 1 setTimeout 2 ``` -We can see that the two promises are executed before the callbacks in the `setTimeout` because the event loop prioritizes the tasks in micro-task queue over the tasks in message queue/task queue. +我们可以看到两个 promise 都在 `setTimeout` 中的回调之前执行,因为事件循环将微任务队列中的任务优先于消息队列/任务队列中的任务。 -While the event loop is executing the tasks in the micro-task queue and in that time if another promise is resolved, it will be added to the end of the same micro-task queue, and it will be executed before the callbacks inside the message queue no matter for how much time the callback is waiting to be executed. 
+当事件循环正在执行微任务队列中的任务时,如果另一个 promise 执行 resolve 方法,那么它将被添加到同一个微任务队列的末尾,并且它将在消息队列的所有回调之前执行,无论消息队列回调等待执行花费了多少时间。 -For example: +例如: -``` +```JavaScript console.log('Script start'); - -setTimeout(() => { - console.log('setTimeout'); +setTimeout(() => { + console.log('setTimeout'); }, 0); - -new Promise((resolve, reject) => { - resolve('Promise 1 resolved'); +new Promise((resolve, reject) => { + resolve('Promise 1 resolved'); }).then(res => console.log(res)); - -new Promise((resolve, reject) => { - resolve('Promise 2 resolved'); - }).then(res => { - console.log(res); - return new Promise((resolve, reject) => { - resolve('Promise 3 resolved'); - }) +new Promise((resolve, reject) => { + resolve('Promise 2 resolved'); + }).then(res => { + console.log(res); + return new Promise((resolve, reject) => { + resolve('Promise 3 resolved'); + }) }).then(res => console.log(res)); - console.log('Script End'); ``` -This prints: +输出: ``` -Script start -Script End -Promise 1 resolved -Promise 2 resolved -Promise 3 resolved +Script start +Script End +Promise 1 resolved +Promise 2 resolved +Promise 3 resolved setTimeout ``` -So all the tasks in micro-task queue will be executed before the tasks in message queue. That is, the event loop will first empty the micro-task queue before executing any callback in the message queue. +因此,微任务队列中的所有任务都将在消息队列中的任务之前执行。也就是说,事件循环将首先在执行消息队列中的任何回调之前清空微任务队列。 + +### 总结 -### Conclusion +因此,我们已经了解了异步 JavaScript 如何工作以及其他概念,例如调用栈,事件循环,消息队列/任务队列和工作队列/微任务队列,它们共同构成了 JavaScript 运行时环境。虽然您没有必要将所有这些概念都学习成为一名出色的 JavaScript 开发人员,但了解这些概念会很有帮助 :) -So we have learned how asynchronous JavaScript works and other concepts such as call stack, event loop, message queue/task queue and job queue/micro-task queue which together make the JavaScript runtime environment. While it’s not necessary that you learn all these concepts to be an awesome JavaScript developer, but it’s helpful to know these concepts :) +**译者注:** -That’s it and if you found this article helpful, please click the clap 👏button, you can also follow me on [Medium](https://medium.com/@Sukhjinder) and [Twitter](https://twitter.com/sukhjinder_95), and if you have any doubt, feel free to comment! 
I’d be happy to help :) +文中工作队列(Job Queue)也就是微任务队列,而消息队列则是指我们通常聊得宏任务队列。 > 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 From 1eb3bf38339b8917433e2e353603c465ef8ae0d3 Mon Sep 17 00:00:00 2001 From: snpmyn <30993415+snpmyn@users.noreply.github.com> Date: Fri, 4 Jan 2019 21:38:17 +0800 Subject: [PATCH 21/54] =?UTF-8?q?=E6=A0=BC=E5=AD=90=E6=8B=BC=E8=B4=B4=20?= =?UTF-8?q?=E2=80=94=20=E5=85=B3=E4=BA=8E=E6=A8=A1=E5=9D=97=E5=8C=96?= =?UTF-8?q?=E7=9A=84=E6=95=85=E4=BA=8B=20(#4932)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Update a-patchwork-plaid-monolith-to-modularized-app.md * Update a-patchwork-plaid-monolith-to-modularized-app.md * Update a-patchwork-plaid-monolith-to-modularized-app.md * Update a-patchwork-plaid-monolith-to-modularized-app.md * Update a-patchwork-plaid-monolith-to-modularized-app.md * Update a-patchwork-plaid-monolith-to-modularized-app.md * Update a-patchwork-plaid-monolith-to-modularized-app.md * Update a-patchwork-plaid-monolith-to-modularized-app.md * Update a-patchwork-plaid-monolith-to-modularized-app.md * Update a-patchwork-plaid-monolith-to-modularized-app.md * Update a-patchwork-plaid-monolith-to-modularized-app.md * Update a-patchwork-plaid-monolith-to-modularized-app.md * Update a-patchwork-plaid-monolith-to-modularized-app.md * Update a-patchwork-plaid-monolith-to-modularized-app.md * Update a-patchwork-plaid-monolith-to-modularized-app.md * Update a-patchwork-plaid-monolith-to-modularized-app.md * Update a-patchwork-plaid-monolith-to-modularized-app.md * Update a-patchwork-plaid-monolith-to-modularized-app.md --- ...hwork-plaid-monolith-to-modularized-app.md | 267 +++++++++--------- 1 file changed, 133 insertions(+), 134 deletions(-) diff --git a/TODO1/a-patchwork-plaid-monolith-to-modularized-app.md b/TODO1/a-patchwork-plaid-monolith-to-modularized-app.md index 78a70c9a4cf..ecc68b3d801 100644 --- a/TODO1/a-patchwork-plaid-monolith-to-modularized-app.md +++ b/TODO1/a-patchwork-plaid-monolith-to-modularized-app.md @@ -2,119 +2,118 @@ > * 原文作者:[Ben Weiss](https://medium.com/@keyboardsurfer?source=post_header_lockup) > * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) > * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/a-patchwork-plaid-monolith-to-modularized-app.md](https://github.com/xitu/gold-miner/blob/master/TODO1/a-patchwork-plaid-monolith-to-modularized-app.md) -> * 译者: -> * 校对者: +> * 译者:[snpmyn](https://github.com/snpmyn) -# Patchwork Plaid — A modularization story +# 格子拼贴 — 关于模块化的故事 ![](https://cdn-images-1.medium.com/max/800/0*7f6VI2TLc-P5iokR) -Illustrated by [Virginia Poltrack](https://twitter.com/VPoltrack) +插图来自 [Virginia Poltrack](https://twitter.com/VPoltrack) -#### _How and why we modularized Plaid and what’s to come_ +#### 我们为什么以及如何进行模块化,模块化后会发生什么? -_This article dives deeper into the modularization portion of_ [_Restitching Plaid_](https://medium.com/@crafty/restitching-plaid-9ca5588d3b0a)_._ +这篇文章深入探讨了 [Restitching Plaid](https://medium.com/@crafty/restitching-plaid-9ca5588d3b0a) 模块化部分。 -In this post I’ll cover how we refactored Plaid away from a monolithic universal application to a modularized app bundle. 
These are some of the benefits we achieved: +在这篇文章中,我将全面介绍如何将一个整体的、庞大的、普通的应用转化为一个模块化应用束。以下是我们已取得的成果: -* more than 60% reduction in install size -* greatly increased code hygiene -* potential for dynamic delivery, shipping code on demand +* 整体体积减少超过 60% +* 极大地增强代码健壮性 +* 支持动态交付、按需打包代码 -During all of this we did not make changes to the user experience. +我们做的所有事情,都不会影响用户体验。 -### A first glance at Plaid +### Plaid 初印象 ![](https://cdn-images-1.medium.com/max/800/1*vVUYtBjOkcvcX13SsMdqnA.gif) -Navigating Plaid +导航 Plaid -Plaid is an application with a delightful UI. Its home screen displays a stream of news items from several sources. -News items can be accessed in more detail, leading to separate screens. -The app also contains “search” functionality and an “about” screen. Based on these existing features we selected several for modularization. +Plaid 是一个具有令人感到愉悦的 UI 的应用。它的主屏幕显示的新闻来自多个来源。 +这些新闻被点击后展示详情,从而出现分屏效果。 +该应用同时具有搜索功能和一个关于模块。基于这些已经存在的特征,我们选择一些进行模块化。 -The news sources, (Designer News and Dribbble), became their own dynamic feature module. The `about` and `search` features also were modularized into dynamic features. +新闻来源(Designer News 和 Dribbble)成为了它自己拥有的动态功能模块。关于和搜索特征同样被模块化为动态功能。 -[Dynamic features](https://developer.android.com/studio/projects/dynamic-delivery) allow code to be shipped without directly including it in the base apk. In consecutive steps this enables feature downloads on demand. +[动态功能](https://developer.android.com/studio/projects/dynamic-delivery)允许在不直接于基础应用包含代码情况下提供代码。正因为此,通过连续步骤可实现按需下载功能。 -### What’s in the box — Plaid’s construction +### 接下来介绍 Plaid 结构 -Like most Android apps, Plaid started out as a single monolithic module built as a universal apk. The install size was just under 7 MB. Much of this data however was never actually used at runtime. +如许多安卓应用一样,Plaid 最初是作为普通应用构建的单一模块。它的安装体积仅 7MB 一下。然而许多数据并未在运行时用到。 -#### Code structure +#### 代码结构 -From a code point of view Plaid had clear boundary definitions through packages. But as it happens with a lot of codebases these boundaries were sometimes crossed and dependencies snuck in. Modularization forces us to be much stricter with these boundaries, improving the separation. +从代码角度来看,Plaid 基于包从而有明确边界定义。但随大量代码库的出现,这些边界会被跨越且依赖会潜入其中。模块化要求我们更加严格地限定这些边界,从而提高和改善代码分离。 -#### Native libraries +#### 本地库 -The biggest chunk of unused data originates in [Bypass](https://github.com/Uncodin/bypass), a library we use to render markdown in Plaid. It includes native libraries for multiple CPU architectures which all end up in the universal apk taking up around 4MB. App bundles enable delivering only the library needed for the device architecture, reducing the required size to around 1MB. +最大未用到的数据块来自 [Bypass](https://github.com/Uncodin/bypass),一个我们用来在 Plaid 呈现标记的库。它包括用于多核 CPU 体系架构的本地库,这些本地库最终在普通应用占大约 4MB 左右。应用束允许仅交付设备架构所需的库,将所需体积减少1MB左右。 -#### Drawable resources +#### 可提取资源 -Many apps use rasterized assets. These are density dependent and commonly account for a huge chunk of an app’s file size. Apps can massively benefit from configuration apks, where each display density is put in a separate apk, allowing for a device tailored installation, also drastically reducing download and size. +许多应用使用栅格化资产。它们与密度有关且通常占应用文件体积很大一部分。应用可从配置应用中受益匪浅,配置应用中每个显示密度都被放在一个独立应用中,允许设备定制安装,也大大减少下载和体积。 -Plaid relies heavily on [vector drawables](https://developer.android.com/guide/topics/graphics/vector-drawable-resources) to display graphical assets. 
Since these are density agnostic and save a lot of file size already the data savings here were not too impactful for us. +Plaid 显示图形资源时,很大程度依赖于 [vector drawables](https://developer.android.com/guide/topics/graphics/vector-drawable-resources)。因这些与密度无关且已保存许多文件,故此处数据节省对我们并非太有影响。 -### Stitching everything together +### 拼贴起来 -During the modularization task, we initially replaced `./gradlew assemble` with `./gradlew bundle`. Instead of producing an Android PacKage (apk), Gradle would now produce an [Android App Bundle](http://g.co/androidappbundle) (aab). An Android App Bundle is required for using the dynamic-feature Gradle plugin, which we’ll cover later on. +在模块化中,我们最初把 `./gradlew assemble` 替换为 `./gradlew bundle`。Gradle 现在将生成一个 [Android App Bundle](http://g.co/androidappbundle)(aab),替换生成应用。一个安卓应用束需用到动态功能 Gradle 插件,我们稍后介绍。 -#### Android App Bundles +#### 安卓应用束 -Instead of a single apk, AABs generate a number of smaller configuration apks. These apks can then be tailored to the user’s device, saving data during delivery and on disk. App bundles are also a prerequisite for dynamic feature modules. +相对单个应用,安卓应用束生成许多小的配置应用。这些应用可根据用户设备定制,从而在发送过程和磁盘上保存数据。应用束也是动态功能模块先决条件。 -Configuration apks are generated by Google Play after the Android App Bundle is uploaded. With [app bundles](http://g.co/androidappbundle) being an [open spec](https://developer.android.com/guide/app-bundle#aab_format) and Open Source [tooling available](https://github.com/google/bundletool), other app stores can implement this delivery mechanism too. In order for the Google Play Store to generate and sign the apks the app also has to be enrolled to [App Signing by Google Play](https://developer.android.com/studio/publish/app-signing). +在 Google Play 上传应用束后,可生成配置应用。随着[应用束](http://g.co/androidappbundle)成为[开放规范](https://developer.android.com/guide/app-bundle#aab_format),其它应用商店也可实现该交付机制。为 Google Play 生成并签署应用,应用必须注册到[由 Google Play 签名的应用程序](https://developer.android.com/studio/publish/app-signing)。 -#### Benefits +#### 优势 -What did this change of packaging do for us? +这种封装改变给我们带来了什么? -**Plaid is now is now more than 60 % smaller on device, which equals about 4 MB of data.** +**Plaid 现在设备减少 60% 以上体积,等同大约 4MB 数据。** -This means that each user has some more space for other apps. -Also download time has improved due to decreased file size. +这意味每一位用户都能为其它应用预留更多空间。 +同时下载时间也因文件大小缩小而改善。 ![](https://i.loli.net/2018/12/17/5c179ef2e5c9c.png) -Not a single line of code had to be touched to achieve this drastic improvement. +无需修改任何一行代码即可实现这一大幅度改进。 -### Approaching modularization +### 实现模块化 -The overall approach we chose for modularizing is this: +我们为实现模块化所选的方法: -1. Move all code and resources into a core module. -2. Identify modularizable features. -3. Move related code and resources into feature modules. +1. 将所有代码和资源块移动到核心模块中。 +2. 识别可模块化功能。 +3. 
将相关代码和资源移动到功能模块中。 ![](https://cdn-images-1.medium.com/max/800/1*3OniQxsZEShiTnQLyuBwtQ.png) -green: dynamic features | dark grey: application module | light grey: libraries +绿色:动态功能 | 深灰色:应用模块 | 浅灰色:库 -The above graph shows the current state of Plaid’s modularization: +上面图表向我们展示了 Plaid 模块化现状: -* `:bypass` and external `shared dependencies` are included in core -* `:app` depends on `:core` -* dynamic feature modules depend on `:app` +* `旁路模块` 和外部 `分享依赖` 包含在核心模块当中 +* `应用` 依赖于 `核心模块` +* 动态功能模块依赖于 `应用` -#### Application module +#### 应用模块 -The `:app` module basically is the already existing `[com.android.application](https://developer.android.com/studio/build/)`, which is needed to create our app bundle and keep shipping Plaid to our users. Most code used to run Plaid doesn’t have to be in this module and can be moved elsewhere. +`应用` 模块基本上是现存的[应用](https://developer.android.com/studio/build/),被用来创建应用束且向我们展示 Plaid。许多用来运行 Plaid 的代码没必要必须包含在该模块中,而是可移至其它任何地方。 -#### Plaid’s `core module` +#### Plaid 的 `核心模块` -To get started with our refactoring, we moved all code and resources into a `[com.android.library](https://developer.android.com/studio/projects/android-library)` module. After further refactoring, our `:core` module only contains code and resources which are shared between feature modules. This allows for a much cleaner separation of dependencies. +为开始重构,我们将所有代码和资源都移动至一个 [com.android.library](https://developer.android.com/studio/projects/android-library) 模块。进一步重构后,我们的`核心模块`仅包含各个功能模块间共享所需要代码和资源。这将使得更加清晰地分离依赖项。 -#### External dependencies +#### 外部库 -A forked third party dependency is included in core via the `:bypass` module. Additionally, all other gradle dependencies were moved from `:app` to `:core`, using gradle’s `api` dependency keyword. +通过`旁路模块`将一个第三方依赖库包含在核心模块中。此外通过 gradle `api` 依赖关键字,将所有其它 gradle 依赖从 `应用` 移动至 `核心模块`。 -_Gradle dependency declaration: api vs implementation_ +Gradle 依赖声明:api vs implementation_ -By utilizing `api` instead of `implementation` dependencies can be shared transitively throughout the app. This decreases file size of each feature module, since the dependency only has to be included in a single module, in our case `:core`. Also it makes our dependencies more maintainable, since they are declared in a single file instead of spreading them across multiple `build.gradle` files. +通过 `api` 代替 `implementation` 可在整个程序中共享依赖项。这将减少每一个功能模块体积大小,因本例 `核心模块` 中依赖项仅需包含在单一模块中。此外还使我们的依赖关系更加易于维护,因为它们被声明在一个单一文件而非在多个 `build.gradle` 文件间传播。 -#### Dynamic feature modules +#### 动态功能模块 -Above I mentioned the features we identified that can be refactored into `[com.android.dynamic-feature](https://developer.android.com/studio/projects/dynamic-delivery)` modules. These are: +上面我提到了我们识别的可被重构为 [com.android.dynamic-feature](https://developer.android.com/studio/projects/dynamic-delivery) 的模块。它们是: ``` :about @@ -123,85 +122,85 @@ Above I mentioned the features we identified that can be refactored into `[com.a :search ``` -#### _Introducing com.android.dynamic-feature_ +#### 动态功能介绍 -A dynamic feature module is essentially a gradle module which can be downloaded independently from the base application module. It can hold code and resources and include dependencies, just like any other gradle module. While we’re not yet making use of dynamic delivery in Plaid we hope to in the future to further shrink the initial download size. 
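作为补充,下面是一份动态功能模块构建配置的极简草图(属于假设的示例,并非 Plaid 仓库中的原始 build.gradle,省略了版本号等细节),用来说明上文的依赖方向:基础 `:app` 模块声明它拥有哪些动态功能模块,而每个动态功能模块反过来依赖 `:app`,并由此传递获得 `:core` 中以 `api` 声明的共享依赖。

```
// 基础 :app 模块的 build.gradle:列出所有动态功能模块
android {
    dynamicFeatures = [':about', ':designernews', ':dribbble', ':search']
}

// 某个动态功能模块(例如 :about)自己的 build.gradle
apply plugin: 'com.android.dynamic-feature'

dependencies {
    // 动态功能模块依赖基础 :app 模块,
    // 并通过它传递使用 :core 中以 api 暴露的共享依赖
    implementation project(':app')
}
```

至于该模块是随安装一起交付还是按需下载,则由其清单文件中的 `dist:` 标签声明,正文后面介绍清单文件时也会提到这一点。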
+一个动态功能模块本质上是一个 gradle 模块,可从基础应用模块被独立下载。它包含代码、资源、依赖,就如同其它 gradle 模块一样。虽然我们还没在 Plaid 中使用动态交付,但我们希望将来可减少最初下载体积。 -### The great feature shuffle +### 伟大的功能改革 -After moving everything to `:core`, we flagged the “about” screen to be the feature with the least inter-dependencies, so we refactored it into a new `:about` module. This includes Activities, Views, code which is only used by this one feature. Also resources such as drawables, strings and transitions were moved to the new module. +将所有东西都移动至核心模块后,我们将“关于”页面标记为具有最少依赖项的功能,故我们将其重构为一个新的 `关于` 模块。这包括 Activties、Views、代码仅用于该功能的内容。同样,我们把所有资源例如 drawables、strings 和动画移动至一个新模块。 -We repeated these steps for each feature module, sometimes requiring dependencies to be broken up. +我们对每个功能模块进行重复操作,有时需要分解依赖项。 -In the end, `:core` contained mostly shared code and the home feed functionality. Since the home feed is only displayed within the application module, we moved related code and resources back to `:app`. +最后,核心模块包含大部分共享代码和主要功能。由于主要功能仅显示于应用模块中,我们把相关代码和资源移回 `应用`。 -#### A closer look at the feature structure +#### 功能结构剖析 -Compiled code can be structured in packages. Moving code into feature aligned packages is highly recommended before breaking it up into different compilation units. Luckily we didn’t have to restructure since Plaid already was well feature aligned. +编译后代码可在包中进行结构优化。强烈建议在将代码分解成不同编译单元前,将代码移动至与功能对应包中。幸运的是我们不用必须重构,因为 Plaid 已很好地对应了功能。 ![](https://cdn-images-1.medium.com/max/800/1*kE8K32z6aVssAmdboGuloA.png) -feature and core modules with their respective architectural layers +功能和核心模块以及各自体系结构层级 -As I mentioned, much of the functionality of Plaid is provided through news sources. Each of these consists of remote and local **data** source, **domain** and **UI** layers. +正如我提到的,Plaid 许多功能都通过新闻源提供。它们由远程和本地 **data** 资源、**domain**、**UI** 这些层级组成。 -Data sources are displayed in both the home feed and, in detail screens, within the feature module itself. The domain layer was unified in a single package. This had to be broken in two pieces: a part which can be shared throughout the app and another one that is only used within a feature. +数据源不但显示在主要功能提示中,也显示在与对应功能模块本身相关详情页中。域名层级在一个单一包中唯一。它必须分为两部分:一部分在应用中共享,另一部分仅用在一个功能模块中。 -Reusable parts were kept inside of the `:core` library, everything else went to their respective feature modules. The data layer and most of the domain layer is shared with at least one other module and were kept in core as well. +可复用部分被保存在核心模块,其它所有内容都在各自功能模块。数据层和大部分域名层至少与其它一个模块共享,并且同时也保存在核心模块。 -#### Package changes +#### 包变化 -We also made changes to package names to reflect the new module structure. -Code only relevant only to the `:dribbble` feature was moved from `io.plaidapp` to `io.plaidapp.dribbble`. The same was applied for each feature within their respective new module names. +我们还对包名进行了优化,从而反映新的模块化结构体系。 +仅与 `:dribbble` 相关代码从 `io.plaidapp` 移动至 `io.plaidapp.dribbble`。通过各自新的模块名称,这同样运用于每一个功能。 -This means that many imports had to be changed. +这意味着许多导包必须改变。 -Modularizing resources caused some issues as we had to use the fully qualified name to disambiguate the generated `R` class. 
For example, importing a feature local layout’s views results in a call to `R.id.library_image` while using a drawable from `:core` in the same file resulted in calls to +对资源进行模块化会产生一些问题,因为我们必须使用限定名称消除生成的 `R` 类歧义。例如,导入本地布局视图会导致调用 `R.id.library_image`,而在核心模块相同文件中使用一个 drawable 会导致 ``` io.plaidapp.core.R.drawable.avatar_placeholder ``` -We mitigated this using Kotlin’s import aliasing feature allowing us to import core’s `R` file like this: +我们使用 Kotlin 导入别名特性减轻了这一点,它允许我们如下导入核心 `R` 文件: ``` import io.plaidapp.core.R as coreR ``` -That allowed to shorten the call site to +允许将呼叫站点缩短为 ``` coreR.drawable.avatar_placeholder ``` -This makes reading the code much more concise and resilient than having to go through the full package name every time. +相较于每次都必须查看完整包名,这使得阅读代码变得简洁和灵活得多。 -#### Preparing the resource move +#### 资源移动准备 -Resources, unlike code, don’t have a package structure. This makes it trickier to align them by feature. But by following some conventions in your code, this is not impossible either. +资源不同于代码,没有一个包结构。这使得通过功能划分它们变得异常困难。但是通过在你的代码中遵循一些约定,也未尝不可能。 -Within Plaid, files are prefixed to reflect where they are being used. For example, resources which are only used in `:dribbble` are prefixed with `dribbble_`. +通过 Plaid,文件在被用到的地方作为前缀。例如,资源仅用于以 `dribbble_` 为前缀的 `:dribbble`。 -Further, files that contain resources for multiple modules, such as styles.xml are structurally grouped by module and each of the attributes prefixed as well. +将来,一些包含多个模块资源的文件,例如 styles.xml 将在模块基础上进行结构化分组,并且每一个属性同时也作为前缀。 -To give an example: Within a monolithic app, `strings.xml` holds most strings used throughout. -In a modularized app, each feature module holds on to its own strings. -It’s easier to break up the file when the strings are grouped by feature before modularizing. +举个例子:在单块应用中,`strings.xml` 包含了整体所用大部分字符串。 +在一个模块化应用内中,每一个功能模块仅包含对应模块本身字符串资源。 +字符串在模块化前进行分组将更容易拆分文件。 -Adhering to a convention like this makes moving the resources to the right place faster and easier. It also helps to avoid compile errors and runtime crashes. +像这样遵循约定,可以更快地、更容易地将资源转移至正确地方。这同样也有助于避免编译错误和运行时序错误。 -### Challenges along the way +### 过程挑战 -To make a major refactoring task like this more manageable it’s important to have good communication within the team. Communicating planned changes and making them step by step helped us to keep merge conflicts and blocking changes to a minimum. +同团队良好沟通,对使得一个重要的重构任务像这样易于管理而言,十分重要。传递计划变更并逐步实现这些变更将帮助我们合并冲突,并且将阻塞降到最低。 -#### Good intentions +#### 善意提醒 -The dependency graph from earlier in this post shows, that dynamic feature modules know about the app module. The app module on the other hand can’t easily access code from dynamic feature modules. But they contain code which has to be executed at some point. +本文前面依赖关系图表显示,动态功能模块了解应用模块。另一方面,应用模块不能轻易地从动态功能模块访问代码。但他们包含必须在某一时间执行的代码。 -Without the app knowing enough about feature modules to access their code, there is no way to launch activities via their class name in the `Intent(ACTION_VIEW, ActivityName::class.java)` way. -There are multiple other ways to launch activities though. We decided to explicitly specify the component name. +应用对功能模块没足够了解时访问代码,这将没办法在 `Intent(ACTION_VIEW, ActivityName::class.java)` 方法中通过它们的类名启动活动。 +有多种方式启动活动。我们决定显示地指定组件名。 -To do this we created an `AddressableActivity` interface within core. 
+为实现它,我们在核心模块开发了 `AddressableActivity` 接口。 ``` /** @@ -215,7 +214,7 @@ interface AddressableActivity { } ``` -Using this approach, we created a function that unifies activity launch intent creation: +使用这种方式,我们创建了一个函数来统一活动启动意图创建: ``` /** @@ -228,27 +227,27 @@ fun intentTo(addressableActivity: AddressableActivity): Intent { } ``` -In its simplest implementation an `AddressableActivity` only needs an explicit class name as a String. Throughout Plaid, each `Activity` is launched through this mechanism. Some contain intent extras which also have to be passed through to the activity from various components of the app. +最简单实现 `AddressableActivity` 方式为仅需一个显示类名作为一个字符串。通过 Plaid,每一个 `活动` 都通过该机制启动。对一些包含意图附加部分,必须通过应用各个组件传递到活动中。 -You can see how we did this in the whole file here: +如下文件查看我们的实现过程: - [**AddressableActivity.kt**: Helpers to start activities in a modularized world._github.com](https://github.com/nickbutcher/plaid/blob/master/core/src/main/java/io/plaidapp/core/util/ActivityHelper.kt "https://github.com/nickbutcher/plaid/blob/master/core/src/main/java/io/plaidapp/core/util/ActivityHelper.kt") -#### Styling issues +#### Styleing 问题 -Instead of a single `AndroidManifest` for the whole app, there are now separate `AndroidManifests` for each of the dynamic feature modules. -These manifests mainly contain information relevant to their component instantiation and some information concerning their delivery type, reflected by the `dist:` tag. -This means activities and services have to be declared inside the feature module that also holds the relevant code for this component. +相对于整个应用单一清单文件而言,现在对每一个动态功能模块,对清单文件进行了分离。 +这些清单文件主要包含与它们组件实例化相关的一些信息,以及通过 `dist:` 标签反应的一些与它们交付类型相关的一些信息。 +这意味着活动和服务都必须声明在包含有与组件对应的相关代码的功能模块中。 -We encountered an issue with modularizing our styles; we extracted styles only used by one feature out into their relevant module, but often they built upon `:core` styles using implicit inheritance. +我们遇到了一个将样式模块化的问题;我们仅将一个功能使用的样式提取到与该功能相关的模块中,但是它们经常是通过隐式构建在核心模块之上。 ![](https://cdn-images-1.medium.com/max/800/1*YJRNNNgg5JbRoe20l14Ffw.png) -Parts of Plaid’s style hierarchy +PLaid 样式结构部分 -These styles are used to provide corresponding activities with themes through the module’s `AndroidManifest`. +这些样式通过模块清单文件以主题形式被提供给组件活动使用。 -Once we finished moving them, we encountered compile time issues like this: +一旦我们将它们移动完毕,我们会遇到像这样编译时问题: ``` * What went wrong: @@ -260,9 +259,9 @@ error: resource style/Plaid.Translucent.About (aka io.plaidapp:style/Plaid.Trans error: failed processing manifest. ``` -The manifest merger tries to merge manifests from all the feature modules into the app’s module. That fails due to the feature module’s `styles.xml` files not being available to the app module at this point. +清单文件合并视图将所有功能模块中清单文件合并到应用模块。合并失败将导致功能模块样式文件在指定时间对应用模块不可用。 -We worked around this by creating an empty declaration for each style within `:core`’s `styles.xml` like this: +为此,我们在核心模块样式文件中为每一样式如下创建一份空声明: ``` ``` -Now the manifest merger picks up the styles during merging, even though the actual implementation of the style is being introduced through the feature module’s styles. +现在清单文件合并在合并过程中抓取样式,尽管样式的实际实现是通过功能模块样式引入。 -Another way to avoid this is to keep style declarations in the core module. But this only works if all resources referenced are in the core module as well. That’s why we decided to go with the above approach. 
+另一种避免如上问题做法是保持样式文件声明在核心模块。但这仅作用于所有资源引用同时也在核心模块中情况。这就是我们为何决定通过上述方式的原因。 -#### Instrumentation test of dynamic features +#### 动态功仪器测试 -Along the modularization we found that instrumentation tests currently can’t reside within the dynamic feature module but have to be included within the application module. We’ll expand on this in an upcoming blog post on our testing efforts. +通过模块化,我们发现测试工具目前不能驻留在动态功能模块中,而是必须包含在应用模块中。对此我们将在即将发布的有关测试工作博客文章中进行详细介绍。 -### What is yet to come? +### 接下来还会发生什么? -#### Dynamic code loading +#### 动态代码加载 -We make use of dynamic delivery through app bundles, but don’t yet download these after initial installation through the [Play Core Library](https://developer.android.com/guide/app-bundle/playcore). This would for example allow us to mark news sources that are not enabled by default (Product Hunt) to only be installed once the user enables this source. +我们通过应用束使用动态交付,但初次安装后不要通过 [Play Core Library](https://developer.android.com/guide/app-bundle/playcore) 下载这些文件。例如这将允许我们将默认未启用的新闻源(产品搜索)标记为仅在用户允许该新闻源后安装。 -#### Adding further news sources +#### 进一步增加新闻源 -Throughout the modularization process, we kept in mind the possibility of adding further news sources. The work to cleanly separate modules and the possibility of delivering them on demand makes this more compelling. +通过模块化过程,我们保持考虑进一步增加新闻源可能性。分离清洁模块工作以及实现按需交付可能性使得这一点更加重要。 -#### Finish modularization +#### 模块精细化 -We made a lot of progress to modularize Plaid. But there’s still work to do. Product Hunt is a news source which we haven’t put into a dynamic feature module at this point. Also some of the functionality of already extracted feature modules can be evicted from core and integrated into the respective features directly. +我们在模块化 Plaid 方面取得很大进展。但仍有工作要做。产品搜索是一个新的新闻源,现在我们并未放到动态功能模块当中。同时一些已提取的功能模块中的功能可从核心模块中移除,然后直接集成到各自功能中。 -### So, why did we decide to modularize Plaid? +### 为何我决定模块化 Plaid? -Going through this process, Plaid is now a heavily modularized app. All without making changes to the user experience. We did reap several benefits in our day to day development from this effort: +通过该过程,Plaid 现在是一个高度模块化应用。所有这些都不会改变用户体验。我们在日常开发中确实从这些努力中获得了一些益处。 -#### Install size +#### 安装体积 -Plaid is now on average more than 60 % smaller on a user’s device. -This makes installation faster and saves on precious network allowance. +PLaid 现在用户设备平均减少 60% 体积。 +这使得安装更快,并且节省宝贵网络开销。 -#### Compile time +#### 编译时间 -A clean debug build without caches now takes **32 instead of 48 seconds**.* -All the while increasing from ~50 to over 250 tasks. +一个没有缓存的调试构建现在需 **32 秒而不是 48 秒**。 +同时任务从 50 项增长到 250 项。 -This time saving is mainly due to increased parallel builds and compilation avoidance thanks to modularization. +这样的时间节省,主要是由于增加并行构建以及由于模块化而避免编译。 -Further, changes in single modules don’t require recompilation of every single module and make consecutive compilation a lot faster. +将来,单个模块变化不需对所有单个模块进行编译,并且使得连续编译速度更快。 -*For reference, these are the commits I built for [before](https://github.com/nickbutcher/plaid/commit/9ae92ab39f631a75023b38c77a5cdcaa4b2489c5) and [after](https://github.com/nickbutcher/plaid/tree/f7ab6499c0ae35ae063d7fbb155027443d458b3a) timing. +* 作为引用,这些是我构建 [before](https://github.com/nickbutcher/plaid/commit/9ae92ab39f631a75023b38c77a5cdcaa4b2489c5) 和 [after](https://github.com/nickbutcher/plaid/tree/f7ab6499c0ae35ae063d7fbb155027443d458b3a) timing 的一些提交。 -#### Maintainability +#### 可维护性 -We have detangled all sorts of dependencies throughout the process, which makes the code a lot cleaner. Also, side effects have become rarer. 
Each of our feature modules can be worked on separately with few interactions between them. The main benefit here is that we have to resolve a lot less merge conflicts. +我们在过程中分离可各种依赖项,这使得代码更加简洁。同时,副作用越来越小。我们的每个功能模块都可在越来越少交互下独立工作。但主要益处是我们必须解决的冲突合并越来越少。 -### In conclusion +### 结语 -We’ve made the app **more than 60% smaller**, improved on code structure and modularized Plaid into dynamic feature modules, which add potential for on demand delivery. +我们使得应用体积减少**超过 60%**,完善了代码结构并且将 PLaid 模块化成动态功能模块以及增加了按需交付潜力。 -Throughout the process we always maintained the app in a state that could be shipped to our users. You can switch your app to emit an Android App Bundle today and save install size straight away. Modularization can take some time but is a worthwhile effort (see above benefits), especially with dynamic delivery in mind. +整个过程,我们总是将应用保持在一个可随时发送给用户状态。您今天可直接切换你的应用发出一个应用束以节省安装体积。模块化需要一些时间,但鉴于上文所见好处,这是值得付出努力的,特别是考虑到动态交付。 -**Go check out** [**Plaid’s source code**](https://github.com/nickbutcher/plaid) **to see the full extent of our changes and happy modularizing!** +**去查看 [Plaid’s source code](https://github.com/nickbutcher/plaid) 了解我们所有的变化和快乐模块化过程!** > 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 From cdb87a344a5f2d39cae445a8692cab750ad6210c Mon Sep 17 00:00:00 2001 From: LeviDing Date: Fri, 4 Jan 2019 21:56:16 +0800 Subject: [PATCH 22/54] Create blockchain-platforms-tech-to-watch-in-2019.md --- ...ckchain-platforms-tech-to-watch-in-2019.md | 247 ++++++++++++++++++ 1 file changed, 247 insertions(+) create mode 100644 TODO1/blockchain-platforms-tech-to-watch-in-2019.md diff --git a/TODO1/blockchain-platforms-tech-to-watch-in-2019.md b/TODO1/blockchain-platforms-tech-to-watch-in-2019.md new file mode 100644 index 00000000000..0f3c0d9c3b8 --- /dev/null +++ b/TODO1/blockchain-platforms-tech-to-watch-in-2019.md @@ -0,0 +1,247 @@ +> * 原文地址:[Blockchain Platforms & Tech to Watch in 2019](https://medium.com/the-challenge/blockchain-platforms-tech-to-watch-in-2019-f2bfefc5c23) +> * 原文作者:[Eric Elliott](https://medium.com/@_ericelliott?source=post_header_lockup) +> * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) +> * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/blockchain-platforms-tech-to-watch-in-2019.md](https://github.com/xitu/gold-miner/blob/master/TODO1/blockchain-platforms-tech-to-watch-in-2019.md) +> * 译者: +> * 校对者: + +> + +# Blockchain Platforms & Tech to Watch in 2019 + +![](https://cdn-images-1.medium.com/max/2000/1*k2FyIN5xEkNzAGyflWPhUg.jpeg) + +no.thisispatrick — “Electric Water” (CC BY-NC-ND 2.0) + +**Ethereum** has been the dominant smart contract platform since 2015, but the race to build the Googles, Amazons, and Apples of smart contract platforms really started heating up in 2018 — and the stakes are high. The platforms that dominate the burgeoning internet of value could easily command trillion dollar market caps. + +> **TL;DR:** See the full list of crypto tech to pay attention to in 2019 at the bottom. + +At the end of 2018, developers are tired of waiting for scaling on the EVM to become a thing. Emerging technologies like [**Raiden Network’s**](https://raiden.network/) arrival on Ethereum mainnet bring long-anticipated hope to Ethereum developers, but it may be too little too late. This year, alternative blockchains with faster layer 1 consensus baked in are starting to attract developer attention. Where developers go, apps and users follow. 
+ +![](https://i.loli.net/2019/01/02/5c2c256055b28.png) + +It’ll be hard to catch up, though. Ethereum has thousands of developer courses, tutorials, articles, and Stack Overflow answers, and that’s quite a head start. They also have the largest, most active community working on improvement proposals and core protocol development. + +Developers may be frustrated with slow transactions and terrible user interfaces, but Ethereum still owns the developer mindshare by a wide margin. Over 3,000 ICOs have launched on Ethereum, and its closest competitors are still in the low hundreds. Ethereum has suffered some major blows in 2018, but a big rally this week serves as an answer to challengers: _Don’t count Ethereum out just yet._ + +![](https://cdn-images-1.medium.com/max/800/1*BDxlV5cQEhxEcWBAnKSWyA.png) + +Ethereum bounces: This ain’t over yet! + +![](https://cdn-images-1.medium.com/max/800/1*aO99I_W_x4ckn_kgFlYYlg.png) + +Top 24 hour gainers, December 28, 2018 + +### Dominant Themes + +The dominant crypto theme in 2017 was the ICO big bang: The dawn of Initial Coin Offerings (ICOs). That explosion continued to expand through the first half of 2018, before regulatory concerns cast a chilling effect over the crypto industry. + +![](https://cdn-images-1.medium.com/max/800/1*FUZjNmtKuVNSAK-DnoGtoQ.png) + +Monthly ICO funding: 2014–2018 source: CoinDesk + +There were two dominant themes in 2018: + +**The BUIDL runway** — can we get our crypto projects to market before the money runs out? Sub theme: **Waste in the crypto industry.** Many companies spent outrageous money flying all over the world for conferences before they built a viable product. Spending money on marketing before you’ve built an MVP is the opposite of LEAN startup philosophy that has dominated wise tech leadership since the 2001 dot com bubble burst. + +**The crypto “winter”** — At the end of 2017, the crypto market crossed another 10x growth marker. Every time that happens, the market pulls back before climbing to another 10x multiple higher than the last. 2018 was the first year following a 10x peak, so naturally, we took another tumble. Unfortunately, many crypto projects kept the money they raised in the market during an 80%–90% slump in prices, and now the money is running out. This has led to a lot of layoffs. (See also: [“BUIDL Christmas: The Story of the Blockchain Christmas Layoffs”](https://medium.com/the-challenge/buidl-christmas-58a0c9d7377b)) + +What does that mean? The Crypto market has predictable ups and downs. Based on past performance, we know that after we hit the next 10x marker, the prices are very likely to tank 80% — 90% in the following months. What this means for treasuries is that they should plan out the project runway — traditionally at least 18 months of operating expenses, and stash that money in fiat currency to protect it from the market down cycle. That way, they can keep operating no matter what the crypto market does. If there are extra funds after putting that runway into safekeeping, sure, keep that money in the market and hope for long-term gains while you BUIDL. + +Many projects failed to do that. Those companies are forced to cut staff, and IMO, they should start with the treasurer. + +![](https://cdn-images-1.medium.com/max/800/1*2nlit12SUIYN93RdmBNoHQ.png) + +Bitcoin price (log): Each new red arrow is 10x higher than the last + +Savvy crypto investors are aware of the market cycles, and plan strategies for long term investments that they expect to stay in for 7–10+ years. 
To those investors, prospects for crypto investments are starting to look good again. + +**_A note about the “the crypto winter”:_** _The crypto market has never seen a winter like the one that the AI industry experienced between 1987 and 2009, which likely inspired the “crypto winter” name. During the very real AI winter, researchers used euphemisms like “machine learning”, and “analytics” to secure funding to avoid the stigma of “AI”, which many had begun to see as utopian sci-fi that would never be real. Today, advancements in AI have led to some of our most exciting technologies, including self-driving cars, self-flying drones, and major breakthroughs in robotics._ + +### What will be the themes in 2019? + +If 2017 was about ICOs, and 2018 was about survival, what will be the primary crypto themes of 2019? + +#### User Traction + +dApps had a tiny audience in 2018, but 2019 may be the year that we see the first multi-million user dApps, and non-crypto geeks will finally begin transacting in cryptocurrencies. + +According to [DappRadar](https://dappradar.com/), the most popular Ethereum dApps in 2018 currently have **less than 1,000 daily active users**. But already, a new breed of crypto apps is emerging. + +The crypto-enabled [**Brave Browser**](https://brave.com/) (led by Brendan Eich, cofounder of Mozilla, and the creator of JavaScript, the standard programming language of the web platform) has had more than 10 million installs in the Google Play store. Brave makes it easy for users to earn and spend the Basic Attention Token ([**BAT**](https://basicattentiontoken.org/)) cryptocurrency. You can earn crypto by browsing your favorite sites. If you opt in, Brave will replace the potentially dangerous tracking ads served by the ad networks with ads that won’t track your behaviors. In exchange, you’ll earn BAT automatically, just for doing what you always did. + +![](https://cdn-images-1.medium.com/max/800/1*kbd-a9fDJdFenZ8couEOFw.png) + +Screenshot: Brave browser integrated BAT wallet + +[**Sliver.tv**](https://www.sliver.tv/) is a video game streaming site which lets video game players stream their gaming sessions live for other game lovers to watch. It recently integrated the [**Theta**](https://www.thetatoken.org/) cryptocurrency, which allows viewers to earn cryptocurrency by watching streams and sharing their network bandwidth with other viewers. + +![](https://cdn-images-1.medium.com/max/800/1*81S6bI6fP7ca59GzR_qyMw.png) + +Screenshot Left: Tencent Games’ Ring of Elysium live stream on Sliver. Right: Sliver.tv’s integrated Theta wallet. + +They can also win Theta, donate it to streamers, and use it to purchase virtual and physical goods in the Sliver shop. With [more than 20k monthly active users](https://www.alexa.com/siteinfo/sliver.tv), Sliver.tv may be the most popular crypto-enabled app to date for use by a general audience (i.e., not an investment/exchange/wallet app). + +Sliver.tv is a very promising start, but it uses a centralized, custodial wallet and users can’t withdraw funds. + +[**Cent.co**](https://beta.cent.co/) is a look at the future of content-based social networks. Imagine the best of Twitter and Medium: Long form content presented in bite sized content streams that you can expand for the big picture. You can tip users who create the content, and you get rewarded when other people tip, too. Tipping is called “seeding”. 
When you seed content, a portion of that money goes to the original content creator, and a portion goes to everybody who seeded the content before you did. It creates a financial incentive to post high quality content, and to seed content that you think will become popular on the platform. + +![](https://cdn-images-1.medium.com/max/800/1*OuairG9NVQBNbsuhZ5gXaQ.png) + +Cent screenshot + +Cent started life as a way to offer bounties to get some work — any kind of work — done by the users of the Cent ecosystem. You can ask a question and offer a bounty for the answer. You could ask for logo design help, or ask for help editing your latest post. Anything that’s worth money to you. You control how much money you’re offering, and the number of recipients who will receive that money, so you’ll never accidentally blow your budget if your offer goes viral. The idea behind Cent was to create an economy that could allow its users to quit their day jobs and start earning money online using only their talents and the Cent platform. I’m not sure how much money people are making per hour on Cent, but what I am sure of is that it looks very promising. + +It’s also one of the most user-friendly dApps I’ve seen to date, and so far, I’m not seeing any signs that it’s being bogged down by Ethereum scaling issues. To use Cent, you’ll need a Web3 browser like [**Trust**](https://trustwallet.com/) or [**Coinbase Wallet**](https://wallet.coinbase.com/). + +I’m still anxious to see a dApp with a user-controlled wallet reach more than 10 million users. Will it happen in 2019? + +### Ethereum Challengers + +Ethereum challengers are rolling into production and community building phases in 2019. Ethereum has a huge head start, but 2019 may be the year that the competitive pressure really begins to squeeze. Ethereum challengers come primarily in two shapes: **ICO platforms** and **dApp platforms**. + +Potentially, many challengers will fill both roles, but it may help to look at them independently, anyway. + +**ICO platforms** — Almost since the day it launched, Ethereum has been the standard platform to build on if you want to launch an ICO. Smart contract applications have yet to gain any real user traction, but ICOs were a smash hit in 2017 and 2018. + +Ethereum is no longer the only choice for launching an ICO in 2019, and may not be the best choice. Competitors are starting to step up. In 2018, hundreds of cryptoassets launched on competitors. In particular, [**Waves**](https://wavesplatform.com/) recognized that launching cryptoassets was the killer app of Ethereum, and set out to make it easy. They did just that. You can issue a new token on Waves with absolutely no coding required. + +![](https://cdn-images-1.medium.com/max/800/1*_P3kFffm36qxoUWWRggCSQ.png) + +Screenshot: Waves token generation tool + +They also have a mass transfer feature that lets you easily distribute your tokens to lots of people — to conduct airdrops or distribute tokens from your ICO, for example. The hard part of conducting an ICO is exchange listing. The [**Waves wallet**](https://wavesplatform.com/product) includes an integrated Decentralized EXchange (DEX) so users can start trading your new token immediately. The Waves DEX functionality compares favorably to centralized exchanges, and easily beats the user experience in any of the competitive Ethereum-based DEXs. 
Unlike centralized exchanges, DEX funds are managed by user-controlled keys, so they don’t have to trust a centralized exchange with custody, or worry about what happens if an exchange gets hacked. The Android Waves wallet has been downloaded more than 100,000 times. + +Ethereum is still the most popular token launch platform by a huge margin, but Waves has managed to attract [hundreds of projects](https://icobench.com/icos?filterPlatform=Waves). [**Stellar**](https://www.stellar.org/) is another popular alternative ICO platform that’s [not far behind](https://icobench.com/icos?filterPlatform=Stellar). A few projects have launched on other alternative platforms including [NEO](https://icobench.com/icos?filterPlatform=NEO), [EOS](https://icobench.com/icos?filterPlatform=EOS), etc., but it looks like Waves and Stellar may pull away from the pack in 2019 for new token launches. + +There’s a good chance they’ll attract a lot more projects which would have otherwise launched on Ethereum in 2019. + +### dApps + +The promise of the crypto space is to build the internet of value, and you might say [decentralized applications](https://www.stateofthedapps.com/rankings) play a central role. But what exactly is a dApp? Why are they important, and which dApp platforms will reshape the game in 2019? + +**What is a dApp?** dApp is short for decentralized application, and it’s essentially the antithesis of what centralized applications are. A centralized application controls the user’s data. For example, your banking app helps you manage your bank account balance, but you technically don’t control that money — the bank does. + +If they want to [lend it to other people without asking you](https://en.wikipedia.org/wiki/Fractional-reserve_banking), they can (and do!) If they want to [freeze your accounts](https://www.sacbee.com/news/business/article217567300.html), they can. If they want to [delay your withdrawal](https://www.cleveland.com/business/index.ssf/2012/04/man_who_wants_to_withdraw_6000.html), they can. + +Facebook is another great example. If Facebook wants to [share your list of friends](https://www.fool.com/investing/2018/12/22/this-spotlight-is-plaguing-facebook-and-it-wont-se.aspx) with a 3rd party developer, they can do so without your permission. If they want to [share your private messages](https://www.newsweek.com/facebook-stock-price-fb-messenger-sharing-private-messages-netflix-spotify-1265319), they can. If they want to [shut down a feature and kill your app](https://medium.com/javascript-scene/a-new-hope-e2021fce7c7b), they can. + +Decentralized apps, on the other hand, don’t store all your user data in a centralized database. Instead, they rely on decentralized technology like blockchains and other DLTs (Distributed Ledger Technologies), [decentralized databases](https://github.com/orbitdb/orbit-db), and [decentralized file storage systems](https://ipfs.io/). dApps _can_ put you in control of your own identity, currency, and data. (They don’t all do that yet, but I suspect the ones that do will win the Web 3.0 disruption). + +dApps frequently need to transact value across the network. To do so, they usually rely on a blockchain, such as Bitcoin, Ethereum, Waves, etc. They typically need to interface with a wallet in order to authorize transactions. 
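As a minimal, purely illustrative sketch of that hand-off (not part of the original post): the dApp assembles an unsigned transaction and asks whatever wallet or node controls the keys to sign and broadcast it. The snippet below uses the Python `web3` library against a placeholder local JSON-RPC endpoint; in a browser dApp the same request would go through the wallet's injected Web3 provider instead.

```
# Illustrative sketch only: asking the wallet/node that manages the keys to
# authorize (sign and broadcast) a simple value transfer.
# The provider URL and recipient address are placeholders.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # wallet or node JSON-RPC endpoint

tx_hash = w3.eth.sendTransaction({
    "from": w3.eth.accounts[0],                            # an account the wallet controls
    "to": "0x000000000000000000000000000000000000dEaD",    # placeholder recipient
    "value": Web3.toWei("0.01", "ether"),
})

# The dApp only ever sees the resulting transaction hash and receipt;
# the private key never leaves the wallet.
receipt = w3.eth.waitForTransactionReceipt(tx_hash)
print(receipt["status"])
```

(Newer releases of `web3` rename these calls to `send_transaction`, `to_wei` and `wait_for_transaction_receipt`.)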
+ +My favorite current dApps have wallets built-in, and are either custodial (meaning they manage the hard stuff like the private keys for you, e.g., [Sliver.tv](https://www.sliver.tv/)), or integrate directly with wallets (e.g., [Brave](https://brave.com/)). + +#### dApp UX + +The dApp user experience is getting better. There are now two popular browsers with integrated dApp support , so there’s no need for confusing browser extensions: [**Trust**](https://play.google.com/store/apps/details?id=com.wallet.crypto.trustapp) (recently acquired by [Binance](https://www.binance.com/)) and [**Coinbase Wallet**](https://play.google.com/store/apps/details?id=org.toshi) (which was Toshi until [Coinbase](https://www.coinbase.com) acquired it shortly after the Trust acquisition). Both have much better UX than alternatives like [Metamask](https://metamask.io/), and provide integrations with the [**Web3 API**](https://github.com/ethereum/wiki/wiki/JavaScript-API), which helps dApps integrate with the Ethereum blockchain. + +My favorite dApps use blockchains for consensus, but they connect to fast databases and load quickly, as well. My favorite dApps also don’t require user approval for every little transaction that could possibly take place on the blockchain. The key to good dApp user experience is to be selective about what you hit the blockchain for. For example, it’s possible to have a virtual account backed by a database that only needs to sync to the blockchain periodically, for settlement or security, or both. + +In the beginning of 2018, the [**Lightning Network**](https://lightning.network/) launched as a 2nd layer protocol sitting on top of the Bitcoin blockchain. In December 2019, the [**Raiden Network**](https://raiden.network/) launched an alpha on the Ethereum blockchain. Both networks provide peer to peer off-chain payments using payment channels connected by [Hashed Timelock Contracts](https://en.bitcoin.it/wiki/Hashed_Timelock_Contracts) (HTLCs). What this means for end users is that it’s now possible to transact with your dApp almost instantaneously instead of waiting for blockchain confirmations which can take up to 10 minutes. + +#### Smart Contract Platforms + +[Solidity](https://en.wikipedia.org/wiki/Solidity) has ruled the smart contract programming language ecosystem since it became available. It’s ubiquitous for smart contract programming on the Ethereum Virtual Machine (EVM). But Solidity has some serious issues, including [arithmetic overflows and underflows](https://blog.sigmaprime.io/solidity-security.html), [type errors](https://blog.sigmaprime.io/solidity-security.html#short-vuln), and the [delegatecall vulnerability](https://blog.sigmaprime.io/solidity-security.html#dc-example) which [froze $300 million](https://medium.com/chain-cloud-company-blog/parity-multisig-hack-again-b46771eaa838). All of these vulnerabilities are examples of issues which exist at the programming language level. In other words, a better programming language could create more secure smart contracts. + +The challengers are coming. + +* [**Waves RIDE**](https://docs.wavesplatform.com/en/technical-details/ride-language.html): A Turing incomplete (no loops or recursion), Haskell-inspired functional programming language for the Waves blockchain features static types, lazy evaluation, pattern matching, and predicate expressions which determine whether or not a transaction is allowed to complete. A Turing complete version is also in the works. Waves’ smart contracts support is currently live on mainnet. 
We should see the first Waves dApps appear in 2019. + +* [**Plutus**](https://cardanodocs.com/technical/plutus/introduction/) ([**Cardano**](https://www.cardano.org/en/home/)) is another Haskell-inspired functional programming language, this time for the Cardano blockchain. Cardano is planning two big releases in 2019: Shelley, which provides full decentralization and staking, and Cardano-CL, the virtual machines that will support programmable smart contracts. + +* [**Scilla**](https://scilla-lang.org/) ([**Zilliqa**](https://zilliqa.com/)) is a formally verified smart contract language designed with separation of computation and effects in mind. This means that calculations and communication of state transitions are strictly isolated, which makes Scilla smart contracts easier to test and statically validate to minimize the chances that something will go wrong. Zilliqa’s mainnet is scheduled to launch at the end of January, 2019. + +* [**ewasm**](https://github.com/ewasm/design) (Ethereum) is not a smart contract language per say, but a compiler target which will allow Ethereum programmers to program in other languages (like Rust, C++, maybe one day smart-contract specific languages like [Simplicity](http://chrome-extension://oemmndcbldboiebfnladdacbdfmadadm/https://blockstream.com/simplicity.pdf)), and compile to Ethereum flavored WebAssembly. ewasm is a safer subset of WebAssembly, which is the relatively new low-level compile target for the web platform. Conveniently, wasm (and thus ewasm) modules are usable from any JavaScript project. For most blockchain code, typically more than 75% of the code isn’t in smart contracts at all — it’s in JavaScript which must communicate with the smart contracts. ewasm and JavaScript share a common foundation of bindings and module support. + +* **JavaScript** ([**Lisk**](https://lisk.io/)) Lisk is a blockchain development platform that allows developers to code in JavaScript and create custom blockchains for specific applications, avoiding Ethereum’s big scaling challenge. Lisk allows developers to create their own sidechains to manage all of a specific application’s blockchain operations, so it doesn’t have to compete with all the other applications for the compute resources of the main chain. Currently, Lisk is not working on a smart contract programming language or VM, and blockchain transaction capabilities are similar to Bitcoin’s. + +* [**Rust**](https://www.rust-lang.org/) (via ewasm, Cardano client) is a lower level language (like C) with some of the safety features of languages like Haskell. Rust features guaranteed constant references to avoid accidental mutations, static prevention of null pointer exceptions (options must be explicitly declared), stateful types which only provide access to operations meaningful to the current state, pattern matching is analyzed to guarantee function completeness (an unmatched pattern will result in a compile-time error), etc. Basically, it’s like C++ and Haskell had a baby that inherited none of the scary stuff. Rust can compile to ewasm, or be used to build client code for blockchains like Cardano. Modules for Lisk can be built in Rust and compiled to wasm to import in Lisk projects. + +### You Might Not Need Smart Contracts + +You might not need a smart contract programming language to produce a production dApp in 2019. + +Most dApp developers create nodes that ingest data from blockchains and pull it into a database that can be queried efficiently. 
That process is not a lot of fun, and adds a lot of maintenance burden to crypto apps. [**The Graph**](https://thegraph.com/) makes it easy to query blockchain data using [**GraphQL**](https://graphql.org/). Decentralized nodes aggregate blockchain data, supported by [**IPFS**](https://ipfs.io/). + +You can send compute jobs to [**iExec**](https://iex.ec/), and even handle intense graphic rendering with the [**Render Token**](https://www.rendertoken.com/)**.** With all these protocol tokens flying around, we might need to do some [**cross chain atomic swaps**](https://arxiv.org/abs/1801.09515) to exchange tokens across multiple blockchains. + +You can use [**verifiable claims**](https://w3c.github.io/vc-use-cases/), batched and anchored to your blockchain of choice (suggestion: Bitcoin) to record any kind of data, including ownership and transfer of assets like real estate, car titles, and NFTs. You can store those claims, supporting files, and various database records (see [**OrbitDB**](https://github.com/orbitdb/orbit-db)) on [**IPFS**](https://ipfs.io/) or [**Storj**](https://storj.io/). + +### The List + +OK, that was a lot. Let’s review the tech you should pay close attention to in 2019: + +#### Cryptocurrencies + +* [**BAT**](https://basicattentiontoken.org/) +* [**Theta**](https://www.thetatoken.org/) +* [**Waves**](https://wavesplatform.com/) +* [**Stellar Lumens**](https://www.stellar.org/) +* [**Zilliqa**](https://zilliqa.com/) + +#### Crypto Apps + +* [**Brave Browser**](https://brave.com/) +* [**Sliver.tv**](https://www.sliver.tv/) +* [**Cent**](https://beta.cent.co/) + +#### Wallets & dApp Browsers + +* [**Trust**](https://play.google.com/store/apps/details?id=com.wallet.crypto.trustapp) +* [**Coinbase Wallet**](https://play.google.com/store/apps/details?id=org.toshi) +* [**Waves Wallet**](https://wavesplatform.com/product) + +#### dApp Platforms + +* [**Ethereum**](https://www.ethereum.org/) +* [**Waves**](https://wavesplatform.com/) +* [**Stellar**](https://www.stellar.org/) +* [**Cardano**](https://www.cardano.org/en/home/) +* [**Zilliqa**](https://zilliqa.com/) +* [**Lisk**](https://lisk.io/) + +#### Smart Contract Languages + +* [**Waves RIDE**](https://docs.wavesplatform.com/en/technical-details/ride-language.html) +* [**Plutus**](https://cardanodocs.com/technical/plutus/introduction/) (Cardano) +* [**Scilla**](https://scilla-lang.org/) (Zilliqa) +* [**Ewasm**](https://github.com/ewasm/design) (Ethereum, others) +* [**Rust**](https://www.rust-lang.org/) (via ewasm, Cardano client) + +#### Decentralized Compute Services (AWS for dApps) + +* [**IPFS**](https://ipfs.io/) +* [**iExec**](https://iex.ec/) +* [**Storj**](https://storj.io/) +* [**OrbitDB**](https://github.com/orbitdb/orbit-db) +* [**The Graph**](https://thegraph.com/) +* [**Render Token**](https://www.rendertoken.com/) + +#### Related Technologies + +* [**Web3 API**](https://github.com/ethereum/wiki/wiki/JavaScript-API) +* [**Lightning Network**](https://lightning.network/) +* [**GraphQL**](https://graphql.org/) +* [**Cross Chain Atomic Swaps**](https://arxiv.org/abs/1801.09515) +* [**Verifiable Claims**](https://w3c.github.io/vc-use-cases/) + +* * * + +> We’re BUIDLing the future of celebrity digital collectables: [cryptobling](https://docs.google.com/forms/d/e/1FAIpQLScrRX9bHdIYbQFI5L3hEgwQaDEdjo8t8glqlyObZexWjssxNQ/viewform). 
+ +* * * + +**_Eric Elliott_** _is a distributed systems expert and author of the books,_ [_“Composing Software”_](https://leanpub.com/composingsoftware) _and_ [_“Programming JavaScript Applications”_](https://ericelliottjs.com/product/programming-javascript-applications-ebook/)_. As co-founder of_ [_DevAnywhere.io_](https://devanywhere.io/)_, he teaches developers the skills they need to work remotely and embrace work/life balance. He builds and advises development teams for crypto projects, and has contributed to software experiences for_ **_Adobe Systems,_** **_Zumba Fitness,_** **_The Wall Street Journal,_** **_ESPN,_** **_BBC,_** _and top recording artists including_ **_Usher, Frank Ocean, Metallica,_** _and many more._ + +_He enjoys a remote lifestyle with the most beautiful woman in the world._ + +Thanks to [JS_Cheerleader](https://medium.com/@JS_Cheerleader?source=post_page). + +> 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 + + +--- + +> [掘金翻译计划](https://github.com/xitu/gold-miner) 是一个翻译优质互联网技术文章的社区,文章来源为 [掘金](https://juejin.im) 上的英文分享文章。内容覆盖 [Android](https://github.com/xitu/gold-miner#android)、[iOS](https://github.com/xitu/gold-miner#ios)、[前端](https://github.com/xitu/gold-miner#前端)、[后端](https://github.com/xitu/gold-miner#后端)、[区块链](https://github.com/xitu/gold-miner#区块链)、[产品](https://github.com/xitu/gold-miner#产品)、[设计](https://github.com/xitu/gold-miner#设计)、[人工智能](https://github.com/xitu/gold-miner#人工智能)等领域,想要查看更多优质译文请持续关注 [掘金翻译计划](https://github.com/xitu/gold-miner)、[官方微博](http://weibo.com/juejinfanyi)、[知乎专栏](https://zhuanlan.zhihu.com/juejinfanyi)。 From bd1ea33ed589f313fccccfc71d97ae00741cf7b0 Mon Sep 17 00:00:00 2001 From: LeviDing Date: Fri, 4 Jan 2019 22:01:35 +0800 Subject: [PATCH 23/54] Create https-medium-com-netflixtechblog-engineering-to-improve-marketing-effectiveness-part-2.md --- ...-improve-marketing-effectiveness-part-2.md | 151 ++++++++++++++++++ 1 file changed, 151 insertions(+) create mode 100644 TODO1/https-medium-com-netflixtechblog-engineering-to-improve-marketing-effectiveness-part-2.md diff --git a/TODO1/https-medium-com-netflixtechblog-engineering-to-improve-marketing-effectiveness-part-2.md b/TODO1/https-medium-com-netflixtechblog-engineering-to-improve-marketing-effectiveness-part-2.md new file mode 100644 index 00000000000..e8920ebf9e7 --- /dev/null +++ b/TODO1/https-medium-com-netflixtechblog-engineering-to-improve-marketing-effectiveness-part-2.md @@ -0,0 +1,151 @@ +> * 原文地址:[Engineering to Improve Marketing Effectiveness (Part 2) — Scaling Ad Creation and Management](https://medium.com/netflix-techblog/https-medium-com-netflixtechblog-engineering-to-improve-marketing-effectiveness-part-2-7dd933974f5e) +> * 原文作者:[Netflix Technology Blog](https://medium.com/@NetflixTechBlog?source=post_header_lockup) +> * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) +> * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/https-medium-com-netflixtechblog-engineering-to-improve-marketing-effectiveness-part-2.md](https://github.com/xitu/gold-miner/blob/master/TODO1/https-medium-com-netflixtechblog-engineering-to-improve-marketing-effectiveness-part-2.md) +> * 译者: +> * 校对者: + +# Engineering to Improve Marketing Effectiveness (Part 2) — Scaling Ad Creation and Management + +by [Ravi Srinivas Ranganathan](https://www.linkedin.com/in/rravisrinivas), [Gopal Krishnan](https://www.linkedin.com/in/gopal-krishnan-9057a7/) + +> In the [first 
part](https://medium.com/netflix-techblog/engineering-to-improve-marketing-effectiveness-part-1-a6dd5d02bab7) of this series of blogs, we described our philosophy, motivations, and approach to blending ad technology into our marketing. In addition, we laid out some of the engineering undertakings to solve creative development and localization at scale. + +> In this Part-2, we describe the process of scaling advertising at Netflix through ad assembly and personalization on the various ad platforms that we advertise in. + +### The Problem Surface + +Our world-class marketing team has the unique task of showcasing our growing slate of Original Movies and TV Shows, and the unique stories behind every one of them. Their job is not just about promoting awareness of the content we produce, but an even harder one — of tailoring the right content, with the right message to qualified non-members (acquisition marketing) and members — collectively, billions of users who are reached by our online advertising. These ads will have to reach users on the internet on a variety of websites and publishers, on Facebook, Youtube and other ad platforms. + +Imagine if you had to launch the digital marketing campaign for the next big blockbuster movie or must-watch TV show. You will need to create ads for a variety of creative concepts, A/B tests, ad formats and localizations, then QC (quality control) all of them for technical and content errors. Having taken those variations into consideration, you’ll need to traffic them to the respective platforms that those ads are going to be delivered from. Now, imagine launching multiples titles daily while still ensuring that every single one of these ads reaches the exact person that they are meant to speak to. Finally, you need to continue to manage your portfolio of ads after the campaign launches in order to ensure that they are kept up to date (for eg. music licensing rights and expirations) and continue to support phases that roll in post-launch. + +There are three broad areas that the problem can be broken down into : + +* **Ad Assembly**: A scalable way of producing ads and building workflow automation +* **Creative QC**: Set of tools and services that make it possible to easily QC thousands of ad units for functional and semantic correctness +* **Ad Catalog Management**: Capabilities that make it possible for managing scale campaigns easily — ML based automation + +### What is Ad Assembly? + +Overall, if you looked at the problem from a purely analytical perspective, we need to find a way to efficiently automate and manage the scale resulting from textbook combinatorial explosion. + +**Total Ad Cardinality ≈** + +_Titles in Catalog_ **x** _Ad Platforms_ **x** _Concepts_ **x** _Formats_ **x** _A/B Tests_ **x** _Localizations_ + +Our approach of handling the combinatorics to catch it at the head and to create marketing platforms where our ad operations, the primary users of our product, can concisely express the gamut of variations with the least amount of redundant information. + +![](https://cdn-images-1.medium.com/max/800/1*TWbovfnsSqMJG66KYDQp6w.gif) + +**CREATIVE VARIATIONS IN VIDEO BASED SOCIAL ADS** + +Consider the ads below, which differ along a number of different dimensions that are highlighted. + +![](https://cdn-images-1.medium.com/max/800/0*NQ9dYbl6USSMRXhc) + +**CREATIVE VARIATIONS IN DISPLAY ADS** + +If you were to simply vary just the unique localizations for this ad for all the markets that we advertise in, that would result in ~30 variations. 
In a world with static ad creation, that means that 30 unique ad files will be produced by marketing and then trafficked. In addition to the higher effort, any change that needs to address all the units would then have to be introduced into each of them separately and then QC-ed all over again. Even a minor modification in just a single creative expression, such as an asset change, would involve making modifications within the ad unit. Each variation would then need to go through the rest of the flow involving, QC and a creative update / re-trafficking. + +Our solve for the above was to build a dynamic ad creation and configuration platform — our ad production partners build a single **_dynamic_** unit and then the associated data configuration is used to modify the behavior of the ad units contextually. Secondly, by providing tools where marketers have to express just the variations and automatically inherit what doesn’t change, we significantly reduce the surface area of data that needs to be defined and managed. + +If you look at the localized versions below, they reused the same fundamental building blocks but got expressed as different creatives based on nothing but configuration. + +![](https://cdn-images-1.medium.com/max/600/0*DqNQBG1sW7cEvPYf) + +**EASY CONFIGURATION OF LOCALIZATIONS** + +This makes it possible to go from 1 => 30 localizations in a matter of minutes instead of hours or even days for every single ad unit! + +We are also able to make the process more seamless by building integrations with a number of useful services to speed up the ad assembly process. For example, we have integrated features like support for maturity ratings, transcoding and compressing video assets or pulling in artwork from our product catalog. Taken together, these conveniences dramatically decrease the level of time effort needed to run campaigns with extremely large footprints. + +### Creative QC + +One major aspect of quality control to ensure that the ad is going to render correctly and free from any technical or visual errors — we call this “functional QC”. Given the breadth of differences amongst various ad types and the kinds of possible issues, here are some of the top-line approaches that we have pursued to improve the state of creative QC. + +First, we have tools that plug in sensible values throughout the ad assembly process and reduce the likelihood of errors. + +Then, we minimize the total volume of QC issues encountered by adding validations and correctness checks throughout the ad assembly process. For eg. we surface a warning when character limits on Facebook video ads are exceeded. + +![](https://cdn-images-1.medium.com/max/800/0*e-_QuY5UR1T24BMR) + +**WARNINGS DURING AD ASSEMBLY** + +Secondly, we run suites of automated tests that help identify if there are any technical issues that are present in the ad unit that may negatively impact either the functionality or cause negative side-effects to the user-experience. + +![](https://cdn-images-1.medium.com/max/800/0*htbGIBapUv-gh_S1) + +**SAMPLE AUTOMATED SCAN FROM A DISPLAY AD** + +Most recently, we’ve started leveraging machine vision to handle some QC tasks. For eg. depending on where an ad needs to be delivered, there might have to be the need to add specific rating images. To verify that the right rating image was applied in the video creation process, we now use an image detection algorithm developed by our Cloud Media Systems team. 
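As a rough, hypothetical illustration of that kind of check (not the actual Cloud Media Systems algorithm), a template-matching pass with OpenCV over a frame sampled from the rendered ad could flag creatives where the expected rating card never shows up:

```
# Illustrative sketch only (not Netflix's internal QC pipeline): check whether a
# known rating card appears in a frame sampled from the finished ad video.
# Both file names are placeholders.
import cv2

frame = cv2.imread("sampled_ad_frame.png")            # frame exported from the ad
template = cv2.imread("expected_rating_card.png")     # rating image that must appear

result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, _ = cv2.minMaxLoc(result)

# If no region of the frame matches the rating card closely enough,
# route the creative to manual QC instead of trafficking it.
if best_score < 0.8:
    print("QC warning: expected rating card not found (score=%.2f)" % best_score)
```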
As the volume of AV centric creatives continues to scale and increase over time, we will be adding more such solutions to our overall workflow. + +![](https://cdn-images-1.medium.com/max/600/0*OF25W7mXzgtEoFj5) + +**SAMPLE RATING IMAGE QC-ED WITH COMPUTER VISION** + +In addition to the functional correctness, we also care a whole lot about semantic QC — i.e for our marketing users to determine if the ads are being true to their creative goals and representing the tone and voice of the content and of the Netflix brand accurately. + +One of the core tenets around which our ad platform is built is immediate updates with live renderings across the board. This, coupled with the fact that our users can identify and make pinpointed updates with broad implications easily, allows them to fix issues as fast as they can find them. Our users are also able to collaborate on creative feedback, reviews much more efficiently by sharing **_tearsheets_** as needed. A tearsheet is a preview of the final ad after it has been locked and is used to get final clearance ahead of launch. + +Given how important this process is to the overall health and success of our advertising campaigns, we’re investing heavily on QC automation infrastructure. We’re also actively working on enabling sophisticated task management, status tracking and notification workflows that help us scale to even higher orders of magnitude in a sustainable way. + +### Ad Catalog Management + +Once the ads are prepared, instead of directly trafficking them as such, we decouple the ad creation, assembly from ad trafficking with a “catalog” layer. + +A catalog picks the sets of ads to run with based on the intent of the campaign — Is it meant for building title awareness or for acquisition marketing? Are we running a campaign for a single movie or show or does it highlight multiple titles or is it a brand-centric asset? Is this a pre-launch campaign or a post-launch campaign? + +Once a definition is assigned by the user, an automated catalog handles the following concerns amongst other things : + +* Uses aggregate first party data and machine-learnt models, user configuration, ad performance data etc. to manage the creatives it delivers +* Automatically makes requests for production of ads that are needed but not available already +* Reacts to changing asset availability, recommendation data, blacklisting etc. +* Simplifies user workflows — management of pre-launch and post-launch phases of the campaign, scheduling content refreshes etc. +* Collects metrics and track asset usage and efficiency + +The catalog is hence a very powerful tool as it optimizes itself and hence the campaign it’s supporting — in effect, it turns our first party data into an “intelligence-layer”. + +### Personalization and A/B Tests + +All of this can add to a sum greater than its parts — for eg. using this technology, we can now run a **_Global Scale Vehicle_** — an always-on / evergreen, auto-optimizing campaigns powered by content performance data and ad performance data. Along with automatic budget allocation algorithms (we’ll discuss it in the next blog post in this series), this tames the operational complexity very effectively. As a result, our marketing users get to focus to building amazing creatives and formulating A/B tests and market plans on their end, and our automated catalogs help to deliver the right creative to the right place in a hands off fashion — automating the ad selection and personalization. 
+ +In order to understand why this is a game changer, let’s reflect on the previous approach — every title that needed to be launched had to involve planning on budgeting, targeting, which regions to support any title in, how long to run and to what spend levels, etc. + +This was a phenomenally hard task in the face of our ever increasing content library, breadth and nuances of marketing to nearly all countries of the world and the number of platforms and formats needing support to reach our addressable audience. Secondly, it was challenging to react fast enough to unexpected variations in creative performance all while also focusing on upcoming campaigns and launches. + +![](https://cdn-images-1.medium.com/max/800/1*TuPBPYY83i85z6vYN7lTsQ.png) + +In true, Netflix fashion, we arrived at this model through a series of A/B tests — originally, we ran several tests learning that an always-on ad catalog with personalized delivery outperformed our previous tentpole launch approach. We then ran many more follow-ups to determine how to do it well on different platforms. As one would imagine, this is fundamentally a process of continuous learning and we’re pleasantly surprised to find huge, successive improvements on our optimization metrics as we’ve continued to run growing number of marketing A/B tests around the world. + +### Service Architecture + +We enable this technology using a number of Java and Groovy based microservices that tap into various NoSQL stores such as Cassandra and Elasticsearch and use Kafka, Hermes to glue the different parts by either transporting data or triggering events that result in [dockerized micro-applications](https://medium.com/netflix-techblog/the-evolution-of-container-usage-at-netflix-3abfc096781b) getting invoked on [Titus](https://medium.com/netflix-techblog/titus-the-netflix-container-management-platform-is-now-open-source-f868c9fb5436). + +![](https://cdn-images-1.medium.com/max/800/1*6_BrSaP_JSBsJPZP0RPGzA.png) + +![](https://cdn-images-1.medium.com/max/600/1*H6bB68gFOfg3mjQ672j5xQ.png) + +We use [RxJava](https://github.com/ReactiveX/RxJava) fairly heavily and the ad server which handles real-time requests for servicing display and VAST videos uses RxNetty as it’s application framework and it offers customizability while bringing minimal features and associated overheads. For the ads middle tier application server, we use a Tomcat / Jersey / Guice powered service as it offers way more features and easy integrations for it’s concerns such as easy authentication and authorization, out-of-the-box support for Netflix’s cloud ecosystem as we lack of strict latency and throughput constraints. + +### Future + +Although we’ve had the opportunity to build a lot of technology in the last few years, the practical reality is that our work is far from done. + +We’ve had a high degree of progress on some ad platforms, we’re barely getting started on others and there’s some we aren’t even ready to think of, just yet. On some, we’ve hit the entirety of ad creation, assembly and management and QC, on others, we’ve not even scratched the full surface of just plain assembly. + +Automation and machine learning have gotten us pretty far — but our organizational appetite for doing more and doing better is far outpacing the speed with which can build these systems. 
With every A/B test having us think of more avenues of exploration and in using data to power analysis and prediction in various aspects of our ad workflows, we’ve got a lot of interesting challenges to look forward to. + +### Closing + +In summary, we’ve discussed how we build unique ad technology that helps us add both scale and add intelligence into advertising efforts. Some of the details themselves are worth follow-up posts on and we’ll be publishing them in the future. + +To further our marketing technology journey, we’ll have the next blog shortly that moves the story forward towards how we support marketing analytics from a variety of platforms and make it possible to compare proverbial apples and oranges and use it to optimize campaign spend. + +If you’re interested in joining us in working on some of these opportunities within Netflix’s Marketing Tech, [**we’re hiring**](https://sites.google.com/netflix.com/adtechjobs/ad-tech-engineering)**!** :) + +> 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 + + +--- + +> [掘金翻译计划](https://github.com/xitu/gold-miner) 是一个翻译优质互联网技术文章的社区,文章来源为 [掘金](https://juejin.im) 上的英文分享文章。内容覆盖 [Android](https://github.com/xitu/gold-miner#android)、[iOS](https://github.com/xitu/gold-miner#ios)、[前端](https://github.com/xitu/gold-miner#前端)、[后端](https://github.com/xitu/gold-miner#后端)、[区块链](https://github.com/xitu/gold-miner#区块链)、[产品](https://github.com/xitu/gold-miner#产品)、[设计](https://github.com/xitu/gold-miner#设计)、[人工智能](https://github.com/xitu/gold-miner#人工智能)等领域,想要查看更多优质译文请持续关注 [掘金翻译计划](https://github.com/xitu/gold-miner)、[官方微博](http://weibo.com/juejinfanyi)、[知乎专栏](https://zhuanlan.zhihu.com/juejinfanyi)。 From 5336bdf4c13a0054c078e6bd3c792619089fd220 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E8=B5=B5=E5=B0=8F=E7=94=9F?= Date: Mon, 7 Jan 2019 15:23:01 +0800 Subject: [PATCH 24/54] =?UTF-8?q?=E5=86=8D=E7=9C=8B=20Flask=20=E8=A7=86?= =?UTF-8?q?=E9=A2=91=E6=B5=81=20(#4889)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * 翻译完成 * 校对后的修改 * Update flask-video-streaming-revisited.md * Update flask-video-streaming-revisited.md --- TODO1/flask-video-streaming-revisited.md | 128 +++++++++++------------ 1 file changed, 64 insertions(+), 64 deletions(-) diff --git a/TODO1/flask-video-streaming-revisited.md b/TODO1/flask-video-streaming-revisited.md index 98ca5002c3c..f6d3a8a133c 100644 --- a/TODO1/flask-video-streaming-revisited.md +++ b/TODO1/flask-video-streaming-revisited.md @@ -2,33 +2,33 @@ > * 原文作者:[Miguel Grinberg](https://blog.miguelgrinberg.com) > * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) > * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/flask-video-streaming-revisited.md](https://github.com/xitu/gold-miner/blob/master/TODO1/flask-video-streaming-revisited.md) -> * 译者: -> * 校对者: +> * 译者:[zhmhhu](https://github.com/zhmhhu) +> * 校对者:[1992chenlu](https://github.com/1992chenlu) -# Flask Video Streaming Revisited +# 再看 Flask 视频流 ![](https://blog.miguelgrinberg.com/static/images/video-streaming-revisited.jpg) -Almost three years ago I wrote an article on this blog titled [Video Streaming with Flask](https://juejin.im/post/5bea86fc518825158c531e9c), in which I presented a very modest streaming server that used a Flask generator view function to stream a [Motion-JPEG](https://en.wikipedia.org/wiki/Motion_JPEG) stream to web browsers. 
My intention with that article was to show a simple, yet practical use of [streaming responses](http://flask.pocoo.org/docs/0.12/patterns/streaming/), a not very well known feature in Flask. +大约三年前,我在这个名为 [Video Streaming with Flask](https://juejin.im/post/5bea86fc518825158c531e9c) 的博客上写了一篇文章,其中我提出了一个非常实用的流媒体服务器,它使用 Flask 生成器视图函数将 [Motion-JPEG](https://en.wikipedia.org/wiki/Motion_JPEG) 流传输到 Web 浏览器。在那片文章中,我的意图是展示简单而实用的[流式响应](http://flask.pocoo.org/docs/0.12/patterns/streaming/),这是 Flask 中一个不为人知的特性。 -That article is extremely popular, but not because it teaches how to implement streaming responses, but because a lot of people want to implement streaming video servers. Unfortunately, my focus when I wrote the article was not on creating a robust video server, so I frequently get questions and requests for advice from those who want to use the video server for a real application and quickly find its limitations. So today I'm going to revisit my streaming video server and describe a few improvements I've made to it. +那篇文章非常受欢迎,倒并不是因为它教会了读者如何实现流式响应,而是因为很多人都希望实现流媒体视频服务器。不幸的是,当我撰写文章时,我的重点不在于创建一个强大的视频服务器所以我经常收到读者的提问及寻求建议的请求,他们想要将视频服务器用于实际应用程序,但很快发现了它的局限性。 -## Recap: Using Flask's Streaming for Video +## 回顾:使用 Flask 的视频流 -I recommend you read the [original article](https://blog.miguelgrinberg.com/post/video-streaming-with-flask) to familiarize yourself with my project. In short, this is a Flask server that uses a streaming response to provide a stream of video frames captured from a camera in Motion JPEG format. This format is very simple and not the most efficient, but has the advantage that all browsers support it natively and without any client-side scripting required. It is a fairly common format used by security cameras for that reason. To demonstrate the server, I implemented a camera driver for a Raspberry Pi with its camera module. For those that didn't have a Pi with a camera at hand, I also wrote an emulated camera driver that streams a sequence of jpeg images stored on disk. +我建议您阅读[原始文章](https://blog.miguelgrinberg.com/post/video-streaming-with-flask)以熟悉我的项目。简而言之,这是一个 Flask 服务器,它使用流式响应来提供从 Motion JPEG 格式的摄像机捕获的视频帧流。这种格式非常简单,虽然并不是最有效的,它具有以下优点:所有浏览器都原生支持它,无需任何客户端脚本。出于这个原因,它是安防摄像机使用的一种相当常见的格式。为了演示服务器,我使用相机模块为树莓派编写了一个相机驱动程序。对于那些没有没有树莓派,只有手持相机的人,我还写了一个模拟的相机驱动程序,它可以传输存储在磁盘上的一系列 jpeg 图像。 -## Running the Camera Only When There Are Viewers +## 仅在有观看者时运行相机 -One aspect of the original streaming server that people did not like is that the background thread that captures video frames from the Raspberry Pi camera starts when the first client connects to the stream, but then it never stops. A more efficient way to handle this background thread is to only have it running while there are viewers, so that the camera can be turned off when nobody is connected. +人们不喜欢的原始流媒体服务器的一个原因是,当第一个客户端连接到流时,从树莓派的摄像头捕获视频帧的后台线程就开始了,但之后它永远不会停止。处理此后台线程的一种更有效的方法是仅在有查看者的情况下使其运行,以便在没有人连接时可以关闭相机。 -I implemented this improvement a while ago. The idea is that every time a frame is accessed by a client the current time of that access is recorded. The camera thread checks this timestamp and if it finds it is more than ten seconds old it exits. With this change, when the server runs for ten seconds without any clients it will shut its camera off and stop all background activity. As soon as a client connects again the thread is restarted. 
+我刚刚实施了这项改进。这个想法是,每次客户端访问视频帧时,都会记录该访问的当前时间。相机线程检查此时间戳,如果发现它超过十秒,则退出。通过此更改,当服务器在没有任何客户端的情况下运行十秒钟时,它将关闭其相机并停止所有后台活动。一旦客户端再次连接,线程就会重新启动。 -Here is a brief description of the changes: +以下是对这项改进的简要说明: ``` class Camera(object): # ... - last_access = 0 # time of last client access to the camera + last_access = 0 # 最后一个客户端访问相机的时间 # ... @@ -42,24 +42,24 @@ class Camera(object): # ... for foo in camera.capture_continuous(stream, 'jpeg', use_video_port=True): # ... - # if there hasn't been any clients asking for frames in - # the last 10 seconds stop the thread + # 如果没有任何客户端访问视屏帧 + # 10 秒钟之后停止线程 if time.time() - cls.last_access > 10: break cls.thread = None ``` -## Simplifying the Camera Class +## 简化相机类 -A common problem that a lot of people mentioned to me is that it is hard to add support for other cameras. The `Camera` class that I implemented for the Raspberry Pi is fairly complex because it uses a background capture thread to talk to the camera hardware. +很多人向我提到的一个常见问题是很难添加对其他相机的支持。我为树莓派实现的 `Camera` 类相当复杂,因为它使用后台捕获线程与相机硬件通信。 -To make this easier, I decided to move the generic functionality that does all the background processing of frames to a base class, leaving only the task of getting the frames from the camera to implement in subclasses. The new `BaseCamera` class in module `base_camera.py` implements this base class. Here is what this generic thread looks like: +为了使它更容易,我决定将对于帧的所有后台处理的通用功能移动到基类,只留下从相机获取帧以在子类中实现的任务。模块 `base_camera.py` 中的新 `BaseCamera` 类实现了这个基类。以下是这个通用线程的样子: ``` class BaseCamera(object): - thread = None # background thread that reads frames from camera - frame = None # current frame is stored here by background thread - last_access = 0 # time of last client access to the camera + thread = None # 从摄像机读取帧的后台线程 + frame = None # 后台线程将当前帧存储在此 + last_access = 0 # 最后一个客户端访问摄像机的时间 # ... @staticmethod @@ -75,8 +75,8 @@ class BaseCamera(object): for frame in frames_iterator: BaseCamera.frame = frame - # if there hasn't been any clients asking for frames in - # the last 10 seconds then stop the thread + # 如果没有任何客户端访问视屏帧 + # 10 秒钟之后停止线程 if time.time() - BaseCamera.last_access > 10: frames_iterator.close() print('Stopping camera thread due to inactivity.') @@ -84,14 +84,14 @@ class BaseCamera(object): BaseCamera.thread = None ``` -This new version of the Raspberry Pi's camera thread has been made generic with the use of yet another generator. The thread expects the `frames()` method (which is a static method) to be a generator implemented in subclasses that are specific to different cameras. Each item returned by the iterator must be a video frame, in jpeg format. +这个新版本的树莓派的相机线程使用了另一个生成器而变得通用了。线程期望 `frames()` 方法(这是一个静态方法)成为一个生成器,这个生成器在特定的不同摄像机的子类中实现。迭代器返回的每个项目必须是 jpeg 格式的视频帧。 -Here is how the emulated camera that returns static images can be adapted to work with this base class: +以下展示的是返回静态图像的模拟摄像机如何适应此基类: ``` class Camera(BaseCamera): - """An emulated camera implementation that streams a repeated sequence of - files 1.jpg, 2.jpg and 3.jpg at a rate of one frame per second.""" + """模拟相机的实现过程,将 +     文件1.jpg,2.jpg和3.jpg形成的重复序列以每秒一帧的速度以流式文件的形式传输。""" imgs = [open(f + '.jpg', 'rb').read() for f in ['1', '2', '3']] @staticmethod @@ -101,9 +101,9 @@ class Camera(BaseCamera): yield Camera.imgs[int(time.time()) % 3] ``` -Note how in this version the `frames()` generator forces a frame rate of one frame per second by simply sleeping that amount between frames. 
+注意在这个版本中,`frames()` 生成器如何通过简单地在帧之间休眠来形成每秒一帧的速率。 -The camera subclass for the Raspberry Pi camera also becomes much simpler with this redesign: +通过重新设计,树莓派相机的相机子类也变得更加简单: ``` import io @@ -128,9 +128,9 @@ class Camera(BaseCamera): stream.truncate() ``` -## OpenCV Camera Driver +## OpenCV 相机驱动 -A fair number of users complained that they did not have access to a Raspberry Pi equipped with a camera module, so they could not try this server with anything other than the emulated camera. Now that adding camera drivers is much easier, I wanted to also have a camera based on [OpenCV](http://opencv.org/), which supports most USB webcams and laptop cameras. Here is a simple camera driver for it: +很多用户抱怨他们无法访问配备相机模块的树莓派,因此除了模拟相机之外,他们无法尝试使用此服务器。现在添加相机驱动程序要容易得多,我想要一个基于 [OpenCV](http://opencv.org/) 的相机,它支持大多数 USB 网络摄像头和笔记本电脑相机。这是一个简单的相机驱动程序: ``` import cv2 @@ -144,24 +144,24 @@ class Camera(BaseCamera): raise RuntimeError('Could not start camera.') while True: - # read current frame + # 读取当前帧 _, img = camera.read() - # encode as a jpeg image and return it + # 编码成一个 jpeg 图片并且返回 yield cv2.imencode('.jpg', img)[1].tobytes() ``` -With this class, the first video camera reported by your system will be used. If you are using a laptop, this is likely your internal camera. If you are going to use this driver, you need to install the OpenCV bindings for Python: +使用此类,将使用您系统检测到的第一台摄像机。如果您使用的是笔记本电脑,这可能是您的内置摄像头。如果要使用此驱动程序,则需要为 Python 安装 OpenCV 绑定: ``` $ pip install opencv-python ``` -## Camera Selection +## 相机选择 -The project now supports three different camera drivers: emulated, Raspberry Pi and OpenCV. To make it easier to select which driver to use without having to edit the code, the Flask server looks for a `CAMERA` environment variable to know which class to import. This variable can be set to `pi` or `opencv`, and if it isn't set, then the emulated camera is used by default. +该项目现在支持三种不同的摄像头驱动程序:模拟、树莓派和 OpenCV。为了更容易选择使用哪个驱动程序而不必编辑代码,Flask 服务器查找 `CAMERA` 环境变量以了解要导入的类。此变量可以设置为 `pi` 或 `opencv`,如果未设置,则默认使用模拟摄像机。 -The way this is implemented is fairly generic. Whatever the value of the `CAMERA` environment variable is, the server will expect the driver to be in a module named `camera_$CAMERA.py`. The server will import this module and then look for a `Camera` class in it. The logic is actually quite simple: +实现它的方式非常通用。无论 `CAMERA` 环境变量的值是什么,服务器都希望驱动程序位于名为 `camera_$CAMERA.py` 的模块中。服务器将导入该模块,然后在其中查找 `Camera`类。逻辑实际上非常简单: ``` from importlib import import_module @@ -174,30 +174,30 @@ else: from camera import Camera ``` -For example, to start an OpenCV session from bash, you can do this: +例如,要从 bash 启动 OpenCV 会话,你可以执行以下操作: ``` $ CAMERA=opencv python app.py ``` -From a Windows command prompt you can do the same as follows: +使用 Windows 命令提示符,你可以执行以下操作: ``` $ set CAMERA=opencv $ python app.py ``` -## Performance Improvements +## 性能优化 -Another observation that was made a few times is that the server consumes a lot of CPU. The reason for this is that there is no synchronization between the background thread capturing frames and the generator feeding those frames to the client. Both run as fast as they can, without regards for the speed of the other. +在另外几次观察中,我们发现服务器消耗了大量的 CPU。其原因在于后台线程捕获帧与将这些帧回送到客户端的生成器之间没有同步。两者都尽可能快地运行,而不考虑另一方的速度。 -In general it makes sense for the background thread to run as fast as possible, because you want the frame rate to be as high as possible for each client. 
But you definitely do not want the generator that delivers frames to a client to ever run at a faster rate than the camera is producing frames, because that would mean duplicate frames will be sent to the client. While these duplicates do not cause any problems, they increase CPU and network usage without any benefit. +通常,后台线程尽可能快地运行是有道理的,因为你希望每个客户端的帧速率尽可能高。但是你绝对不希望向客户端提供帧的生成器以比生成帧的相机更快的速度运行,因为这意味着将重复的帧发送到客户端。虽然这些重复项不会导致任何问题,但它们除了增加 CPU 和网络负载之外没有任何好处。 -So there needs to be a mechanism by which the generator only delivers original frames to the client, and if the delivery loop inside the generator is faster than the frame rate of the camera thread, then the generator should wait until a new frame is available, so that it paces itself to match the camera rate. On the other side, if the delivery loop runs at a slower rate than the camera thread, then it should never get behind when processing frames, and instead it should skip frames to always deliver the most current frame. Sounds complicated, right? +因此需要一种机制,通过该机制,生成器仅将原始帧传递给客户端,并且如果生成器内的传送回路比相机线程的帧速率快,则生成器应该等待直到新帧可用,所以它应该自行调整以匹配相机速率。另一方面,如果传送回路以比相机线程更慢的速率运行,那么它在处理帧时永远不应该落后,而应该跳过某些帧以始终传递最新的帧。听起来很复杂吧? -What I wanted as a solution here is to have the camera thread signal the generators that are running when a new frame is available. The generators can then block while they wait for the signal before they deliver the next frame. In looking through synchronization primitives, I've found that [threading.Event](https://docs.python.org/3.6/library/threading.html#event-objects) is the one that matches this behavior. So basically, each generator should have an event object, and then the camera thread should signal all the active event objects to inform all the running generators when a new frame is available. The generators deliver the frame and reset their event objects, and then go back to wait on them again for the next frame. +我想要的解决方案是,当新帧可用时,让相机线程信号通知生成器运行。然后,生成器可以在它们传送下一帧之前等待信号时阻塞。在查看同步单元时,我发现 [threading.Event](https://docs.python.org/3.6/library/threading.html#event-objects) 是匹配此行为的函数。所以,基本上每个生成器都应该有一个事件对象,然后摄像机线程应该发出信号通知所有活动事件对象,以便在新帧可用时通知所有正在运行的生成器。生成器传递帧并重置其事件对象,然后等待它们再次进行下一帧。 -To avoid having to add event handling logic in the generator, I decided to implement a customized event class that uses the thread id of the caller to automatically create and manage a separate event for each client thread. This is somewhat complex, to be honest, but the idea came from how Flask's context local variables are implemented. The new event class is called `CameraEvent`, and has `wait()`, `set()`, and `clear()` methods. With the support of this class, the rate control mechanism can be added to the `BaseCamera` class: +为了避免在生成器中添加事件处理逻辑,我决定实现一个自定义事件类,该事件类使用调用者的线程 id 为每个客户端线程自动创建和管理单独的事件。说实话,这有点复杂,但这个想法来自于 Flask 的上下文局部变量是如何实现的。新的事件类称为 `CameraEvent`,并具有 `wait()`、`set()` 和 `clear()` 方法。在此类的支持下,可以将速率控制机制添加到 `BaseCamera` 类: ``` class CameraEvent(object): @@ -210,7 +210,7 @@ class BaseCamera(object): # ... def get_frame(self): - """Return the current camera frame.""" + """返回相机的当前帧.""" BaseCamera.last_access = time.time() # wait for a signal from the camera thread @@ -229,27 +229,27 @@ class BaseCamera(object): # ... ``` -The magic that is done in the `CameraEvent` class enables multiple clients to be able to wait individually for a new frame. The `wait()` method uses the current thread id to allocate an individual event object for each client and wait on it. 
The `clear()` method will reset the event associated with the caller's thread id, so that each generator thread can run at its own speed. The `set()` method called by the camera thread sends a signal to the event objects allocated for all clients, and will also remove any events that aren't being serviced by their owners, because that means that the clients associated with those events have closed the connection and are gone. You can see the implementation of the `CameraEvent` class in the [GitHub repository](https://github.com/miguelgrinberg/flask-video-streaming/blob/master/base_camera.py). +在 `CameraEvent` 类中完成的魔法操作使多个客户端能够单独等待新的帧。`wait()` 方法使用当前线程 id 为每个客户端分配单独的事件对象并等待它。`clear()` 方法将重置与调用者的线程 id 相关联的事件,以便每个生成器线程可以以它自己的速度运行。相机线程调用的 `set()` 方法向分配给所有客户端的事件对象发送信号,并且还将删除未提供服务的任何事件,因为这意味着与这些事件关联的客户端已关闭,客户端本身也不存在了。您可以在 [GitHub 仓库](https://github.com/miguelgrinberg/flask-video-streaming/blob/master/base_camera.py)中看到 `CameraEvent` 类的实现。 -To give you an idea of the magnitude of the performance improvement, consider that the emulated camera driver consumed about 96% CPU before this change because it was constantly sending duplicate frames at a rate much higher than the one frame per second being produced. After these changes, the same stream consumes about 3% CPU. In both cases there was a single client viewing the stream. The OpenCV driver went from about 45% CPU down to 12% for a single client, with each new client adding about 3%. +为了让您了解性能改进的程度,请看一下,模拟相机驱动程序在此更改之前消耗了大约 96% 的 CPU,因为它始终以远高于每秒生成一帧的速率发送重复帧。在这些更改之后,相同的流消耗大约 3% 的CPU。在这两种情况下,都只有一个客户端查看视频流。OpenCV 驱动程序从单个客户端的大约 45% CPU 降低到 12%,每个新客户端增加约 3%。 -## Production Web Server +## 部署 Web 服务器 -Lastly, I think if you plan to use this server for real, you should use a more robust web server than the one that comes with Flask. A very good choice is to use Gunicorn: +最后,我认为如果您打算真正使用此服务器,您应该使用比 Flask 附带的服务器更强大的 Web服务器。一个很好的选择是使用 Gunicorn: ``` $ pip install gunicorn ``` -With Gunicorn, you can run the server as follows (remember to set the `CAMERA` environment variable to the selected camera driver first): +有了 Gunicorn,您可以按如下方式运行服务器(请记住首先将 `CAMERA` 环境变量设置为所选的摄像头驱动程序): ``` $ gunicorn --threads 5 --workers 1 --bind 0.0.0.0:5000 app:app ``` -The `--threads 5` option tells Gunicorn to handle at most five concurrent requests. That means that with this number you can get up to five clients to watch the stream simultaneously. The `--workers 1` options limits the server to a single process. This is required because only one process can connect to a camera to capture frames. +`--threads 5` 选项告诉 Gunicorn 最多处理五个并发请求。这意味着设置了这个值之后,您最多可以同时拥有五个客户端来观看视频流。`--workers 1` 选项将服务器限制为单个进程。这是必需的,因为只有一个进程可以连接到摄像头以捕获帧。 -You can increase the number of threads some, but if you find that you need a large number, it will probably be more efficient to use an asynchronous framework instead of threads. Gunicorn can be configured to work with the two frameworks that are compatible with Flask: gevent and eventlet. To make the video streaming server work with these frameworks, there is one small addition to the camera background thread: +您可以增加一些线程数,但如果您发现需要大量线程,则使用异步框架比使用线程可能会更有效。可以将 Gunicorn 配置为使用与 Flask 兼容的两个框架:gevent 和 eventlet。为了使视频流服务器能够使用这些框架,相机后台线程还有一个小的补充: ``` class BaseCamera(object): @@ -264,33 +264,33 @@ class BaseCamera(object): # ... ``` -The only change here is the addition of a `sleep(0)` in the camera capture loop. This is required for both eventlet and gevent, because they use cooperative multitasking. 
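+(补充一段示意代码,说明 `sleep(0)` 在后台线程主循环中所处的位置。下面只是按文中描述还原出的一种可能写法,假设该模块中已经 `import time`,属性名与仓库中 `base_camera.py` 的真实代码可能略有出入,准确代码请以仓库为准。)
+
+```
+class BaseCamera(object):
+    # ...(其余部分同前文)
+
+    @classmethod
+    def _thread(cls):
+        """相机后台线程:不断读取新帧并通知所有客户端。"""
+        frames_iterator = cls.frames()
+        for frame in frames_iterator:
+            BaseCamera.frame = frame
+            BaseCamera.event.set()  # 唤醒所有等待新帧的客户端
+            time.sleep(0)  # 主动让出 CPU,配合 gevent/eventlet 的协作式调度
+
+            # 如果已经有一段时间没有客户端来取帧,就停止后台线程
+            if time.time() - BaseCamera.last_access > 10:
+                frames_iterator.close()
+                break
+        BaseCamera.thread = None
+```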
The way these frameworks achieve concurrency is by having each task release the CPU either by calling a function that does network I/O or explicitly. Since there is no I/O here, the sleep call is what achieves the CPU release.
+这里唯一的变化是在摄像头捕获循环中添加了 `sleep(0)`。这对于 eventlet 和 gevent 都是必需的,因为它们使用协作式多任务处理。这些框架实现并发的方式是让每个任务通过调用执行网络 I/O 的函数或显式执行以释放 CPU。由于此处没有 I/O,因此执行 sleep 函数以实现释放 CPU 的目的。

-Now you can run Gunicorn with the gevent or eventlet workers as follows:
+现在您可以使用 gevent 或 eventlet worker 运行 Gunicorn,如下所示:

```
$ CAMERA=opencv gunicorn --worker-class gevent --workers 1 --bind 0.0.0.0:5000 app:app
```

-Here the `--worker-class gevent` option configures Gunicorn to use the gevent framework (you must install it with `pip install gevent`). If you prefer, `--worker-class eventlet` is also available. The `--workers 1` limits to a single process as above. The eventlet and gevent workers in Gunicorn allocate a thousand concurrent clients by default, so that should be much more than what a server of this kind is able to support anyway.
+这里的 `--worker-class gevent` 选项配置 Gunicorn 使用 gevent 框架(你必须用 `pip install gevent` 安装它)。如果你愿意,也可以使用 `--worker-class eventlet`。如上所述,`--workers 1` 将服务器限制为单个进程。Gunicorn 中的 eventlet 和 gevent worker 默认分配一千个并发客户端,所以这应该远超这种服务器实际能够支持的客户端数量。

-## Conclusion
+## 结论

-All the changes described above are incorporated in the [GitHub repository](https://github.com/miguelgrinberg/flask-video-streaming). I hope you get a better experience with these improvements.
+上述所有更改都包含在 [GitHub 仓库](https://github.com/miguelgrinberg/flask-video-streaming) 中。希望这些改进能为你带来更好的体验。

-Before concluding, I want to provide quick answers to other questions I have received about this server:
+在结束之前,我想提供有关此服务器的其他问题的快速解答:

-* How to force the server to run at a fixed frame rate? Configure your camera to deliver frames at that rate, then sleep enough time during each iteration of the camera capture loop to also run at that rate.
+* 如何让服务器以固定的帧速率运行?将相机配置为以该速率传送帧,然后在相机捕获循环的每次迭代中休眠足够的时间,使循环也以该速率运行。(下文附有一段示意代码。)

-* How to increase the frame rate? The server as described here delivers frames as fast as possible. If you need better frame rates, you can try configuring your camera for a smaller frame size.
+* 如何提高帧速率?本文描述的服务器会以尽可能快的速率提供帧。如果您需要更高的帧速率,可以尝试将相机配置成更小的帧尺寸。

-* How to add sound? That's really difficult. The Motion JPEG format does not support audio. You are going to need to stream the audio separately, and then add an audio player to the HTML page. Even if you manage to do all this, synchronization between audio and video is not going to be very accurate.
+* 如何添加声音?那真的很难。Motion JPEG 格式不支持音频。你将需要使用单独的流传输音频,然后将音频播放器添加到 HTML 页面。即使你设法完成了所有这些操作,音频和视频之间的同步也不会非常准确。

-* How to save the stream to disk on the server? Just save the sequence of JPEG files in the camera thread. For this you may want to remove the automatic mechanism that ends the background thread when there are no viewers.
+* 如何将流保存到服务器上的磁盘中?只需将 JPEG 文件的序列保存在相机线程中即可。为此,你可能希望移除在没有查看器时结束后台线程的自动机制。

-* How to add playback controls to the video player? Motion JPEG was not made for interactive operation by the user, but if you are set on doing this, with a little bit of trickery it may be possible to implement playback controls. If the server saves all jpeg images, then a pause can be implemented by having the server deliver the same frame over and over. When the user resumes playback, the server will have to deliver "old" images that are loaded from disk, since now the user would be in DVR mode instead of watching the stream live. This could be a very interesting project! 
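+(针对上面“固定帧速率”那个问题,这里给出一段示意代码:以前文的 OpenCV 驱动为例,把帧率限制在一个固定值上。其中 `FRAME_RATE` 只是为演示而假设的常量,并不是项目里已有的配置项;思路就是在每次迭代里扣除采集和编码的耗时,再休眠剩余的时间。)
+
+```
+import time
+
+import cv2
+from base_camera import BaseCamera
+
+
+class Camera(BaseCamera):
+    FRAME_RATE = 10  # 假设希望固定为每秒 10 帧
+
+    @staticmethod
+    def frames():
+        camera = cv2.VideoCapture(0)
+        if not camera.isOpened():
+            raise RuntimeError('Could not start camera.')
+
+        delay = 1.0 / Camera.FRAME_RATE
+        while True:
+            start = time.time()
+            _, img = camera.read()
+            yield cv2.imencode('.jpg', img)[1].tobytes()
+            # 扣除本次读取与编码花费的时间,只休眠剩余的部分
+            time.sleep(max(0, delay - (time.time() - start)))
+```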
+如何将播放控件添加到视频播放器?Motion JPEG 不允许用户进行交互式操作,但如果你想要这个功能,只需要一点点技巧就可以实现播放控制。如果服务器保存所有 jpeg 图像,则可以通过让服务器一遍又一遍地传送相同的帧来实现暂停。当用户恢复播放时,服务器将必须提供从磁盘加载的“旧”图像,因为现在用户处于 DVR 模式而不是实时观看流。这可能是一个非常有趣的项目! -That is all for now. If you have other questions please let me know! +以上就是本文的所有内容。如果你有其他问题,请告诉我们! > 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 From d1ab5e81300c1b08a93293c28580919c4bea633b Mon Sep 17 00:00:00 2001 From: Tom Huang Date: Mon, 7 Jan 2019 15:32:01 +0800 Subject: [PATCH 25/54] =?UTF-8?q?=E7=8A=B6=E6=80=81=E6=81=A2=E5=A4=8D?= =?UTF-8?q?=E5=85=A5=E9=97=A8=E6=95=99=E7=A8=8B=20(#4921)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * 状态恢复入门教程 * state-restoration-tutorial-getting-started: 根据review反馈修改文案 * Update state-restoration-tutorial-getting-started.md --- ...te-restoration-tutorial-getting-started.md | 177 +++++++++--------- 1 file changed, 90 insertions(+), 87 deletions(-) diff --git a/TODO1/state-restoration-tutorial-getting-started.md b/TODO1/state-restoration-tutorial-getting-started.md index 48241dc3ddc..49b73cd0e04 100644 --- a/TODO1/state-restoration-tutorial-getting-started.md +++ b/TODO1/state-restoration-tutorial-getting-started.md @@ -2,62 +2,62 @@ > * 原文作者:[Luke Parham](https://www.raywenderlich.com/1395-state-restoration-tutorial-getting-started) > * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) > * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/state-restoration-tutorial-getting-started.md](https://github.com/xitu/gold-miner/blob/master/TODO1/state-restoration-tutorial-getting-started.md) -> * 译者: -> * 校对者: +> * 译者:[nanjingboy](https://github.com/nanjingboy) +> * 校对者:[chausson](https://github.com/chausson) -# State Restoration Tutorial: Getting Started +# 状态恢复入门教程 -In this state restoration tutorial, learn how to use Apple’s State Restoration APIs to enhance a user’s experience of your app. +在这篇状态恢复教程中,我们将了解如何使用 Apple 的状态恢复接口来提升用户的应用体验。 -_Note_: Updated for Xcode 7.3, iOS 9.3, and Swift 2.2 on 04-03-2016 +**注意**:Xcode 7.3、iOS 9.3 和 Swift 2.2 已于 2016-04-03 更新。 -State restoration is an often-overlooked feature in iOS that lets a user return to their app in the exact state in which they left it – regardless of what’s happened behind the scenes. +在 iOS 系统中,状态恢复机制是一个经常被忽略的特性,当用户再次打开 app 的时候,它能够精确的恢复到退出之前的状态 - 而不用关心发生了什么。 -At some point, the operating system may need to remove your app from memory; this could significantly interrupt your user’s workflow. Your user also shouldn’t have to worry about switching to another app and losing all their work. This is where state restoration saves the day. +某些时候,操作系统可能需要从内存中删除你的应用;这可能会严重中断用户的工作流。你的用户再也不必担心因为切换到另一个应用而影响到工作的事情了。这就是状态恢复机制所起到的作用。 -In this state restoration tutorial, you’ll update an existing app to add preservation and restoration functionalities and enhance the user experience for scenarios where their workflow is likely to be interrupted. +在这篇恢复教程中,你将更新现有应用以添加保留和恢复功能,并在其工作流可能被中断的情况下提升用户体验。 -## Getting Started +## 入门 -Download the [starter project](https://koenig-media.raywenderlich.com/uploads/2016/01/PetFinder-Starter.zip) for this tutorial. The app is named _Pet Finder_; it’s a handy app for people who happen to be seeking the companionship of a furry feline friend. 
+下载本教程的 [入门项目](https://koenig-media.raywenderlich.com/uploads/2016/01/PetFinder-Starter.zip)。该应用名为「**Pet Finder**」;对于那些碰巧在寻找毛茸茸猫科动物陪伴的人来说,这是一款方便的应用。 -Run the app in the simulator; you’ll be presented with an image of a cat that’s eligible for adoption: +运行该应用;你将会看到一张关于猫的图片,这代表你有机会可以领养它: [![Pet Finder](https://koenig-media.raywenderlich.com/uploads/2015/11/petfinder_intro_1-281x500.png)](https://koenig-media.raywenderlich.com/uploads/2015/11/petfinder_intro_1.png) -Swipe right to be paired up with your new furry friend; swipe left to indicate you’d rather pass on this ball of fluff. You can view a list of all your current matches from the _Matches_ tab bar: +向右滑动即可与新的毛茸茸的朋友配对;向左滑动表示你想要继续传递这个绒毛球小猫。你可以从**匹配**选项卡栏中查看当前所有的匹配列表: [![Matches](https://koenig-media.raywenderlich.com/uploads/2015/10/petfinder_matches_1-281x500.png)](https://koenig-media.raywenderlich.com/uploads/2015/10/petfinder_matches_1.png) -Tap to view more details about a selected friend: +点击来查看所选中朋友的更多详细信息: [![Details](https://koenig-media.raywenderlich.com/uploads/2015/10/petfinder_details_1-282x500.png)](https://koenig-media.raywenderlich.com/uploads/2015/10/petfinder_details_1.png) -You can even edit your new friend’s name (or age, if you’re into bending the truth): +你甚至可以编辑你新朋友的名字(或年龄,如果你是在扭曲事实): [![Edit](https://koenig-media.raywenderlich.com/uploads/2015/10/petfinder_edit_2-281x500.png)](https://koenig-media.raywenderlich.com/uploads/2015/10/petfinder_edit_2.png) -You’d hope that when you leave this app annd return to it later, you’d be brought back to the same furry friend you were last viewing. But is this truly the case with this app? The only way to tell is to test it. +你希望当你离开该应用然后返回时,你会被带回到上一次查看的同一个毛茸茸朋友。但真的是这样吗?要知道答案的唯一方法就是测试它。 -## Testing State Restoration +## 状态恢复测试 -Run the app, swipe right on at least one cat, view your matches, then select one cat to view his or her details. Press _Cmd+Shift+H_ to return to the home screen. Any state preservation logic, should it exist, would run at this point. +运行应用,向右滑动至少一只猫,查看你的匹配项,然后选择一只猫并查看他或她的详细信息。按组合键 **Cmd + Shift + H** 返回主页面。如果存在任何逻辑上的状态,它都会被保存并且都将在此时运行。 -Next, stop the app from Xcode: +接下来,通过 Xcode 停止应用: [![Stop App](https://koenig-media.raywenderlich.com/uploads/2015/10/petfinder_stop_app-480x41.png)](https://koenig-media.raywenderlich.com/uploads/2015/10/petfinder_stop_app.png) -The state restoration framework intentionally discards any state information when the user manually kills an app, or when the state restoration process fails. These checks exist so that your app doesn’t get stuck in an infinite loop of bad states and restoration crashes. Thanks, Apple! :\] +当用户手动杀死应用或状态恢复失败时,状态恢复框架将丢弃任何状态信息。之所以存在这些检查,以避免你的应用不会陷入无线循环的错误状态以及恢复崩溃。谢谢,Apple!:\] -_Note:_ You _cannot_ kill the app yourself via the app switcher, otherwise state restoration simply won’t work. +**注意**:你**无法**通过应用切换器自行终止应用,否则状态恢复将无法正常工作。 -Launch the app again; instead of returning you to the pet detail view, you’re back at the home screen. Looks like you’ll need to add some state restoration logic yourself. +再次启动应用;你将回到主屏幕,而不是宠物详情视图。看起来你需要自己添加一些状态恢复逻辑。 -## Enabling State Restoration +## 实现状态恢复 -The first step in setting up state restoration is to enable it in your app delegate. 
Open _AppDelegate.swift_ and add the following code: +设置状态恢复的第一步是在你的应用代理中启用它,打开 **AppDelegate.swift** 并添加以下代码: -``` +```swift func application(application: UIApplication, shouldSaveApplicationState coder: NSCoder) -> Bool { return true } @@ -67,153 +67,153 @@ func application(application: UIApplication, shouldRestoreApplicationState coder } ``` -There are five app delegate methods that manage state restoration. Returning `true` in `application(_:shouldSaveApplicationState:)` instructs the system to save the state of your views and view controllers whenever the app is backgrounded. Returning `true` in `application(_:shouldRestoreApplicationState:)` tells the system to attempt to restore the original state when the app restarts. +应用代理中有五个方法来管理状态恢复。返回 `true` 的 `application(_:shouldSaveApplicationState:)`,告诉系统保存 view 的状态,并在应用处于后台运行状态时查看 view controller。返回 `true` 的 `application(_:shouldRestoreApplicationState:)`,告诉系统在应用重新启动时尝试恢复原始状态。 -You can make these delegate methods return `false` in certain scenarios, such as while testing or when the user’s running an older version of your app that can’t be restored. +你可以在某些情况下让这些代理方法返回 `false`,例如在测试时或用户运行的应用的旧版本无法恢复时。 -Build and run your app, and navigate to a cat’s detail view. Press _Cmd+Shift+H_ to background your app, then stop the app from Xcode. You’ll see the following: +构建并运行你的应用,然后导航到猫的详情页。按住组合键 **Cmd + Shift + H** 让你的应用进入后台,然后通过 Xcode 停止应用。你将看到以下内容: [![Pet Finder](https://koenig-media.raywenderlich.com/uploads/2015/11/petfinder_intro_1-281x500.png)](https://koenig-media.raywenderlich.com/uploads/2015/11/petfinder_intro_1.png) [![confused](https://koenig-media.raywenderlich.com/uploads/2015/10/confused-365x320.png)](https://koenig-media.raywenderlich.com/uploads/2015/10/confused.png) -It’s the exact same thing you saw before! Just opting-in to state restoration isn’t quite enough. You’ve enabled preservation and restoration in your app, but the view controllers aren’t yet participating. To remedy this, you’ll need to give each of these scenes a _restoration identifier_. +与你之前看到的完全相同!只选择进行状态恢复还不够。虽然你已在应用中启用了保存和恢复,但 view controller 尚未参与。要解决这个问题,你需要为每个场景提供一个**恢复标识符**。 -## Setting Restoration Identifiers +## 设置恢复标识符 -A restoration identifier is simply a string property of views and view controllers that UIKit uses to restore those objects to their former glory. The actual content of those properties isn’t critical, as long as it’s unique. It’s the presence of a value that communicates to UIKit your desire to preserve this object. +恢复标识符只是一个 view 和 view controller 的字符串属性,UIKit 使用它来将这些对象恢复到之前的状态。它存在一个 UIKit 与你希望保留的对象通讯的值。只要这些属性的值是唯一的,它们的实际内容并不重要。 -Open _Main.storyboard_ and you’ll see a tab bar controller, a navigation controller, and three custom view controllers: +打开 **Main.storyboard**,你将看到一个 tab bar controller、一个 navigation controller 和三个自定义 view controller: [![cinder_storyboard](https://koenig-media.raywenderlich.com/uploads/2015/10/cinder_storyboard-700x350.png)](https://koenig-media.raywenderlich.com/uploads/2015/10/cinder_storyboard.png) -Restoration identifiers can either be set in code or in Interface Builder. To make things easy, in this tutorial you’ll set them in Interface Builder. You _could_ go in and think up a unique name for each view controller, but Interface Builder has a handy option named _Use Storyboard ID_ that lets you use your Storyboard IDs for restoration identifiers as well. 
+恢复标识符可以在代码中或在 Interface Builder 中设置。简单起见,在本教程中你将在 Interface Builder 中进行设置。你**可以**进入并为每一个 view controller 设置一个唯一的名称,但 Interface Builder 有一个 **Use Storyboard ID** 的快捷选项,它允许你将 Storyboard ID 用于恢复标识符。 -In _Main.storyboard_, click on the tab bar controller and open the Identity Inspector. Enable the _Use Storyboard ID_ option as shown below: +在 **Main.storyboard** 中,单击 tab bar controller 并打开 Identity Inspector。启用 **Use Storyboard ID** 选项,如下所示: [![Use Storyboard ID](https://koenig-media.raywenderlich.com/uploads/2015/10/petfinder_enable_restoration_id-480x320.png)](https://koenig-media.raywenderlich.com/uploads/2015/10/petfinder_enable_restoration_id.png) -This will archive the view controller and restore it during the state restoration process. +这样会把 view controller 行存档记录,并且在状态恢复过程中进行还原。 -Repeat the process for the navigation controller and the three view controllers. Make sure you’ve checked Use Storyboard ID for each of the view controllers, or your app may not restore its state properly. +对 navigation controller 和其它三个 view controller 重复此过程。确保你已经为每个 view controller 选中了 Use Storyboard ID。否则你的应用可能无法正常恢复其状态。 -Note that all the controllers already have a _Storyboard ID_ and the checkbox simply uses the same string that you already have as _Storyboard ID_. If you are not using _Storyboard ID_s, you need to manually enter a unique _Restoration ID_. +请注意,所有 controller 都已经具有 **Storyboard ID**,并且该复选框仅使用已作为 **Storyboard ID** 的相同字符串。如果你未使用 **Storyboard ID**,你需要手动输入一个唯一的 **Storyboard ID**。 -Restoration identifiers come together to make _restoration paths_ that form a unique path to any view controller in your app; it’s analagous to URIs in an API, where a unique path identifies a unique path to each resource. +恢复标识符汇集在一起,通过应用中任何 view controller 的唯一路径形成**恢复路径**;它与 API 中的 URI 类似,其中唯一路径标识每个资源的唯一路径。 -For example, the following path represents _MatchedPetsCollectionViewController_: +比如,以下路径代表 **MatchedPetsCollectionViewController**: -_RootTabBarController/NavigationController/MatchedPetsCollectionViewController_ +**RootTabBarController/NavigationController/MatchedPetsCollectionViewController** -With this bit of functionality, the app will remember which view controller you left off on (for the most part), and any UIKit views will retain their previous state. +通过这些设置,应用将记住你停止使用时的 view controller(大多数情况下),并且任何 UIKit view 都将保留其先前的状态。 -Build and run your app; test the flow for restoration back to the pet details view. Once you pause and restore your app, you should see the following: +构建并运行你的应用;返回宠物详情页测试恢复流程。暂停和恢复应用后,你应该看到以下内容: [![No Data](https://koenig-media.raywenderlich.com/uploads/2015/10/restoredNoData1-281x500.png)](https://koenig-media.raywenderlich.com/uploads/2015/10/restoredNoData1.png) -Although the system restores the correct view controller, it appears to be missing the cat object that’s needed to populate the view. How do you restore the view controller _and_ the objects it needs? +虽然系统恢复了正确的 view controller,但它似乎缺少填充 view 所需的猫对象。如何恢复 view controller 及其所需的对象呢? -## The UIStateRestoring Protocol +## UIStateRestoring 协议 -When it comes to implementing state restoration, UIKit does a lot for you, but your app is responsible for a few things on its own: +在实现状态恢复方面,UIKit 为你做了很多工作,但是你的应用需要负责自行处理一些事情: -1. Telling UIKit it wants to participate in state restoration, which you did in your app delegate. -2. Telling UIKit which view controllers and views should be preserved and restored. You took care of this by assigning restoration identifiers to your view controllers. -3. 
Encoding and decoding any relevant data necessary to reconstruct the view controller to its previous state. You haven’t done this yet, but that’s where the `UIStateRestoring` protocol comes in. +1. 告诉 UIKit 它想参与状态恢复,就是你在应用代理中所做的那些。 +2. 告诉 UIKit 应该保留和恢复哪些 view controller 和 view。你通过为 view controller 分配恢复标识符来解决此问题。 +3. 编码和解码任何需要重建 view controller 之前状态的相关数据。你还没有这样做,但这是 `UIStateRestoring` 协议需要解决的问题。 -Every view controller with a restoration identifier will receive a call to `encodeRestorableStateWithCoder(_:)` of the `UIStateRestoring` protocol when the app is saved. Additionally, the view controller will receive a call to `decodeRestorableStateWithCoder(_:)` when the app is restored. +每个具有恢复标识符的 view controller 都将在保存应用时接收 `UIStateRestoring` 协议对 `encodeRestorableStateWithCoder(_:)` 的调用。另外,view controller 将在应用恢复时接收 `decodeRestorableStateWithCoder(_:)` 的调用。 -To complete the restoration flow, you need to add logic to encode and decode your view controllers. While this part of the process is probably the most time-consuming, the concepts are relatively straightforward. You’d usually write an extension to add conformance to a protocol, but UIKit automatically registers view controllers to conform to `UIStateRestoring` — you merely need to override the appropriate methods. +要完成恢复流程,你需要添加对 view controller 进行编码和解码的逻辑。虽然该过程可能是最耗时的,但概念相对简单。你通常会编写一个扩展来增加协议的一致性,但是 UIKit 会自动关注册 view controller 以符合 `UIStateRestoring` - 你只需要覆盖适当的方法。 -Open _PetDetailsViewController.swift_ and add the following code to the end of the class: +打开 **PetDetailsViewController.swift**,并在类的末尾添加以下代码: -``` +```swift override func encodeRestorableStateWithCoder(coder: NSCoder) { //1 if let petId = petId { coder.encodeInteger(petId, forKey: "petId") } - + //2 super.encodeRestorableStateWithCoder(coder) } ``` -Here’s what’s going on in the code above: +以下是上述代码要做的事: -1. If an ID exists for your current cat, save it using the provided encoder so you can retrieve it later. -2. Make sure to call `super` so the rest of the inherited state restoration functionality will happen as expected. +1. 如果当前猫对象存在 ID,使用提供的编码器进行保存以便稍后检索。 +2. 确保调用 `super` 以便继承的状态恢复功能的其它部分能够按照预期发生。 -With these few changes, your app now saves the current cat’s information. Note that you didn’t actually save the cat model object, but rather the ID you can use later to get the cat object. You use this same concept when saving your selected cats in `MatchedPetsCollectionViewController`. +通过少量的修改,现在你的应用可以保存当前猫的信息。但请注意,你实际上并未保存猫的模型对象,而是稍后可用于获取猫对象的 ID,当你保存通过 `MatchedPetsCollectionViewController` 选择的猫时,可以使用相同的概念。 -Apple is quite clear that state restoration is _only_ for archiving information needed to create view hierarchies and return the app to its original state. It’s tempting to use the provided coders to save and restore simple model data whenever the app goes into the background, but iOS discards all archived data any time state restoration fails or the user kills the app. Since your user won’t be terribly happy to start back at square one each time they restart the app, it’s best to follow Apple’s advice and only save state restoration using this tactic. 
+Apple 非常清楚,状态恢复**仅**用于存档创建 view 层次结构所需并将应用恢复到其原始状态的信息。每当应用进入后台时,使用提供的编码器来保存和恢复简单模型数据是很诱人的,但是只要状态恢复失败或用户杀死应用,iOS 将会丢弃所有存档数据。由于你的用户每次重新启动应用时都不会非常乐意回到起始页,所以最好遵循 Apple 的建议并仅使用此策略保存状态恢复。 -Now that you’ve implemented encoding in _PetDetailsViewController.swift_, you can add the corresponding decoding method below: +现在你已经在 **PetDetailsViewController.swift** 中实现了编码,你可以在下面添加相应的解码方法: -``` +```swift override func decodeRestorableStateWithCoder(coder: NSCoder) { petId = coder.decodeIntegerForKey("petId") - + super.decodeRestorableStateWithCoder(coder) } ``` -Here you decode the ID and set it back to the view controller’s `petId` property. +解密 ID 并将其设置回 view controller 的 `petId` 属性。 -The `UIStateRestoring` protocol provides `applicationFinishedRestoringState()` for additional configuration steps once you’ve decoded your view controller’s objects. +一旦解码了 view controller 的对象,该 `UIStateRestoring` 协议就会提供 `applicationFinishedRestoringState()` 的其他配置步骤。 -Add the following code to _PetDetailsViewController.swift_: +在 **PetDetailsViewController.swift** 中添加以下代码: -``` +```swift override func applicationFinishedRestoringState() { guard let petId = petId else { return } currentPet = MatchedPetsManager.sharedManager.petForId(petId) } ``` -This sets up the current pet based on the decoded pet ID and completes the restoration of the view controller. You could, of course, do this in `decodeRestorableStateWithCoder(_:)`, but it’s best to keep the logic separate since it can get unwieldy when it’s all bundled together. +上面是基于解码后的宠物 ID 设置当前宠物,并完成 view controller 的恢复。当然,你可以在 `decodeRestorableStateWithCoder(_:)` 执行此操作,但最好保持逻辑分离,因为当它们全部捆绑在一起时它将变得笨拙。 -Build and run your app; navigate to a pet’s detail view and trigger the save sequence by backgrounding the app then killing it via Xcode. Re-launch the app and verify that your same furry friend appears as expected: +构建并运行你的应用;导航到宠物的详情页并让应用置于后台,然后通过 Xcode 杀死该应用以触发保存序列。重启应用并验证你的毛茸茸玩具是否按预期显示: [![Details](https://koenig-media.raywenderlich.com/uploads/2015/10/petfinder_details_1-282x500.png)](https://koenig-media.raywenderlich.com/uploads/2015/10/petfinder_details_1.png) -You’ve learned how to restore view controllers created via storyboards. But what about view controllers that you create in code? To restore storyboard-created views at run-time, all UIKit has to do is find them in the main storyboard. Fortunately, it’s almost as easy to restore code-based view controllers. +你已经学习了如何恢复通过 storyboard 创建的 view controller。但你在代码中创建的 view controller 应该如何处理呢?要在运行时恢复基于 storyboard 创建的 view controller,UIKit 要做的是在 main storyboard 中找到它们。幸运的是,恢复基于代码创建的 view controller 几乎一样容易。 -## Restoring Code-based View Controllers +## 恢复基于代码创建的 view controller -The view controller `PetEditViewController` is created entirely from code; it’s used to edit a cat’s name and age. You’ll use this to learn how to restore code-created controllers. +视图控制器 `PetEditViewController` 完全由代码创建;它用于编辑猫的名字和年龄。你将使用它来学习如何恢复基于代码创建的 view controller。 -Build and run your app; navigate to a cat’s detail view then click _Edit_. Modify the cat’s name but don’t save your change, like so: +构建并运行你的应用;导航到猫的详情页,然后点击编辑。修改猫的名字,但不保存你的更改,如下所示: [![Edit](https://koenig-media.raywenderlich.com/uploads/2015/10/petfinder_edit_2-281x500.png)](https://koenig-media.raywenderlich.com/uploads/2015/10/petfinder_edit_2.png) -Now background the app and kill it via Xcode to trigger the save sequence. 
Re-launch the app, and iOS will return you to the pet detail view instead of the edit view: +现在将应用置于后台并通过 Xcode 杀死它以触发保存序列。重启应用,iOS 将返回宠详情页而不是编辑页: [![Details](https://koenig-media.raywenderlich.com/uploads/2015/10/petfinder_details_1-282x500.png)](https://koenig-media.raywenderlich.com/uploads/2015/10/petfinder_details_1.png) -Just as you did for the view controllers built in Interface Builder, you’ll need to provide a restoration ID for the view controller and add the encode and decode `UIStateRestoring` protocol methods to properly restore the state. +正如你在 Interface Builder 中构建的 view controller 所做的那样,你需要为 view controller 提供恢复 ID,并添加 `UIStateRestoring` 协议中的编码和解码方法以便正确恢复状态。 -Take a look at _PetEditViewController.swift_; you’ll notice the encode and decode methods already exist. The logic is similar to the encode and decode methods you implemented in the last section, but with a few extra properties. +查看 **PetEditViewController.swift**;你会注意到编码和解码的方法已经存在。逻辑类似于你在上一节中实现的编码和解码方法,但它还具有一些额外的属性。 -It’s a straightforward process to assign the restoration identifier manually. Add the following to `viewDidLoad()` right after the call to `super`: +手动分配恢复标识符是一个简单的过程。在 `viewDidLoad()` 中调用 `super` 后立即添加以下内容: -``` +```swift restorationIdentifier = "PetEditViewController" ``` -This assigns a unique ID to the `restorationIdentifier` view controller property. +这会为 `restorationIdentifier` 视图控制器分配唯一 ID。 -During the state restoration process, UIKit needs to know where to get the view controller reference. Add the following code just below the point where you assign `restorationIdentifier`: +在状态恢复过程中,UIKit 需要知道从何处获得 view controller 引用。在你设置 `restorationIdentifier` 的下面添加以下代码: -``` +```swift restorationClass = PetEditViewController.self ``` -This sets up `PetEditViewController` as the restoration class responsible for instantiating the view controller. Restoration classes must adopt the `UIViewControllerRestoration` protocol and implement the required restoration method. To that end, add the following extension to the end of _PetEditViewController.swift_: +这将设置 `PetEditViewController` 为负责实例化 view controller 的恢复类。恢复类必须采用 UIViewControllerRestoration 协议并实现所需的恢复方法。为此,将以下扩展代码添加到 **PetEditViewController.swift** 的末尾: -``` +```swift extension PetEditViewController: UIViewControllerRestoration { - static func viewControllerWithRestorationIdentifierPath(identifierComponents: [AnyObject], + static func viewControllerWithRestorationIdentifierPath(identifierComponents: [AnyObject], coder: NSCoder) -> UIViewController? { let vc = PetEditViewController() return vc @@ -221,19 +221,22 @@ extension PetEditViewController: UIViewControllerRestoration { } ``` -This implements the required `UIViewControllerRestoration` protocol method to return an instance of the class. Now that UIKit has a copy of the object it’s looking for, iOS can call the encode and decode methods and restore the state. +这实现了返回类实例所需的 `UIViewControllerRestoration` 协议方法。现在 UIKit 有了它正在寻找的对象的副本,iOS 可以调用编码和解码方法并恢复状态。 -Build and run your app; navigate to a cat’s edit view. Change the cat’s name as you did before, but don’t save your change, then background the app and kill it via Xcode. Re-launch your app and verify all the work you did to come up with a great unique name for your furry friend was not all in vain! +构建并运行你的应用;导航到猫的编辑页。像之前一样更改猫的名字,但不保存更改,然后将应用置于后台并通过 Xcode 将其删除。重启你的应用,并验证你所做的所有工作,为你的毛茸茸朋友提出一个伟大的独特名称并非都是徒劳的! 
[![Edit](https://koenig-media.raywenderlich.com/uploads/2015/10/petfinder_edit_2-281x500.png)](https://koenig-media.raywenderlich.com/uploads/2015/10/petfinder_edit_2.png) -## Where to Go From Here? +## 接下来去哪儿? + +你可以 [在此处现在已完成的项目](https://koenig-media.raywenderlich.com/uploads/2016/01/PetFinder-Completed-1.zip)。状态恢复框架是任何 iOS 开发人员工具包中非常有用的工具;你现在可以将基本恢复代码添加到任何应用,并以此提高你的用户体验。 -You can [download the completed project here](https://koenig-media.raywenderlich.com/uploads/2016/01/PetFinder-Completed-1.zip). The state restoration framework is an extremely useful tool in any iOS developers toolkit; you now have the knowledge to add basic restoration code to any app and improve your user experience just a little more. +有关使用该框架可能实现的更多信息,请查看 [2012 年](https://developer.apple.com/videos/play/wwdc2012-208/) 和 [2013 年](https://developer.apple.com/videos/play/wwdc2013-222/)的 WWDC 视频。2013年的演示文稿特别有用,因为它涵盖了 iOS 7 中引入的恢复概念,比如用于保存和恢复任意对象的 `UIObjectRestoration` 和在需求更复杂的应用中恢复表和集合视图的 `UIDataSourceModelAssociation`。 -For further information on what’s possible with this framework, check out the [WWDC videos from 2012](https://developer.apple.com/videos/play/wwdc2012-208/) and [2013](https://developer.apple.com/videos/play/wwdc2013-222/). The 2013 presentation is especially useful since it covers restoration concepts introduced in iOS 7 such as `UIObjectRestoration` for saving and restoring arbitrary objects and `UIDataSourceModelAssociation` for restoring table and collection views in apps with more complicated needs. +如果你对本教程有任何疑问或建议,请加入以下论坛讨论! -If you have any questions or comments about this tutorial, please join the forum discussion below! +* [其他核心 API](https://www.raywenderlich.com/library?domain_ids%5B%5D=1&category_ids%5B%5D=152&sort_order=released_at) +* [iOS 和 Swift 手册](https://www.raywenderlich.com/library?domain_ids%5B%5D=1&sort_order=released_at) > 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 From a26a603a49e3e5aa4039ef0b489ec19c1aa00bd0 Mon Sep 17 00:00:00 2001 From: walter Date: Mon, 7 Jan 2019 15:48:35 +0800 Subject: [PATCH 26/54] =?UTF-8?q?Flutter=20=E4=BB=8E=200=20=E5=88=B0=201?= =?UTF-8?q?=20(#4923)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * update * update * update * update * update * update * update * Update zero-to-one-with-flutter.md --- TODO1/zero-to-one-with-flutter.md | 100 +++++++++++++++--------------- 1 file changed, 50 insertions(+), 50 deletions(-) diff --git a/TODO1/zero-to-one-with-flutter.md b/TODO1/zero-to-one-with-flutter.md index 88c28400758..113cf1a3dd1 100644 --- a/TODO1/zero-to-one-with-flutter.md +++ b/TODO1/zero-to-one-with-flutter.md @@ -2,44 +2,44 @@ > * 原文作者:[Mikkel Ravn](https://medium.com/@mravn?source=post_header_lockup) > * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) > * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/zero-to-one-with-flutter.md](https://github.com/xitu/gold-miner/blob/master/TODO1/zero-to-one-with-flutter.md) -> * 译者: +> * 译者:[hongruqi](https://github.com/hongruqi) > * 校对者: -# Zero to One with Flutter +# Flutter 从 0 到 1 -_It was late summer 2016, and my first task as a new hire at the Google office in Aarhus, Denmark was to implement animated charts in an Android/iOS app using_ [_Flutter_](https://flutter.io) _and_ [_Dart_](https://www.dartlang.org)_. Besides being a “Noogler”, I was new to Flutter, new to Dart, and new to animations. In fact, I had never done a mobile app before. 
My very first smartphone was just a few months old — bought in a fit of panic that I might fail the phone interview by answering the call with my old Nokia..._ - -_I did have some prior experience with charts from desktop Java, but that wasn’t animated. I felt… weird. Partly a dinosaur, partly reborn._ +2016 年夏末,丹麦奥古斯谷歌办公室。我来谷歌的第一个任务,是使用 [_Flutter_](https://flutter.io) 和 [_Dart_](https://www.dartlang.org) 在 Android/iOS 应用程序中实现动画图表。除了是一个谷歌新人之外,我对 Flutter,Dart,动画都不熟悉。事实上,我之前从未做过移动应用程序。我的第一部智能手机也只有几个月的历史——我是在一阵恐慌中买的,因为担心使用我的老诺基亚可能会导致电话面试失败... +我确实对桌面Java中的图表有过一些经验,但哪些图表并不是动画的。我感到...不可思议。部分是恐龙,部分重生. ![](https://cdn-images-1.medium.com/max/800/1*2t8GffL0BcNoGLU-IgHT9w.jpeg) -**TL;DR** Discovering the strength of Flutter’s widget and tween concepts by writing chart animations in Dart for an Android/iOS app. +**长话短说** 我发现 Flutter 的 widget 和 tween 的强大之处,在使用 Dart 开发 Android/iOS 应用程序的图表动画过程中。 -Updated on August 7, 2018 to use Dart 2 syntax. [GitHub repo](https://github.com/mravn/charts) added on October 17, 2018. Each step described below is a separate commit. +2018 年 8 月 7 日更新,适配 Dart 2 语法。[GitHub repo](https://github.com/mravn/charts)在 2018 年 10 月 17 日添加。下面的描述每步都是一个单独提交。 * * * -Moving to a new development stack makes you aware of your priorities. Near the top of my list are these three: +迁移到新的开发栈可以让您了解自己对技术的优先级。在我的清单中排在前三位的是: + +* **强大的概念**通过提供简单的,相关的构造方法,逻辑或数据,从而有效地处理复杂度。 +* **清晰的代码**让我们可以清晰地表达概念,不被语言陷阱、过多的引用或者辅助细节所干扰。。 +* **快速迭代**是实验和学习的关键 - 软件开发团队以学习为生:需求到底是什么,以及如何通过最优的代码实现它。 -* **Strong concepts** deal effectively with complexity by providing simple, relevant ways of structuring thoughts, logic, or data. -* **Clear code** lets us express those concepts cleanly, without being distracted by language pitfalls, excessive boilerplate, or auxiliary detail. -* **Fast iteration** is key to experimentation and learning — and software development teams learn for a living: what the requirements really are, and how best to fulfill them with concepts expressed in code. -Flutter is a new platform for developing Android and iOS apps from a single codebase, written in Dart. Since our requirements spoke of a fairly complex UI including animated charts, the idea of building it only once seemed very attractive. My tasks involved exercising Flutter’s CLI tools, some pre-built widgets, and its 2D rendering engine — in addition to writing a lot of plain Dart code to model and animate charts. I’ll share below some conceptual highlights of my learning experience, and provide a starting point for your own evaluation of the Flutter/Dart stack. +Flutter 是用 Dart 实现,可以用一套代码同时构建 Android 和 iOS 应用的新平台。由于我们的需求涉及到一个相当复杂的 UI,包括动画图表,所以只构建一次的想法似乎非常有吸引力。我的任务包括使用 Flutter 的 CLI 工具,一些预先构建的 Widgets 及其 2D 渲染引擎。除了编写大量 Dart 代码来构建模型和动画图表外。我将在下面分享一些重点概念,并为您自己评估 Flutter/Dart 技术栈提供一个参考。 ![](https://cdn-images-1.medium.com/max/800/1*OKV3RzTg89W3VxXnpAH3Eg.gif) -A simple animated bar chart, captured from an iOS simulator during development +一个简单的动画条形图,在开发过程中从 iOS 模拟器获取 -This is part one of a [two-part](https://medium.com/dartlang/zero-to-one-with-flutter-part-two-5aa2f06655cb) introduction to Flutter and its ‘widget’ and ‘tween’ concepts. I’ll illustrate the strength of these concepts by using them to display and animate charts like the one shown above. Full code samples should provide an impression of the level of code clarity achievable with Dart. And I’ll include enough detail that you should be able to follow along on your own laptop (and emulator or device), and experience the length of the Flutter development cycle. 
+这是 Flutter 及其 “widgets” 和 “tween” 概念介绍的[两部分](https://medium.com/dartlang/zero-to-one-with-flutter-part-two-5aa2f06655cb)中的第一部分。我将通过使用它们实现显示动画(如上图所示的图表)来说明这些概念的强大之处。完整的代码示例将给你 Dart 代码能清晰表达问题的印象。我将包含足够的细节,您应该能够在自己的笔记本电脑(以及模拟器或设备)上进行操作,并体验 Flutter 开发周期的长度。 -The starting point is a fresh [installation of Flutter](https://flutter.io/setup). Run +首先,[安装 Flutter](https://flutter.io/setup),完成之后在终端运行。 ``` $ flutter doctor ``` -to check the setup: +检查设置: ``` $ flutter doctor @@ -56,13 +56,13 @@ Doctor summary (to see all details, run flutter doctor -v): • No issues found! ``` -With enough check marks, you can create a Flutter app. Let’s call it `charts`: +以上复选框都满足了,您将可以创建一个 Flutter 应用程序了。我们命名它为 charts: ``` $ flutter create charts ``` -That should give you a directory of the same name: +目录结构: ``` charts @@ -72,19 +72,19 @@ charts main.dart ``` -About sixty files have been generated, making up a complete sample app that can be installed on both Android and iOS. We’ll do all our coding in `main.dart` and sibling files, with no pressing need to touch any of the other files or directories. +大约生成 60 个文件,组成一个可以安装在 Android 和 iOS 上的完整示例程序。我们将在 `main.dart` 和它的同级文件中完成所有编码,而不需要触及任何其他文件或目录。 -You should verify that you can launch the sample app. Start an emulator or plug in a device, then execute +您应该验证是否可以启动示例程序。 启动模拟器或插入设备,然后在 `charts` 目录下,执行 ``` $ flutter run ``` -in the `charts` directory. You should then see a simple counting app on your emulator or device. It uses Material Design widgets, which is nice, but optional. As the top-most layer of the Flutter architecture, those widgets are completely replaceable. +您应该在模拟器或设备上看到一个简单的计数应用程序。 它默认使用 MD 风格的 widgets,但这是可选的。作为 Flutter 架构的最顶层,这些 widgets 是完全可替换的。 * * * -Let’s start by replacing the contents of `main.dart` with the code below, a simple starting point for playing with chart animations. +让我们首先用下面的代码替换 `main.dart` 的内容,作为玩转图表动画的简单起点。 ``` import 'dart:math'; @@ -125,19 +125,19 @@ class ChartPageState extends State { } ``` -Save the changes, then restart the app. You can do that from the terminal, by pressing `R`. This ‘full restart’ operation throws away the application state, then rebuilds the UI. For situations where the existing application state still makes sense after the code change, one can press `r` to do a ‘hot reload’, which only rebuilds the UI. There is also a Flutter plugin for IntelliJ IDEA providing the same functionality integrated with a Dart editor: +保存更改,然后重新启动应用程序。您可以通过按 “R” 从终端执行此操作。这种“完全重启”操作会重置应用程序状态,然后重建 UI。对于在代码更改后,现有应用程序状态仍然有效的情况,可以按 “r” 执行“热重载”,这只会重建 UI。IntelliJ IDEA 安装 Flutter 插件,它提供了集成 Dart 编辑器相同的功能: ![](https://cdn-images-1.medium.com/max/800/1*soCdZ19Qugtv1YJewMQZGg.png) -Screen shot from IntelliJ IDEA with an older version of the Flutter plug-in, showing the reload and restart buttons in the top-right corner. These buttons are enabled, if the app has been started from within the IDE. Newer versions of the plugin do hot reload on save. +屏幕截图来自 IntelliJ IDEA,带有旧版本的 Flutter 插件,显示右上角的重新加载和重启按钮。如果已在 IDE 中启动应用程序,则启用这些按钮。较新版本的插件会在保存时进行热重载。 -Once restarted, the app shows a centered text label saying `Data set: null` and a floating action button to refresh the data. Yes, humble beginnings. +重新启动后,应用程序会显示一个居中的文本标签,上面写着 “Data set:null” 和一个浮动操作按钮来刷新数据。 -To get a feel for the difference between hot reload and full restart, try the following: After you’ve pressed the floating action button a few times, make a note of the current data set number, then replace `Icons.refresh` with `Icons.add` in the code, save, and do a hot reload. 
Observe that the button changes, but that the application state is retained; we’re still at the same place in the random stream of numbers. Now undo the icon change, save, and do a full restart. The application state has been reset, and we’re back to `Data set: null`. +要了解热重载和完全重启之间的区别,请尝试以下操作:按几次浮动操作按钮后,记下当前数据集编号,然后将代码中的 Icons.refresh 改为 Icons.add,保存并执行热重载。观察按钮已经改变,但程序的状态仍然保留; 我们仍然在文本上显示获取的随机数。现在撤消 Icon 更改,保存并完全重新启动。应用程序状态已重置,文本标签显示最初状态 “Data set:null”。 -Our simple app shows two central aspects of the Flutter widget concept in action: +我们简单的应用程序显示了 Flutter Widget 两个核心方面: -* The user interface is defined by a tree of **immutable widgets** which is built via a foxtrot of constructor calls (where you get to configure widgets) and `build` methods (where widget implementations get to decide how their sub-trees look). The resulting tree structure for our app is shown below, with the main role of each widget in parentheses. As you can see, while the widget concept is quite broad, each concrete widget type typically has a very focused responsibility. +* 用户界面由**不可变的 widgets** 树定义,它是通过调用构造函数(你可以在其中配置 widgets)和 `build` 方法构建的(其中 widget 可以决定子树的外观)。我们的应用程序生成的树结构如下所示,每个 widget 的主要内容都在括号中。 正如您所看到的,虽然 widget 概念非常广泛,但每个具体 widget 类型通常都具有非常集中的职责。 ``` MaterialApp (navigation) @@ -149,13 +149,13 @@ MaterialApp (navigation) Icon (graphics) ``` -* With an immutable tree of immutable widgets defining the user interface, the only way to change that interface is to rebuild the tree. Flutter takes care of that, when the next frame is due. All we have to do is tell Flutter that some state on which a subtree depends has changed. The root of such a **state-dependent subtree** must be a `StatefulWidget`. Like any decent widget, a `StatefulWidget` is not mutable, but its subtree is built by a `State` object which is. Flutter retains `State` objects across tree rebuilds and attaches each to their respective widget in the new tree during building. They then determine how that widget’s subtree is built. In our app, `ChartPage` is a `StatefulWidget` with `ChartPageState` as its `State`. Whenever the user presses the button, we execute some code to change `ChartPageState.` We’ve demarcated the change with `setState` so that Flutter can do its housekeeping and schedule the widget tree for rebuilding. When that happens, `ChartPageState` will build a slightly different subtree rooted at the new instance of `ChartPage`. +* 使用不可变 widget 的不可变树定义用户界面,更改该界面的唯一方法是重建 widget 树。当下一帧到期时,Flutter 会处理这个问题。我们所要做的就是告诉 Flutter 一个子树所依赖的状态已经改变了。这种**状态依赖子树**的根必须是`StatefulWidget`。像任何 widget 一样,`StatefulWidget` 是不可变的,但是它的子树是由 `State` 对象构建的。Flutter 在树重建期间保留 “State” 对象,并在构建期间将每个对象附加到新树中的各自 widget 上。然后,他们决定 widget 的子树是如何构建的。在我们的应用程序中,`ChartPage` 是一个 `StatefulWidget`,`ChartPageState` 作为它的 `State`。每当用户按下按钮时,我们执行一些代码来改变 `ChartPageState`。我们用 `setState` 界定变化,以便 Flutter 可以进行内部处理并安排widget树进行重建。当发生这种情况时,`ChartPageState` 将构建一个稍微不同的子树,该子树以新的 `ChartPage` 实例为根。 -Immutable widgets and state-dependent subtrees are the main tools that Flutter puts at our disposal to address the complexities of state management in elaborate UIs responding to asynchronous events such as button presses, timer ticks, or incoming data. From my desktop experience I’d say this complexity is _very_ real. Assessing the strength of Flutter’s approach is — and should be — an exercise for the reader: try it out on something non-trivial. 
+不可变 widget 和状态相关子树是 Flutter,为了解决UI异步响应事件,如按钮按下,计时器滴答或传入数据这样复杂的状态管理,而提供的主要工具。 从我的桌面应用开发经验来看,我会说这种复杂性是非常真实的。评估 Flutter 的优势,应该是读者去实践它:尝试一些非平凡的事情。 * * * -Our charts app will stay simple in terms of widget structure, but we’ll do a bit of animated custom graphics. First step is to replace the textual representation of each data set with a very simple chart. Since a data set currently involves only a single number in the interval `0..100`, the chart will be a bar chart with a single bar, whose height is determined by that number. We’ll use an initial value of `50` to avoid a `null` height: +我们的图表应用程序将在 widget 结构方面保持简单,但我们会做一些自定义视图动画。第一步是用非常简单的图表替换每个数据集的文本表示。由于数据集当前只涉及区间 “0..100” 中的单个数字,因此图表将是带有单个条形的条形图,其高度由该数字决定。我们将使用初始值 “50” 来避免 “null” 高度: ``` import 'dart:math'; @@ -226,11 +226,11 @@ class BarChartPainter extends CustomPainter { } ``` -`CustomPaint` is a widget that delegates painting to a `CustomPainter` strategy. Our implementation of that strategy draws a single bar. +`CustomPaint` 是一个widget,它将绘画委托给 `CustomPainter`,执行后只画出一个条形图。 -Next step is to add animation. Whenever the data set changes, we want the bar to change height smoothly rather than abruptly. Flutter has an `AnimationController` concept for orchestrating animations, and by registering a listener, we’re told when the animation value — a double running from zero to one — changes. Whenever that happens, we can call `setState` as before and update `ChartPageState`. +下一步是添加动画。每当数据集发生变化时,我们都希望条图形平滑而不是突然地改变高度。Flutter 有一个用于编排动画的`AnimationController` 类,通过注册一个监听器,我们被告知动画值(从 0 到 1 的 double 值)何时发生变化。每当发生这种情况时,我们可以像以前一样调用 `setState` 并更新 `ChartPageState`。 -For reasons of exposition, our first go at this will be ugly: +出于解释的原因,我们首先做一个简单的事例: ``` import 'dart:math'; @@ -337,17 +337,17 @@ class BarChartPainter extends CustomPainter { } ``` -Ouch. Complexity already rears its ugly head, and our data set is still just a single number! The code needed to set up animation control is a minor concern, as it doesn’t ramify when we get more chart data. The real problem is the variables `startHeight`, `currentHeight`, and `endHeight` which reflect the changes made to the data set and the animation value, and are updated in three different places. +复杂性已经让人头疼,尽管我们的数据集只是一个数字!设置动画控件所需的代码是一个次要问题,因为当我们获得更多图表数据时,它不会产生分支。真正的问题是变量 `startHeight`,`currentHeight` 和 `endHeight`,它们反映了对数据集和动画值所做的更改,并在三个不同的地方进行了更新。 -We are in need of a concept to deal with this mess. +我们需要一个概念来处理这个烂摊子。 * * * -Enter **tweens**. While far from unique to Flutter, they are a delightfully simple concept for structuring animation code. Their main contribution is to replace the imperative approach above with a functional one. A tween is a _value_. It describes the path taken between two points in a space of other values, like bar charts, as the animation value runs from zero to one. +**tweens**,虽然远非Flutter独有,但它们是构造动画代码的一个非常简单的概念。他们的主要贡献是用函数试方法取代上面的命令式方法。tween 是一个值。它描述了空间中的两个点之间的路径,如条形图一样,动画值从 0 到 1 运行。 ![](https://cdn-images-1.medium.com/max/800/1*3KpUQjhZLrvwvjF0daKg9g.jpeg) -Tweens are generic in the type of these other values, and can be expressed in Dart as objects of the type `Tween`: +Tweens 是通用的,并且可以在 Dart 中表示为 “Tween ” 类型的对象: ``` abstract class Tween { @@ -360,11 +360,11 @@ abstract class Tween { } ``` -The jargon `lerp` comes from the field of computer graphics and is short for both _linear interpolation_ (as a noun) and _linearly interpolate_ (as a verb). The parameter `t` is the animation value, and a tween should thus lerp from `begin` (when `t` is zero) to `end` (when `t` is one). 
+专业术语 `lerp` 来自计算机图形学领域,是 linear interpolation(作为名词)和 linearly interpolate(作为动词)的缩写。参数 `t` 是动画值,tween 应该从 `begin`(当 `t` 为零时)到 `end`(当 `t` 为 1 时)。 -The Flutter SDK’s `[Tween](https://docs.flutter.io/flutter/animation/Tween-class.html)` class is very similar to the above, but is a concrete class that supports mutating `begin` and `end`. I’m not entirely sure why that choice was made, but there are probably good reasons for it in areas of the SDK’s animation support that I have yet to explore. In the following, I’ll use the Flutter `Tween`, but pretend it is immutable. +Flutter SDK 的 `[Tween ](https://docs.flutter.io/flutter/animation/Tween-class.html)` 类与上面相似,但它支持 `begin` 和 `end` 突变。我不完全确定为什么会做出这样的选择,但是在 SDK 动画支持方面可能有很好的理由,这里我还没深入探索。在下面,我将使用 Flutter`Tween `,假装它是不可变的。 -We can clean up our code using a single `Tween` for the bar height: +我们可以使用 “Tween” 来代替代码中的条形图高度 barHeight: ``` import 'dart:math'; @@ -463,15 +463,15 @@ class BarChartPainter extends CustomPainter { } ``` -We’re using `Tween` for packaging the bar height animation end-points in a single value. It interfaces neatly with the `AnimationController` and `CustomPainter`, avoiding widget tree rebuilds during animation as the Flutter infrastructure now marks `CustomPaint` for repaint at each animation tick, rather than marking the whole `ChartPage` subtree for rebuild, relayout, and repaint. These are definite improvements. But there’s more to the tween concept; it offers _structure_ to organize our thoughts and code, and we haven’t really taken that seriously. The tween concept says, +我们使用 `Tween` 将条形图高度动画端点打包在一个值中。它与 `AnimationController` 和 `CustomPainter` 灵活的交换,避免了动画期间的 widgets 树重建。Flutter 基础架构现在标记 `CustomPaint` 用于在每个动画刻度处重绘,而不是标记整个 `ChartPage` 子树用于重建,重新布局和重绘。这些都是明确的改进。但 tween 概念还有更多内容; 它提供 _structure_ 来组织我们的想法和代码,但我们不用特意关注这些。Tween 动画描述, -_Animate_ `_T_`_s by tracing out a path in the space of all_ `_T_`_s as the animation value runs from zero to one. Model the path with a_ `_Tween_`_._ +动画值从0到1运动时,通过遍历空间路径中所有 ` _T_` 的路径进行动画。用 ` _Tween _` 对路径建模。 -In the code above, `T` is a `double`, but we do not want to animate `double`s, we want to animate bar charts! Well, OK, single bars for now, but the concept is strong, and it scales, if we let it. +在上面的代码中,`T` 是一个 `double`,但我们不想动画是 `double`,我们想要制作条形图的动画!嗯,好的,现在是单独条形图,但概念很强,如果我们有需要,可以扩展它。 -(You may be wondering why we don’t take that argument a step further and insist on animating data sets rather than their representations as bar charts. That’s because data sets — in contrast to bar charts which are graphical objects — generally do not inhabit spaces where smooth paths exist. Data sets for bar charts typically involve numerical data mapped against discrete data categories. But without the spatial representation as bar charts, there is no reasonable notion of a smooth path between two data sets involving different categories.) +(你可能想知道,为什么我们不进一步讨论这个问题,并且坚持数据集动画化,而不是将其表示为条形图。这是因为数据集与条形图不同,条形图是图形对象。通常不会占据平滑路径存在的空间。条形图的数据集通常涉及映射到离散数据类的数字数据。但如果没有条形图的空间表示,则涉及不同类别的两个数据集之间没有合理的平滑路径概念。) -Returning to our code, we’ll need a `Bar` type and a `BarTween` to animate it. Let’s extract the bar-related classes into their own `bar.dart` file next to `main.dart`: +回到我们的代码,我们需要一个 `Bar` 类型和一个 `BarTween` 来为它设置动画。让我们将与 bar 相关的类提取到 `main.dart` 旁边的 `bar.dart` 文件中: ``` import 'dart:ui' show lerpDouble; @@ -527,9 +527,9 @@ class BarChartPainter extends CustomPainter { } ``` -I’m following a Flutter SDK convention here in defining `BarTween.lerp` in terms of a static method on the `Bar` class. 
This works well for simple types like `Bar`, `Color`, `Rect` and many others, but we’ll need to reconsider the approach for more involved chart types. There is no `double.lerp` in the Dart SDK, so we’re using the function `lerpDouble` from the `dart:ui` package to the same effect. +我在遵循一个 Flutter SDK 约定,在 `Bar` 类的静态方法中定义 `BarTween.lerp`。这适用于简单类型,如 “Bar”,“Color”,“Rect” 等等,但我们需要重新考虑更多涉及图表类型的方法。Dart SDK 中没有 `double.lerp`,所以我们使用 `dart:ui` 包中的 `lerpDouble` 函数来达到同样的效果。 -Our app can now be re-expressed in terms of bars as shown in the code below; I’ve taken the opportunity to dispense of the `dataSet` field. +我们的应用程序现在可以用 Bar 重新表达,如下面的代码所示;我借此机会调用 `dataSet`。 ``` import 'dart:math'; @@ -598,11 +598,11 @@ class ChartPageState extends State with TickerProviderStateMixin { } ``` -The new version is longer, and the extra code should carry its weight. It will, as we tackle increased chart complexity in [part two](https://medium.com/@mravn/zero-to-one-with-flutter-part-two-5aa2f06655cb). Our requirements speak of colored bars, multiple bars, partial data, stacked bars, grouped bars, stacked and grouped bars, … all of it animated. Stay tuned. +新版本更长,额外的代码被添加。这些代码将会出现,当我们在[第二部分](https://medium.com/@mravn/zero-to-one-with-flutter-part-two-5aa2f06655cb)中解决增加的图表复杂性时。我们的要求涉及彩条,多条,部分数据,堆叠条,分组条,堆叠和分组条,...所有这些都是动画的。敬请关注。 ![](https://cdn-images-1.medium.com/max/800/1*n76TpChNv8Q25WrfBiuWpw.gif) -A preview of one of the animations we’ll do in part two. +我们将在第二部分中对其中一个动画进行预览。 > 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 From 087db0b833f9c4dbbeb9274d24187ef0e1e52389 Mon Sep 17 00:00:00 2001 From: LeviDing Date: Mon, 7 Jan 2019 15:49:51 +0800 Subject: [PATCH 27/54] Update zero-to-one-with-flutter.md --- TODO1/zero-to-one-with-flutter.md | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/TODO1/zero-to-one-with-flutter.md b/TODO1/zero-to-one-with-flutter.md index 113cf1a3dd1..5fb12df1717 100644 --- a/TODO1/zero-to-one-with-flutter.md +++ b/TODO1/zero-to-one-with-flutter.md @@ -3,12 +3,11 @@ > * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) > * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/zero-to-one-with-flutter.md](https://github.com/xitu/gold-miner/blob/master/TODO1/zero-to-one-with-flutter.md) > * 译者:[hongruqi](https://github.com/hongruqi) -> * 校对者: # Flutter 从 0 到 1 -2016 年夏末,丹麦奥古斯谷歌办公室。我来谷歌的第一个任务,是使用 [_Flutter_](https://flutter.io) 和 [_Dart_](https://www.dartlang.org) 在 Android/iOS 应用程序中实现动画图表。除了是一个谷歌新人之外,我对 Flutter,Dart,动画都不熟悉。事实上,我之前从未做过移动应用程序。我的第一部智能手机也只有几个月的历史——我是在一阵恐慌中买的,因为担心使用我的老诺基亚可能会导致电话面试失败... -我确实对桌面Java中的图表有过一些经验,但哪些图表并不是动画的。我感到...不可思议。部分是恐龙,部分重生. +2016 年夏末,丹麦奥古斯谷歌办公室。我来谷歌的第一个任务,是使用 [_Flutter_](https://flutter.io) 和 [_Dart_](https://www.dartlang.org) 在 Android/iOS 应用程序中实现动画图表。除了是一个谷歌新人之外,我对 Flutter,Dart,动画都不熟悉。事实上,我之前从未做过移动应用程序。我的第一部智能手机也只有几个月的历史——我是在一阵恐慌中买的,因为担心使用我的老诺基亚可能会导致电话面试失败... 
+我确实对桌面Java中的图表有过一些经验,但哪些图表并不是动画的。我感到...不可思议。部分是恐龙,部分重生。 ![](https://cdn-images-1.medium.com/max/800/1*2t8GffL0BcNoGLU-IgHT9w.jpeg) From 77c99a3496cf643a2ed0e78f67bde4445c0bd05e Mon Sep 17 00:00:00 2001 From: LeviDing Date: Mon, 7 Jan 2019 22:25:59 +0800 Subject: [PATCH 28/54] Create a-comprehensive-look-back-at-frontend-in-2018.md --- ...rehensive-look-back-at-frontend-in-2018.md | 173 ++++++++++++++++++ 1 file changed, 173 insertions(+) create mode 100644 TODO1/a-comprehensive-look-back-at-frontend-in-2018.md diff --git a/TODO1/a-comprehensive-look-back-at-frontend-in-2018.md b/TODO1/a-comprehensive-look-back-at-frontend-in-2018.md new file mode 100644 index 00000000000..707196bd0e1 --- /dev/null +++ b/TODO1/a-comprehensive-look-back-at-frontend-in-2018.md @@ -0,0 +1,173 @@ +> * 原文地址:[A comprehensive look back at front-end in 2018](https://blog.logrocket.com/a-comprehensive-look-back-at-frontend-in-2018-8122e724a802) +> * 原文作者:[Kaelan Cooter](https://blog.logrocket.com/@eranimo) +> * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) +> * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/a-comprehensive-look-back-at-frontend-in-2018.md](https://github.com/xitu/gold-miner/blob/master/TODO1/a-comprehensive-look-back-at-frontend-in-2018.md) +> * 译者: +> * 校对者: + +A comprehensive look back at front-end in 2018 + +Grab a coffee, settle in, and read slow. Our review doesn’t miss much. + +![](https://cdn-images-1.medium.com/max/800/1*h4mMvgiilV-JPS1Ytpndyg.png) + +Web development has always been a fast-moving field — making it hard to keep up with all the browser changes, library releases, and new programming trends that can inundate your mind over the course of the year. + +The industry is growing bigger every year, making it harder for the average developer to keep up. So let’s take a step back and review what changed in the web development community in 2018. + +We have witnessed an explosive evolution of Javascript over the last few years. As the internet became even more important to the global economy, powerful commercial stakeholders like Google and Microsoft realized that they needed better tools to create the next generation of web applications. + +This led to the largest wave of changes to Javascript since its inception, starting with ECMAScript 2015 (aka ES6). There are now yearly releases which have brought us exciting new features like classes, generators, iterators, promises, a completely new module system, and much more. + +This launched a golden age of web development. Many of the most popular tools, libraries, and frameworks today first became popular right after ES2015 was released. Even before major browser vendors supported even half of the new standard, the [Babel](https://babeljs.io/) compiler project allowed thousands of developers to get a head start and experiment with the new features themselves. + +Frontend developers for the first time were not beholden to the oldest browser their company supports but were free to innovate at their own pace. Three years and three ECMAScript editions later, this new age of web development shows no signs of slowing down. 
+

### New JS language features

Compared to previous editions, ECMAScript 2018 was rather light feature-wise, only adding [object rest / spread properties](https://github.com/tc39/proposal-object-rest-spread), [asynchronous iteration](https://github.com/tc39/proposal-async-iteration), and [Promise.finally](https://github.com/tc39/proposal-promise-finally), all of which have been supported by Babel and [core-js](https://github.com/zloirock/core-js#stage-3-proposals) for a while now. [Most browsers](http://kangax.github.io/compat-table/es2016plus/#test-Asynchronous_Iterators) and [Node.js](https://node.green/) support all of ES2018 except Edge, which only supports Promise.finally. For many developers, this means that all the language features they need are supported in all browsers they support — some wonder whether Babel is really necessary anymore.

### New regular expression features

Javascript has always been lacking some of the more advanced regular expression features that other languages like Python have — that is, until now. ES2018 adds four new features:

* [Lookbehind assertions](https://github.com/tc39/proposal-regexp-lookbehind), providing the missing complement to the lookahead assertions that have been in the language since all the way back in 1999.
* [s (dotAll) flag](https://github.com/tc39/proposal-regexp-dotall-flag), which matches any single character except line terminators.
* [Named capture groups](https://github.com/tc39/proposal-regexp-named-groups), which make using regular expressions easier by allowing property-based lookup for capture groups.
* [Unicode property escape](https://github.com/tc39/proposal-regexp-unicode-property-escapes), which makes it possible to write regular expressions that are aware of unicode.

Although many of these features have had workarounds and alternative libraries for years, none could hope to match the speed that native implementations provide.

### New browser features

There has been an incredible number of new Javascript browser APIs released this year. Almost everything has seen improvement — web security, high-performance computing, and animations to name a few. Let’s break them down by domain to get a better sense of their impact.

### WebAssembly

Although WebAssembly v1 support was added to major browsers last year, it has not yet been widely adopted by the developer community. The WebAssembly Group has an [ambitious feature roadmap](https://webassembly.org/docs/future-features/) for features like [garbage collection](https://github.com/WebAssembly/gc), ECMAScript module integration, and [threads](https://developers.google.com/web/updates/2018/10/wasm-threads). Perhaps with these features, we will start to see widespread adoption in web applications.

Part of the problem is that WebAssembly requires a lot of setup to get started and many developers used to Javascript are not familiar with working with traditional compiled languages. Firefox launched an online IDE called [WebAssembly Studio](https://hacks.mozilla.org/2018/04/sneak-peek-at-webassembly-studio/) that makes it as easy as possible to get started with WebAssembly. If you’re looking to integrate it into an existing app, there are now plenty of tools to choose from. Webpack v4 added experimental [built-in support](https://github.com/webpack/webpack/releases/tag/v4.0.0) for WebAssembly modules tightly integrated into the build and module systems and with source map support.

Rust has emerged as a favorite language to compile to WebAssembly. 
It offers a robust package ecosystem with [cargo](https://github.com/rust-lang/cargo), reliable performance, and an [easy to learn](https://doc.rust-lang.org/book/) syntax. There’s already an emerging ecosystem of tools that integrate Rust with Javascript. You can publish Rust WebAssembly packages to NPM using [wasm-pack](https://github.com/rustwasm/wasm-pack). If you use Webpack, you can now seamlessly integrate Rust code in your app using the [rust-native-wasm-loader](https://github.com/dflemstr/rust-native-wasm-loader). + +If you’d rather not abandon Javascript to use WebAssembly, you’re in luck — there are now several options to choose from. If you’re familiar with Typescript, there’s the [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) project which uses the official [Binaryen](https://github.com/WebAssembly/binaryen) compiler together with Typescript. + +Therefore, it works well with existing Typescript and WebAssembly tools. [Walt](https://github.com/ballercat/walt) is another compiler that sticks to the Javascript syntax (with Typescript-like type hints) and compiles directly to the WebAssembly text format. It has zero dependencies, very fast compilation times, and integration with Webpack. Both projects are in active development, and depending on your standards they might not be considered “production ready”. Regardless, they are worth checking out. + +### Shared memory + +Modern Javascript applications often do heavy computation in [Web Workers](https://developer.mozilla.org/en-US/docs/Web/API/Worker) to avoid blocking the main thread and interrupting the browsing experience. While workers have been available for several years now, their limitations prevented them from being more widely adopted. Workers can transfer data between other threads using the [postMessage](https://developer.mozilla.org/en-US/docs/Web/API/Worker/postMessage) method, which either clones the data being sent (slower) or uses [transferable objects](https://developer.mozilla.org/en-US/docs/Web/API/Transferable) (faster). Thus, communication between threads is either slow or one-way. For simple applications this is fine, but it has prevented more complex architectures from being built using workers. + +[SharedArrayBuffer](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/SharedArrayBuffer) and [Atomics](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Atomics) are new features that allow Javascript applications to share a fixed memory buffer between contexts and perform atomic operations on them. However, browser support was temporarily removed after it was discovered that shared memory makes browsers vulnerable to a previously unknown timing attack known as [Spectre](https://meltdownattack.com/). Chrome re-enabled SharedArrayBuffers in July when they released a [new security feature](https://www.techrepublic.com/article/google-enabled-site-isolation-in-chrome-67-heres-why-and-how-it-affects-users/) which mitigated the vulnerability. In Firefox its disabled by default but can be [re-enabled](https://blog.mozilla.org/security/2018/01/03/mitigations-landing-new-class-timing-attack/). Edge has [removed support completely](https://blogs.windows.com/msedgedev/2018/01/03/speculative-execution-mitigations-microsoft-edge-internet-explorer/#Yr2pGlOHTmaRJrLl.97) and Microsoft hasn’t indicated when it’s going to be re-enabled. 
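To make the shape of the API concrete, here is a rough sketch of sharing a single counter between the main thread and a worker (assuming a browser where the feature is enabled; the file names are made up):

```typescript
// main.ts: allocate shared memory and hand the same buffer to a worker
const shared = new SharedArrayBuffer(4);          // room for one Int32
const counter = new Int32Array(shared);
const worker = new Worker('counter-worker.js');   // illustrative file name
worker.postMessage(shared);                       // the memory is shared, not copied
console.log(Atomics.load(counter, 0));            // either side can read the value atomically

// counter-worker.js: the worker sees the exact same bytes
self.onmessage = (event: MessageEvent) => {
  const view = new Int32Array(event.data as SharedArrayBuffer);
  Atomics.add(view, 0, 1);                        // race-free increment
};
```
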
Hopefully, by next year all browsers will have mitigation strategies in place so that this critical missing feature can be used. + +### Canvas + +Graphics APIs such as Canvas and WebGL have been supported for several years now, but they have always been limited to rendering only in the main thread. Rendering can, therefore, be blocking. And that leads to poor user experiences. The [OffscreenCanvas](https://developer.mozilla.org/en-US/docs/Web/API/OffscreenCanvas#Asynchronous_display_of_frames_produced_by_an_OffscreenCanvas) API solves that problem by allowing you to transfer control of a canvas context (2D or WebGL) to a web worker. The worker can then use the Canvas APIs like normal while render seamlessly in the main thread without blocking. + +Given the significant performance savings, you can expect chart and drawing libraries to look into supporting it soon. [Browser support](https://caniuse.com/#feat=offscreencanvas) right now is limited to Chrome, Firefox supports it behind a flag, and the Edge team hasn’t made any public indication of support. You can expect it to pair well with SharedArrayBuffers and WebAssembly, allowing a Worker to render based on data existing in any thread, from code written in any language, all without a janky user experience. This might make the dream of high-end gaming on the web a reality, and allow even more complex graphics in web applications. + +There is a major effort underway to bring new drawing and layout APIs to CSS. The goal is to expose parts of the CSS engine to web developers to demystify some of the “magic” of CSS. The [CSS Houdini Task Force](https://github.com/w3c/css-houdini-drafts/wiki) at W3C, made up of engineers from major browser vendors has been hard at work over the last two years publishing [several draft specifications](https://drafts.css-houdini.org/) which are now in the final stages of design. + +The [CSS Paint API](https://developers.google.com/web/updates/2018/01/paintapi) is among the first to reach browsers, landing in Chrome 65 back in January. It allows developers to paint an image using a context-like API that can be used wherever an image is called for in CSS. It uses the new [Worklet](https://drafts.css-houdini.org/worklets) interface, which are basically lightweight, high-performance [Worker](https://developer.mozilla.org/en-US/docs/Web/API/Worker)-like constructs intended for specialized tasks. Like Workers, they run in their own execution context, but unlike Workers, they are thread-agnostic (the browser chooses what thread they run on) and they have access to the rendering engine. + +With a Paint Worklet, you could create a background image that automatically redraws when the element it’s contained in changes. Using CSS properties you can add parameters that trigger re-drawing when changed and can be controlled via Javascript. [All browsers](https://ishoudinireadyyet.com/) except Edge have pledged support, but for now, there’s a [polyfill](https://github.com/GoogleChromeLabs/css-paint-polyfill). With this API we will begin to see componentized images used in a similar way we now see components. + +### Animations + +Most modern web applications use animations as an essential part of the user experience. Frameworks like Google’s Material Design have made them essential parts of their [design language](https://material.io/design/motion/understanding-motion.html#principles), arguing that they are essential to making expressive and easy-to-understand user experiences. 
Given their elevated importance, there has been a push recently to bring a more powerful animations API to Javascript, and this has resulted in the Web Animations API (WAAPI). + +As [CSS-Tricks notes](https://css-tricks.com/css-animations-vs-web-animations-api/), WAAPI offers a significantly better developer experience over just CSS animations, and you can easily log and manipulate the state of an animation defined in JS or CSS. [Browser support](https://caniuse.com/#feat=web-animation) at the moment is mostly limited to Chrome and Firefox, but there is an [official polyfill](https://github.com/web-animations/web-animations-js/tree/master) that does everything you need. + +Performance has always been an issue with web animations, and this has been addressed by introducing the [Animation Worklet](https://wicg.github.io/animation-worklet/). This new API allows complex animations to run in parallel — meaning higher frame rate animations that aren’t impacted by main thread jank. Animation Worklets follow the same interface as the Web Animations API, but inside the Worklet execution context. + +It’s [due to be released](https://www.chromestatus.com/features/5762982487261184) in Chrome 71 (the next version as of the time of writing), and other browsers likely sometime next year. There’s an official [polyfill and example repo](https://github.com/GoogleChromeLabs/houdini-samples/tree/master/animation-worklet) available on GitHub if you’d like to try it out today. + +### Security + +The Spectre timing attack wasn’t the only web security scare of the year. The inherent vulnerability of NPM has been [written about a lot in the past](https://hackernoon.com/im-harvesting-credit-card-numbers-and-passwords-from-your-site-here-s-how-9a8cb347c5b5), and last month we got an [alarming reminder](https://blog.logrocket.com/the-latest-npm-breach-or-is-it-a427617a4185). This was not a security breach of NPM itself, but a single package called [event-stream](https://www.npmjs.com/package/event-stream) that is used by many popular packages. NPM allows package authors to transfer ownership to any other member, and the hacker convinced the owner to transfer it to them. The hacker then published a new version with a new dependency on a package they created called [flatmap-stream](https://www.npmjs.com/package/flatmap-stream), which had code designed to steal [bitcoin wallets](https://copay.io/) if the malicious package was installed alongside the [copay-dash](https://www.npmjs.com/package/copay-dash) package. + +These kinds of attacks will only become more common given how NPM works and the communities’ cavalier propensity to install random NPM packages that appear useful. The community places a great deal of trust on package owners, trust that has been questioned greatly. NPM users should aware of each package they are installing (dependencies of dependencies included), use a lock file to lock down versions and sign up for security alerts like those [provided by Github](https://blog.github.com/2017-11-16-introducing-security-alerts-on-github/). + +NPM is [aware of the security concerns](https://blog.npmjs.org/post/172774747080/attitudes-to-security-in-the-javascript-community) of the community and they have taken steps to improve it over the last year. 
You can now secure your NPM account with [two-factor authentication](https://blog.npmjs.org/post/166039777883/protect-your-npm-account-with-two-factor), and NPM v6 now includes a [security audit](https://docs.npmjs.com/auditing-package-dependencies-for-security-vulnerabilities) command.

### Monitoring

The [Reporting API](https://developers.google.com/web/updates/2018/09/reportingapi) is a new standard that aims to make it easier for developers to discover problems with their applications by alerting when issues happen. If you’ve used the Chrome DevTools console within the last few years you might have seen the _\[intervention\]_ warning messages for using obsolete APIs or doing potentially unsafe things. These messages have been limited to the client, but now you can report them to analytics tools using the new [ReportingObserver](https://developers.google.com/web/updates/2018/07/reportingobserver).

There are two kinds of reports:

* [Deprecations](https://developers.google.com/web/updates/tags/deprecations), which warn you when you’re using an obsolete API and tell you when it’s expected to be removed. It will also tell you the filename and line number of where it was used.
* [Interventions](https://www.chromestatus.com/features#intervention), which warn you when you’re using an API in an unintended, dangerous, or insecure way.

While tools like [LogRocket](https://logrocket.com/) give developers insight into errors in their applications, until now there hasn’t been any reliable way for third-party tools to record these warnings. This means issues either go unnoticed or manifest themselves as difficult-to-debug error messages. Chrome currently supports the ReportingObserver API, and other browsers will support it soon.

### CSS

Although Javascript gets all the attention, there have been several interesting new CSS features landing in browsers this year.

For those unaware, there is no unified CSS3 specification analogous to ECMAScript. The last official unified standard was CSS2.1, and CSS3 has come to refer to anything published after that. Instead, each section is standardized separately as a “CSS Module”. MDN has an [excellent overview](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS3) of each module standard and their status.

As of 2018, some newer features are now fully supported in all major browsers (this is 2018, IE is not a major browser). This includes [flexbox](https://blog.logrocket.com/flexing-with-css-flexbox-b7940b329a8a), [custom properties](https://caniuse.com/#feat=css-variables) (variables), and [grid layout](https://blog.logrocket.com/the-simpletons-guide-to-css-grid-1767565b3cf7).

While there has been [talk in the past](https://tabatkins.github.io/specs/css-nesting/) of adding support for nested rules to CSS (like LESS and SASS), those proposals didn’t go anywhere. In July the CSS working group at W3C [decided to take](https://github.com/w3c/csswg-drafts/issues/2701#issuecomment-402392212) another look at the proposal, but it’s unclear if it’s a priority at all.

### Node.js

Node continues to make excellent progress keeping up with ECMAScript standards and as of December, they [support all of ES2018](https://node.green/). On the other hand, they have been slow to adopt the ECMAScript module system and thus are missing a critical feature that is preventing feature parity with browsers, which have supported ES modules for over a year now. 
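For reference, the module syntax in question is small; a minimal sketch (with file names of my own choosing) looks like this:

```typescript
// math.ts: compiles to math.js, a standard ES module that browsers can
// load natively with <script type="module" src="app.js"></script>
export function add(a: number, b: number): number {
  return a + b;
}

// app.ts: static imports are resolved by the host's module loader
import { add } from './math.js';
console.log(add(2, 3)); // 5
```
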
Node actually added [experimental support](https://nodejs.org/api/esm.html) in v11.4.0 behind a flag, but this requires that files use the new .mjs extension, leading to [concerns](https://github.com/nodejs/modules/issues/57) about slow adoption and what impact this would have on Node’s rich package ecosystem. + +If you’re looking to get a jump start and you’d rather not use the experimental build-in support, there’s an interesting project from the creator of Lodash called [esm](https://medium.com/web-on-the-edge/tomorrows-es-modules-today-c53d29ac448c) which gives Node ES module support with better interoperability and performance than the official solution. + +### Tools and Frameworks + +#### React + +[React](https://reactjs.org/) had two notable releases this year. React 16.3 shipped with support for a new set of [lifecycle methods](https://reactjs.org/blog/2018/03/29/react-v-16-3.html#component-lifecycle-changes) and a new official [Context API](https://reactjs.org/blog/2018/03/29/react-v-16-3.html#official-context-api). React 16.6 added a new feature called “Suspense” that gives React the ability to suspend rendering while components wait for a task to be completed like data fetching or [code splitting](https://reactjs.org/docs/code-splitting.html#reactlazy). + +The most talked about React topic this year was the introduction of [React Hooks](https://reactjs.org/docs/hooks-intro.html). The proposal was designed to make it easier to write smaller components without sacrificing useful features that were until now limited to class components. React will ship with two built-in hooks, the State Hook, which lets functional components use state, and the [Effect Hook](https://reactjs.org/docs/hooks-effect.html#tip-use-multiple-effects-to-separate-concerns), which lets you perform side effects in function components. While there is no plan to remove classes from React, the React team clearly intends Hooks to be central to the future of React. After they were announced, there was a positive reaction from the community ([some might say overhyped](https://twitter.com/dan_abramov/status/1057027428827193344)). If you’re interested in learning more, check out [Dan Abramov’s blog post](https://medium.com/@dan_abramov/making-sense-of-react-hooks-fdbde8803889) comprehensive overview. + +Next year React plans on releasing a new feature called [Concurrent mode](https://reactjs.org/blog/2018/11/27/react-16-roadmap.html#react-16x-q2-2019-the-one-with-concurrent-mode) (formerly known as “async mode” or “async rendering”). This would allow React to render large component trees without blocking the main thread. For large apps with deep component trees, the performance savings could be significant. It’s unclear exactly what the API looks like at the moment, but the React team is aiming to finalize it soon and release sometime next year. If you’re interested in adopting this feature, make sure your codebase is compatible by adopting the new lifecycle methods released in React 16.3. + +React continues to grow in popularity, and [according to the State of JavaScript 2018 survey](https://2018.stateofjs.com/front-end-frameworks/react/) 64% of those polled use it and would use it again (+7.1% since last year), compared to [28% for Vue](https://2018.stateofjs.com/front-end-frameworks/vuejs/) (+9.2%) and [23% for Angular](https://2018.stateofjs.com/front-end-frameworks/angular/) (+5.1%). 
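To ground the Hooks proposal described above, here is a rough sketch of a custom hook composed from the two built-in ones (the hook name is my own, and the code assumes a React build that ships the proposed `useState`/`useEffect` API):

```typescript
import { useState, useEffect } from 'react';

// A custom hook: any function component that calls it gets a window width
// value that stays up to date, with no class or lifecycle methods involved.
function useWindowWidth(): number {
  const [width, setWidth] = useState(window.innerWidth);   // State Hook

  useEffect(() => {                                         // Effect Hook
    const onResize = () => setWidth(window.innerWidth);
    window.addEventListener('resize', onResize);
    return () => window.removeEventListener('resize', onResize); // cleanup
  });

  return width;
}

export default useWindowWidth;
```

Because a hook is just a function call, logic like this can be shared between components without classes, render props, or higher-order components.
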
+

#### Webpack

[Webpack](https://webpack.js.org) 4 was [released in February](https://github.com/webpack/webpack/releases/tag/v4.0.0-beta.0), bringing huge performance improvements, built-in production and development modes, easy to use optimizations like code splitting and minification, experimental WebAssembly support, and ECMAScript module support. Webpack is now much easier to use than previous versions and previously complicated features like code splitting and optimization are now quite simple to set up. Combined with Typescript or Babel, webpack remains the bedrock tool for web developers and it seems unlikely a competitor will come along and replace it in the near future.

#### Babel

[Babel](https://babeljs.io) 7 was [released this August](https://babeljs.io/blog/2018/08/27/7.0.0), the first major release in almost three years. Major changes include [faster build times](https://twitter.com/left_pad/status/927554660508028929), a new package namespace, and deprecation of the various “stage” and yearly ECMAScript preset packages in favor of [preset-env](https://babeljs.io/docs/en/next/babel-preset-env.html), which vastly simplifies configuring Babel by automatically including the plugins you need for the browsers you support. This release also adds [automatic polyfilling](https://babeljs.io/blog/2018/08/27/7.0.0#automatic-polyfilling-experimental), which removes the need to either import the entire Babel polyfill (which is rather large) or explicitly import the polyfills you need (which can be time-consuming and error-prone).

Babel also now [supports the Typescript syntax](https://blogs.msdn.microsoft.com/typescript/2018/08/27/typescript-and-babel-7/), making it easier for developers to use Babel and Typescript together. Babel 7.1 also added support for the new [decorators proposal](https://babeljs.io/blog/2018/09/17/decorators), which is incompatible with the obsolete proposal widely adopted by the community but matches what browsers will be supporting. Thankfully, the Babel team has published a [compatibility package](https://babeljs.io/blog/2018/09/17/decorators#upgrading) that will make upgrading easier.

#### Electron

[Electron](https://electronjs.org/) continues to be the most popular way to package Javascript applications for the desktop, although whether or not that’s a good thing is somewhat of a controversy. Some of the most popular desktop applications now use Electron to reduce development costs by making it easy to develop cross-platform applications.

A [common complaint](https://www.theverge.com/circuitbreaker/2018/5/16/17361696/chrome-os-electron-desktop-applications-apple-microsoft-google) is that applications that use Electron tend to use too much memory since each app packages an entire instance of Chrome (which is very memory-intensive). [Carlo](https://github.com/GoogleChromeLabs/carlo) is an Electron alternative from Google that uses the locally installed version of Chrome (which it requires), resulting in a less memory hungry application. Electron itself hasn’t made much progress with improving performance, and [recent updates](https://electronjs.org/blog/electron-3-0) have focused on updating the Chrome dependency and small API changes.

#### Typescript

[Typescript](https://www.typescriptlang.org/) has greatly increased in popularity over the last year, emerging as a genuine challenger to ES6 as the dominant flavor of JavaScript. 
Since Microsoft releases new versions monthly, development has progressed pretty rapidly over the last year. The Typescript team has put a strong focus on developer experience, for both the language itself and the editor tools that surround it.

Recent releases have added more developer-friendly [error formatting](https://blogs.msdn.microsoft.com/typescript/2018/07/30/announcing-typescript-3-0/#improved-errors-and-ux) and powerful refactoring features like [automatic import updating](https://blogs.msdn.microsoft.com/typescript/2018/05/31/announcing-typescript-2-9/#rename-move-file) and [import organizing](https://blogs.msdn.microsoft.com/typescript/2018/03/27/announcing-typescript-2-8/#organize-imports), among others. At the same time, work continues on improving the type system with recent features like [conditional types](https://blogs.msdn.microsoft.com/typescript/2018/03/27/announcing-typescript-2-8/#conditional-types) and [unknown type](https://blogs.msdn.microsoft.com/typescript/2018/07/30/announcing-typescript-3-0/#the-unknown-type).

The State of JavaScript Survey 2018 notes that [nearly half of respondents](https://2018.stateofjs.com/javascript-flavors/typescript/) use TypeScript, with a strong upward trend over the last two years. In contrast, its chief competitor Flow has [stagnated](https://2018.stateofjs.com/javascript-flavors/flow/) in popularity, with most developers saying they dislike its lack of tooling and popular momentum. Typescript is admired for making it easy for developers to write robust and elegant code backed up by powerful editor support. Its sponsor, Microsoft, seems to be more willing to support it than Facebook is with Flow, and developers have clearly noticed.

* * *

### Plug: [LogRocket](https://logrocket.com/signup/), a DVR for web apps

[![](https://cdn-images-1.medium.com/max/1000/1*s_rMyo6NbrAsP-XtvBaXFg.png)](https://logrocket.com/signup/)

[LogRocket](https://logrocket.com/signup/) is a frontend logging tool that lets you replay problems as if they happened in your own browser. Instead of guessing why errors happen, or asking users for screenshots and log dumps, LogRocket lets you replay the session to quickly understand what went wrong. It works perfectly with any app, regardless of framework, and has plugins to log additional context from Redux, Vuex, and @ngrx/store.

In addition to logging Redux actions and state, LogRocket records console logs, JavaScript errors, stacktraces, network requests/responses with headers + bodies, browser metadata, and custom logs. It also instruments the DOM to record the HTML and CSS on the page, recreating pixel-perfect videos of even the most complex single page apps. 
+ +[Try it for free.](https://logrocket.com/signup/) + +> 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 + + +--- + +> [掘金翻译计划](https://github.com/xitu/gold-miner) 是一个翻译优质互联网技术文章的社区,文章来源为 [掘金](https://juejin.im) 上的英文分享文章。内容覆盖 [Android](https://github.com/xitu/gold-miner#android)、[iOS](https://github.com/xitu/gold-miner#ios)、[前端](https://github.com/xitu/gold-miner#前端)、[后端](https://github.com/xitu/gold-miner#后端)、[区块链](https://github.com/xitu/gold-miner#区块链)、[产品](https://github.com/xitu/gold-miner#产品)、[设计](https://github.com/xitu/gold-miner#设计)、[人工智能](https://github.com/xitu/gold-miner#人工智能)等领域,想要查看更多优质译文请持续关注 [掘金翻译计划](https://github.com/xitu/gold-miner)、[官方微博](http://weibo.com/juejinfanyi)、[知乎专栏](https://zhuanlan.zhihu.com/juejinfanyi)。 From 0eea8f14fe4e9b9f5683553e3fa4d66099bf696b Mon Sep 17 00:00:00 2001 From: LeviDing Date: Tue, 8 Jan 2019 21:01:49 +0800 Subject: [PATCH 29/54] Update composing-software-an-introduction.md --- TODO1/composing-software-an-introduction.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/TODO1/composing-software-an-introduction.md b/TODO1/composing-software-an-introduction.md index ac260652db3..800696be8fa 100644 --- a/TODO1/composing-software-an-introduction.md +++ b/TODO1/composing-software-an-introduction.md @@ -1,5 +1,5 @@ > * 原文地址:[Composing Software: An Introduction](https://medium.com/javascript-scene/composing-software-an-introduction-27b72500d6ea) -> * 原文作者:[Eric Elliott](Eric Elliott) +> * 原文作者:[Eric Elliott](https://medium.com/@_ericelliott) > * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) > * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/composing-software-an-introduction.md](https://github.com/xitu/gold-miner/blob/master/TODO1/composing-software-an-introduction.md) > * 译者:[Sam](https://github.com/xutaogit) From 0be45bbc48717f5ec1e063fabfa7a37fd96a5216 Mon Sep 17 00:00:00 2001 From: Starrier <1342878298@qq.com> Date: Tue, 8 Jan 2019 21:13:45 +0800 Subject: [PATCH 30/54] =?UTF-8?q?=E5=88=A9=E7=94=A8=20Python=E4=B8=AD?= =?UTF-8?q?=E7=9A=84=20Bokeh=20=E5=AE=9E=E7=8E=B0=E6=95=B0=E6=8D=AE?= =?UTF-8?q?=E5=8F=AF=E8=A7=86=E5=8C=96=EF=BC=8C=E7=AC=AC=E4=BA=8C=E9=83=A8?= =?UTF-8?q?=E5=88=86=EF=BC=9A=E4=BA=A4=E4=BA=92=20(#4941)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Starrier:data-visualization-2 * Starrier:update the translation in my self. * Starrier:update the article with some modifications. 
* Update data-visualization-with-bokeh-in-python-part-ii-interactions.md --- ...th-bokeh-in-python-part-ii-interactions.md | 204 +++++++++--------- 1 file changed, 102 insertions(+), 102 deletions(-) diff --git a/TODO1/data-visualization-with-bokeh-in-python-part-ii-interactions.md b/TODO1/data-visualization-with-bokeh-in-python-part-ii-interactions.md index 91a49d99dbb..07f497fcd02 100644 --- a/TODO1/data-visualization-with-bokeh-in-python-part-ii-interactions.md +++ b/TODO1/data-visualization-with-bokeh-in-python-part-ii-interactions.md @@ -2,73 +2,73 @@ > * 原文作者:[Will Koehrsen](https://towardsdatascience.com/@williamkoehrsen?source=post_header_lockup) > * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) > * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/data-visualization-with-bokeh-in-python-part-ii-interactions.md](https://github.com/xitu/gold-miner/blob/master/TODO1/data-visualization-with-bokeh-in-python-part-ii-interactions.md) -> * 译者: -> * 校对者: +> * 译者:[Starrier](https://github.com/Starrier) +> * 校对者:[TrWestdoor](https://github.com/TrWestdoor) -# Data Visualization with Bokeh in Python, Part II: Interactions +# 利用 Python 中 Bokeh 实现数据可视化,第二部分:交互 -**Moving beyond static plots** +**超越静态图的图解** -In the [first part](https://towardsdatascience.com/data-visualization-with-bokeh-in-python-part-one-getting-started-a11655a467d4) of this series, we walked through creating a basic histogram in [Bokeh](https://bokeh.pydata.org/en/latest/), a powerful Python visualization library. The final result, which shows the distribution of arrival delays of flights departing New York City in 2013 is shown below (with a nice tooltip!): +本系列的[第一部分](https://github.com/xitu/gold-miner/blob/master/TODO1/data-visualization-with-bokeh-in-python-part-one-getting-started.md) 中,我们介绍了在 [Bokeh](https://bokeh.pydata.org/en/latest/)(Python 中一个强大的可视化库)中创建的一个基本柱状图。最后的结果显示了 2013 年从纽约市起飞的航班延迟到达的分布情况,如下所示(有一个非常好的工具提示): ![](https://cdn-images-1.medium.com/max/800/1*rNBU4zoqIk_iEzMGufiRhg.png) -This chart gets the job done, but it’s not very engaging! Viewers can see the distribution of flight delays is nearly normal (with a slight positive skew), but there’s no reason for them to spend more than a few seconds with the figure. +这张表完成了任务,但并不是很吸引人!用户可以看到航班延迟的几乎是正常的(有轻微的斜率),但他们没有理由在这个数字上花几秒钟以上的时间。 -If we want to create more engaging visualization, we can allow users to explore the data on their own through interactions. For example, in this histogram, one valuable feature would be the ability to select specific airlines to make comparisons or the option to change the width of the bins to examine the data in finer detail. Fortunately, these are both features we can add on top of our existing plot using Bokeh. The initial development of the histogram may have seemed involved for a simple plot, but now we get to see the payoff of using a powerful library like Bokeh! +如果我们想创建更吸引人的可视化数据,可以允许用户通过交互方式来获取他们想要的数据。比如,在这个柱状图中,一个有价值的特性是能够选择指定航空公司进行比较,或者选择更改容器的宽度来更详细地检查数据。辛运的是,我们可以使用 Bokeh 在现有的绘图基础上添加这两个特性。柱状图的最初开发似乎只涉及到了一个简单的图,但我们现在即将体验到像 Bokeh 这样的强大的库的所带来的好处! -All the code for this series is [available on GitHub](https://github.com/WillKoehrsen/Bokeh-Python-Visualization/tree/master/interactive). I encourage anyone to check it out for all the data cleaning details (an uninspiring but necessary part of data science) and to experiment with the code!(For interactive Bokeh plots, we can still use a Jupyter Notebook to show the results or we can write Python scripts and run a Bokeh server. 
For development, I usually work in a Jupyter Notebook because it is easier to rapidly iterate and change plots without having to restart the server. I then move to a server to display the final results. You can see both a standalone script and the full notebook on GitHub.) +本系列的所有代码[都可在 GitHub 上获得](https://github.com/WillKoehrsen/Bokeh-Python-Visualization/tree/master/interactive)。任何感兴趣的人都可以查看所有的数据清洗细节(数据科学中一个不那么鼓舞人心但又必不可少的部分),也可以亲自运行它们!(对于交互式 Bokeh 图,我们仍然可以使用 Jupyter Notebook 来显示结果,我们也可以编写 Python 脚本,并运行 Bokeh 服务器。我通常使用 Jupyter Notebook 进行开发,因为它可以在不重启服务器的情况下,就可以很容易的快速迭代和更改绘图。然后我将它们迁移到服务器中来显示最终结果。你可以在 GitHub 上看到一个独立的脚本和完整的笔记)。 -### Active Interactions +### 主动的交互 -There are two classes of interactions in Bokeh: passive and active. Passive interactions, covered in Part I, are also known as inspectors because they allow users to examine a plot in more detail but do not change the information displayed. One example is a tooltip that appears when a user hovers over a data point: +在 Bokeh 中,有两类交互:被动的和主动的。第一部分所描述的被动交互也称为 inspectors,因为它们允许用户更详细地检查一个图,但不允许更改显示的信息。比如,当用户悬停在数据点上时出现的工具提示: ![](https://cdn-images-1.medium.com/max/800/1*3A33DOx2NL0h53SfsgPrzg.png) -Tooltip, a passive interactor +工具提示,被动交互器 -The second class of interaction is called active because it changes the actual data displayed on the plot. This can be anything from selecting a subset of the data (such as specific airlines) to changing the degree of a polynomial regression fit. There are multiple types of [active interactions in Bokeh](https://bokeh.pydata.org/en/latest/docs/user_guide/interaction.html), but here we will focus on what are called “widgets”, elements that can be clicked on and that give the user control over some aspect of the plot. +第二类交互被称为 active,因为它更改了显示在绘图上的实际数据。这可以是从选择数据的子集(例如指定的航空公司)到改变匹配多项式回归拟合程度中的任何数据。在 Bokeh 中有多种类型的 [active 交互](https://bokeh.pydata.org/en/latest/docs/user_guide/interaction.html),但这里我们将重点讨论“小部件”,可以被单击,而且用户能够控制某些绘图方面的元素。 ![](https://cdn-images-1.medium.com/max/600/1*3DV5TiCbiSSmEck5BhOjnQ.png) ![](https://cdn-images-1.medium.com/max/600/1*1lcSC9fMxSd2nqul_twj2Q.png) -Example of Widgets (dropdown button and radio button group) +小部件示例(下拉按钮和单选按钮组) -When I view graphs, I enjoy playing with active interactions ([such as those on FlowingData](http://flowingdata.com/2018/01/23/the-demographics-of-others/)) because they allow me to do my own exploration of the data. I find it more insightful to discover conclusions from the data on my own (with some direction from the designer) rather than from a completely static chart. Moreover, giving users some amount of freedom allows them to come away with slightly different interpretations that can generate beneficial discussion about the dataset. +当我查看图时,我喜欢主动的交互([比如那些在 FlowingData 上的交互](http://flowingdata.com/2018/01/23/the-demographics-of-others/)),因为它们允许我自己去研究数据。我发现让人印象更深刻的是从我自己的数据中发现的结论(从设计者那里获取的一些研究方向),而不是从一个完全静态的图表中发现的结论。此外,给予用户一定程度的自由,可以让他们对数据集提出更有用的讨论,从而产生不同的解释。 -### Interaction Outline +### 交互概述 -Once we start adding active interactions, we need to move beyond single lines of code and into functions that encapsulate specific actions. 
For a Bokeh widget interaction, there are three main functions that to implement: +一旦我们开始添加主动交互,我们就需要越过单行代码,深入封装特定操作的函数。对于 Bokeh 小部件的交互,有三个主要函数可以实现: -* `make_dataset()` Format the specific data to be displayed -* `make_plot()`Draw the plot with the specified data -* `update()` Update the plot based on user selections +* `make_dataset()` 格式化想要显示的特定数据 +* `make_plot()` 用指定的数据进行绘图 +* `update()` 基于用户选择来更新绘图 -#### Formatting the Data +#### 格式化数据 -Before we can make the plot, we need to plan out the data that will be displayed. For our interactive histogram, we will offer users three controllable parameters: +在我们绘制这个图之前,我们需要规划将要显示的数据。对于我们的交互柱状图,我们将为用户提供三个可控参数: -1. Airlines displayed (called carriers in the code) -2. Range of delays on the plot, for example: -60 to +120 minutes -3. Width of histogram bin, 5 minutes by default +1. 航班显示(在代码中称为运营商) +2. 绘图中的时间延迟范围,例如:-60 到 120 分钟 +3. 默认情况下,柱状图的容器宽度是 5 分钟 -For the function that makes the dataset for the plot, we need to allow each of these parameters to be specified. To inform how we will transform the data in our `make_dataset` function, lets load in all the relevant data and inspect. +对于生成绘图数据集的函数,我们需要允许指定每个参数。为了告诉我们如何转换 `make_dataset` 函数中的数据,我们需要加载所有相关数据进行检查。 ![](https://cdn-images-1.medium.com/max/800/1*oGphn8rw5GEmy9-tnHanuA.png) -Data for histogram +柱状图数据 -In this dataset, each row is one separate flight. The `arr_delay`column is the arrival delay of the flight in minutes (negative numbers means the flight was early). In part I, we did some data exploration and know there are 327,236 flights with a minimum delay of -86 minutes and a maximum delay of +1272 minutes. In the `make_dataset`function, we will want to select airlines based on the `name` column in the dataframe and limit the flights by the `arr_delay` column. +在此数据集中,每一行都是一个单独的航班。`arr_delay` 列是航班到达延误数分钟(负数表示航班提前到达)。在第一部分中,我们做了一些数据探索,知道有 327,236 次航班,最小延误时间为 -86 分钟,最大延误时间为 1272 分钟。在 `make_dataset` 函数中,我们想基于 dataframe 中的 `name` 列来选择公司,并用 `arr_delay` 列来限制航班。 -To make the data for the histogram, we use the numpy function `histogram` which counts the number of data points in each bin. In our case, this is the number of flights in each specified delay interval. For part I, we made a histogram for all flights, but now we will do it by each carrier. As the number of flights for each carrier varies significantly, we can display the delays not in raw counts but in proportions. That is, the height on the plot corresponds to the fraction of all flights for a specific airline with a delay in the corresponding bin. To go from counts to a proportion, we divide the count by the total count for the airline. +为了生成柱状图的数据,我们使用 numpy 函数 `histogram` 来统计每个容器中的数据点数。在我们的示例中,这是每个指定延迟间隔中的航班数。对于第一部分,我们做了一个包含所有航班的柱状图,但现在我们会为每一个运营商都提供一个柱状图。由于每个航空公司的航班数目有很大差异,我们可以显示延迟而不是按原始数目显示,可以按比例显示。也就是说,图上的高度对应于特定航空公司的所有航班比例,该航班在相应的容器中有延迟。从计数到比例,我们除以航空公司的总数。 -Below is the full code for making the dataset. The function takes in a list of carriers that we want to include, the minimum and maximum delays to be plotted, and the specified bin width in minutes. +下面是生成数据集的完整代码。函数接受我们希望包含的运营商列表,要绘制的最小和最大延迟,以及制定的容器宽度(以分钟为单位)。 -``` +```Python def make_dataset(carrier_list, range_start = -60, range_end = 120, bin_width = 5): - # Check to make sure the start is less than the end! + # 为了确保起始点小于终点而进行检查 assert range_start < range_end, "Start must be less than end!" 
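    # 下面先建立一个空的 DataFrame,用来逐个累积每家航空公司的直方图数据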
by_carrier = pd.DataFrame(columns=['proportion', 'left', 'right', @@ -76,67 +76,67 @@ def make_dataset(carrier_list, range_start = -60, range_end = 120, bin_width = 5 'name', 'color']) range_extent = range_end - range_start - # Iterate through all the carriers + # 遍历所有运营商 for i, carrier_name in enumerate(carrier_list): - # Subset to the carrier + # 运营商子集 subset = flights[flights['name'] == carrier_name] - # Create a histogram with specified bins and range + # 创建具有指定容器和范围的柱状图 arr_hist, edges = np.histogram(subset['arr_delay'], bins = int(range_extent / bin_width), range = [range_start, range_end]) - # Divide the counts by the total to get a proportion and create df + # 将极速除以总数,得到一个比例,并创建 df arr_df = pd.DataFrame({'proportion': arr_hist / np.sum(arr_hist), 'left': edges[:-1], 'right': edges[1:] }) - # Format the proportion + # 格式化比例 arr_df['f_proportion'] = ['%0.5f' % proportion for proportion in arr_df['proportion']] - # Format the interval + # 格式化间隔 arr_df['f_interval'] = ['%d to %d minutes' % (left, right) for left, right in zip(arr_df['left'], arr_df['right'])] - # Assign the carrier for labels + # 为标签指定运营商 arr_df['name'] = carrier_name - # Color each carrier differently + # 不同颜色的运营商 arr_df['color'] = Category20_16[i] - # Add to the overall dataframe + # 添加到整个 dataframe 中 by_carrier = by_carrier.append(arr_df) - # Overall dataframe + # 总体 dataframe by_carrier = by_carrier.sort_values(['name', 'left']) - # Convert dataframe to column data source + # 将 dataframe 转换为列数据源 return ColumnDataSource(by_carrier) ``` -(I know this is a post about Bokeh, but you can’t make a graph without formatted data, so I included the code to demonstrate my methods!) +(我知道这是一篇关于 Bokeh 的博客,但在你不能在没有格式化数据的情况下来生成图表,因此我使用了相应的代码来演示我的方法!) -The results of running the function with all of the carriers is below: +运行带有所需运营商的函数结果如下: ![](https://cdn-images-1.medium.com/max/800/1*yKvJztYW6m6k07FxaqdadQ.png) -As a reminder, we are using the Bokeh `quad` glyphs to make the histogram and so we need to provide the left, right, and top of the glyph (the bottom will be fixed at 0). These are in the `left`, `right`, and `proportion` columns respectively. The color column gives each carrier a unique color and the `f_` columns provide formatted text for the tooltips. +作为提醒,我们使用 Bokeh `quad` 表来制作柱状图,因此我们需要提供表的左、右和顶部(底部将固定为 0)。它们分别在罗列在 `left`、`right` 以及 `proportion`。颜色列为每个运营商提供了唯一的颜色,`f_` 列为工具提供了格式化文本的功能。 -The next function to implement is `make_plot`. 
The function should take in a ColumnDataSource [(a specific type of object used in Bokeh for plotting)](https://bokeh.pydata.org/en/latest/docs/reference/models/sources.html) and return the plot object: +下一个要实现的函数是 `make_plot`。函数应该接受 ColumnDataSource [(Bokeh 中用于绘图的一种特定类型对象)](https://bokeh.pydata.org/en/latest/docs/reference/models/sources.html)并返回绘图对象: -``` +```Python def make_plot(src): - # Blank plot with correct labels + # 带有正确标签的空白图 p = figure(plot_width = 700, plot_height = 700, title = 'Histogram of Arrival Delays by Carrier', x_axis_label = 'Delay (min)', y_axis_label = 'Proportion') - # Quad glyphs to create a histogram + # 创建柱状图的四种符号 p.quad(source = src, bottom = 0, top = 'proportion', left = 'left', right = 'right', color = 'color', fill_alpha = 0.7, hover_fill_color = 'color', legend = 'name', hover_fill_alpha = 1.0, line_color = 'black') - # Hover tool with vline mode + # vline 模式下的悬停工具 hover = HoverTool(tooltips=[('Carrier', '@name'), ('Delay', '@f_interval'), ('Proportion', '@f_proportion')], @@ -150,159 +150,159 @@ def make_plot(src): return p ``` -If we pass in a source with all airlines, this code gives us the following plot: +如果我们向所有航空公司传递一个源,此代码将给出以下绘图: ![](https://cdn-images-1.medium.com/max/800/1*-IcPPBWctsiOuh870pRbJg.png) -This histogram is very cluttered because there are 16 airlines plotted on the same graph! If we want to compare airlines, it’s nearly impossible because of the overlapping information. Luckily, we can add widgets to make the plot clearer and enable quick comparisons. +这个柱状图非常混乱,因为 16 家航空公司都绘制在同一张图上!因为信息被重叠了,所以如果我们想比较航空公司就显得不太现实。辛运的是,我们可以添加小部件来使绘制的图更清晰,也能够进行快速地比较。 -#### Creating Widget Interactions +#### 创建可交互的小部件 -Once we create a basic figure in Bokeh adding in interactions via widgets is relatively straightforward. The first widget we want is a selection box that allows viewers to select airlines to display. This control will be a check box which allows as many selections as desired and is known in Bokeh as a `CheckboxGroup.` To make the selection tool, we import the `CheckboxGroup` class and create an instance with two parameters, `labels`: the values we want displayed next to each box and `active`: the initial boxes which are checked. Here is the code to create a `CheckboxGroup` with all carriers. +一旦我们在 Bokeh 中创建一个基础图形,通过小部件添加交互就相对简单了。我们需要的第一个小部件是允许用户选择要显示的航空公司的选择框。这是一个允许根据需要进行尽可能多的选择的复选框控件,在 Bokeh 中称为T `CheckboxGroup.`。为了制作这个可选工具,我们需要导入 `CheckboxGroup` 类来创建带有两个参数的实例,`labels`:我们希望显示每个框旁边的值以及 `active`:检查选中的初始框。以下创建的 `CheckboxGroup` 代码中附有所需的运营商。 -``` +```Python from bokeh.models.widgets import CheckboxGroup -# Create the checkbox selection element, available carriers is a -# list of all airlines in the data +# 创建复选框可选元素,可用的载体是 +# 数据中所有航空公司组成的列表 carrier_selection = CheckboxGroup(labels=available_carriers, active = [0, 1]) ``` ![](https://cdn-images-1.medium.com/max/600/1*XpJfjyKacHR2VwdCIed-wA.png) -CheckboxGroup widget +CheckboxGroup 部件 -The labels in a Bokeh checkbox must be strings, while the active values are integers. This means that in the image ‘AirTran Airways Corporation’ maps to the active value of 0 and ‘Alaska Airlines Inc.’ maps to the active value of 1. When we want to match the selected checkboxes to the airlines, we need to make sure to find the _string_ names associated with the selected _integer_ active values. 
We can do this using the `.labels` and `.active` attributes of the widget: +Bokeh 复选框中的标签必须是字符串,但激活值需要的是整型。这意味着在在图像 ‘AirTran Airways Corporation’ 中,激活值为 0,而 ‘Alaska Airlines Inc.’ 激活值为 1。当我们想要将选中的复选框与 airlines 想匹配时,我们需要确保所选的**整型**激活值能匹配与之对应的**字符串**。我们可以使用部件的 `.labels` 和 `.active` 属性来实现。 -``` -# Select the airlines names from the selection values +```Python +# 从选择值中选择航空公司的名称 [carrier_selection.labels[i] for i in carrier_selection.active] ['AirTran Airways Corporation', 'Alaska Airlines Inc.'] ``` -After making the selection widget, we now need to link the selected airline checkboxes to the information displayed on the graph. This is accomplished using the `.on_change` method of the CheckboxGroup and an `update` function that we define. The update function always takes three arguments: `attr, old, new` and updates the plot based on the selection controls. The way we change the data displayed on the graph is by altering the data source that we passed to the glyph(s) in the `make_plot` function. That might sound a little abstract, so here’s an example of an `update` function that changes the histogram to display the selected airlines: +在制作完小部件后,我们现在需要将选中的航空公司复选框链接到图表上显示的信息中。这是使用 CheckboxGroup 的 `.on_change` 方法和我们定义的 `update` 函数完成的。update 函数总是具有三个参数:`attr、old、new`,并基于选择控件来更新绘图。改变图形上显示的数据的方式是改变我们传递给 `make_plot` 函数中的图形的数据源。这听起来可能有点抽象,因此下面是一个 `update` 函数的示例,该函数通过更改柱状图来显示选定的航空公司: -``` -# Update function takes three default parameters +```Python +# update 函数有三个默认参数 def update(attr, old, new): # Get the list of carriers for the graph - carriers_to_plot = [carrier_selection.labels[i] for i in + carriers_to_plot = [carrier_selection.labels[i] for i in carrier_selection.active] - # Make a new dataset based on the selected carriers and the - # make_dataset function defined earlier + # 根据被选中的运营商和 + # 先前定义的 make_dataset 函数来创建一个新的数据集 new_src = make_dataset(carriers_to_plot, range_start = -60, range_end = 120, bin_width = 5) - # Update the source used in the quad glpyhs + # update 在 quad glpyhs 中使用的源 src.data.update(new_src.data) ``` -Here, we are retrieving the list of airlines to display based on the selected airlines from the CheckboxGroup. This list is passed to the `make_dataset`function which returns a new column data source. We update the data of the source used in the glyphs by calling `src.data.update` and passing in the data from the new source. Finally, in order to link changes in the `carrier_selection` widget to the `update` function, we have to use the `.on_change` method (called an [event handler](https://bokeh.pydata.org/en/latest/docs/user_guide/interaction/widgets.html)). +这里,我们从 CheckboxGroup 中检索要基于选定航空公司显示的航空公司列表。这个列表被传递给 `make_dataset` 函数,它返回一个新的列数据源。我们通过调用 `src.data.update` 以及传入来自新源的数据更新图表中使用的源数据。最后,为了将 `carrier_selection` 小部件中的更改链接到 `update` 函数,我们必须使用 `.on_change` 方法(称为[事件处理器](https://bokeh.pydata.org/en/latest/docs/user_guide/interaction/widgets.html))。 -``` -# Link a change in selected buttons to the update function +```Python +# 将选定按钮中的更改链接到 update 函数 carrier_selection.on_change('active', update) ``` -This calls the update function any time a different airline is selected or unselected. The end result is that only glyphs corresponding to the selected airlines are drawn on the histogram, which can be seen below: +在选择或取消其他航班的时会调用 update 函数。最终结果是在柱状图中只绘制了与选定航空公司相对应的符号,如下所示: ![](https://cdn-images-1.medium.com/max/800/1*z36QoTv4AnbJqHLmKkLTZQ.gif) -#### More Controls +#### 更多控件 -Now that we know the basic workflow for creating a control we can add in more elements. 
Each time, we create the widget, write an update function to change the data displayed on the plot, and link the update function to the widget with an event handler. We can even use the same update function for multiple elements by rewriting the function to extract the values we need from the widgets. To practice, we will add two additional controls: a Slider which selects the bin width for the histogram, and a RangeSlider that sets the minimum and maximum delays to display. Here’s the code to make both of these widgets and the new update function: +现在我们已经知道了创建控件的基本工作流程,我们可以添加更多元素。我们每次创建小部件时,编写 update 函数来更改显示在绘图上的数据,通过事件处理器来将 update 函数链接到小部件。我们甚至可以通过重写函数来从多个元素中使用相同的 update 函数来从小部件中提取我们所需的值。在实践过程中,我们将添加两个额外的控件:一个用于选择柱状图容器宽度的 Slider,另一个是用于设置最小和最大延迟的 RangeSlider。下面是生成这些小部件和 update 函数的代码: -``` -# Slider to select the binwidth, value is selected number +```Python +# 滑动 bindwidth,对应的值就会被选中 binwidth_select = Slider(start = 1, end = 30, step = 1, value = 5, title = 'Delay Width (min)') -# Update the plot when the value is changed +# 当值被修改时,更新绘图 binwidth_select.on_change('value', update) -# RangeSlider to change the maximum and minimum values on histogram +# RangeSlider 用于修改柱状图上的最小最大值 range_select = RangeSlider(start = -60, end = 180, value = (-60, 120), step = 5, title = 'Delay Range (min)') -# Update the plot when the value is changed +# 当值被修改时,更新绘图 range_select.on_change('value', update) -# Update function that accounts for all 3 controls +# 用于 3 个控件的 update 函数 def update(attr, old, new): - # Find the selected carriers + # 查找选定的运营商 carriers_to_plot = [carrier_selection.labels[i] for i in carrier_selection.active] - # Change binwidth to selected value + # 修改 binwidth 为选定的值 bin_width = binwidth_select.value - # Value for the range slider is a tuple (start, end) + # 范围滑块的值是一个元组(开始,结束) range_start = range_select.value[0] range_end = range_select.value[1] - # Create new ColumnDataSource + # 创建新的列数据 new_src = make_dataset(carriers_to_plot, range_start = range_start, range_end = range_end, bin_width = bin_width) - # Update the data on the plot + # 在绘图上更新数据 src.data.update(new_src.data) ``` -The standard slider and the range slider are shown here: +标准滑块和范围滑块如下所示: ![](https://cdn-images-1.medium.com/max/800/1*QlrjWBxnHcBjHp24Xq2M3Q.png) -If we want, we can also change other aspects of the plot besides the data displayed using the update function. For example, to change the title text to match the bin width we can do: +只要我们想,出了使用 update 函数显示数据之外,我们也可以修改其他的绘图功能。例如,为了将标题文本与容器宽度匹配,我们可以这样做: -``` -# Change plot title to match selection +```Python +# 将绘图标题修改为匹配选择 bin_width = binwidth_select.value p.title.text = 'Delays with %d Minute Bin Width' % bin_width ``` -There are many other types of interactions in Bokeh, but for now, our three controls allow users plenty to “play” with on the chart! +在 Bokeh 中海油许多其他类型的交互,但现在,我们的三个控件允许运行在图标上“运行”! -### Putting it all together +### 把所有内容放在一起 -All the elements for our interactive plot are in place. We have the three necessary functions: `make_dataset`, `make_plot`, and `update` to change the plot based on the controls and the widgets themselves. We join all of these elements onto one page by defining a layout. 
+我们的所有交互式绘图元素都已经说完了。我们有三个必要的函数:`make_dataset`、`make_plot` 和 `update`,基于控件和系哦啊不见自身来更改绘图。我们通过定义布局将所有这些元素连接到一个页面上。 -``` +```Python from bokeh.layouts import column, row, WidgetBox from bokeh.models import Panel from bokeh.models.widgets import Tabs -# Put controls in a single element +# 将控件放在单个元素中 controls = WidgetBox(carrier_selection, binwidth_select, range_select) -# Create a row layout +# 创建行布局 layout = row(controls, p) -# Make a tab with the layout +# 使用布局来创建一个选项卡 tab = Panel(child=layout, title = 'Delay Histogram') tabs = Tabs(tabs=[tab]) ``` -I put the entire layout onto a tab, and when we make a full application, we can put each plot on a separate tab. The final result of all this work is below: +我将整个布局放在一个选项卡上,当我们创建一个完整的应用程序时,我们可以为每个绘图都创建一个单独的选项卡。最后的工作结果如下所示: ![](https://cdn-images-1.medium.com/max/800/1*5xN0M2CT1yAvpnzWM-bMhg.gif) -Feel free to check out the code and plot for yourself on [GitHub](https://github.com/WillKoehrsen/Bokeh-Python-Visualization/tree/master/interactive/exploration). +可以在 [GitHub](https://github.com/WillKoehrsen/Bokeh-Python-Visualization/tree/master/interactive/exploration) 上查看相关代码,并绘制自己的绘图。 -### Next Steps and Conclusions +### 下一步和内容 -The next part of this series will look at how we can make a complete application with multiple plots. We will be able to show our work on a server and access it in a browser, creating a full dashboard to explore the dataset. +本系列的下一部分将讨论如何使用多个绘图来制作一个完整的应用程序。我们将通过服务器来展示我们的工作结果,可以通过浏览器对其进行访问,并创建一个完整的仪表盘来探究数据集。 -We can see that the final interactive plot is much more useful than the original! We can now compare delays between airlines and change the bin widths/ranges to see how the distribution is affected. Adding interactivity raises the value of a plot because it increases engagement with the data and allows users to arrive at conclusions through their own explorations. Although setting up the initial plot was involved, we saw how we could easily add elements and control widgets to an existing figure. The customizability of plots and interactions are the benefits of using a heavier plotting library like Bokeh compared to something quick and simple like matplotlib. Different visualization libraries have different advantages and use-cases, but when we want to add the extra dimension of interaction, Bokeh is a great choice. Hopefully at this point you are confident enough to start developing your own visualizations, and please share anything you create! +我们可以看到,最终的互动绘图比原来的有用的多!我们现在可以比较航空公司之间的延迟,并更改容器的宽度/范围,来了解这些分布是如何被影响的。增加的交互性提高了绘图的价值,因为它增加了对数据的支持,并允许用户通过自己的探索得出结论。尽管设置了初始化的绘图,但我们仍然可以看到如何轻松地将元素和控件添加到现有的图形中。与像 matplotlib 这样快速简单的绘图库相比,使用更重的绘图库(比如 bokeh)可以定制化绘图和交互。不同的可视化库有不同的优点和用例,但当我们想要增加交互的额外维度时,Bokeh 是一个很好的选择。希望在这一点上,你有足够的信心来开发你自己的可视化绘图,也希望看到你可以分享自己的创作。 -I welcome feedback and constructive criticism and can be reached on Twitter [@koehrsen_will](https://twitter.com/koehrsen_will). 
+欢迎向我反馈以及建设性的批评,可以在 Twitter [@koehrsen_will](https://twitter.com/koehrsen_will) 上和我联系。 > 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 From 03e9bdb2f440cc5f46786635940d6edb1b86aad2 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E6=B8=85=E7=A7=8B?= <1044514593@qq.com> Date: Tue, 8 Jan 2019 21:21:45 +0800 Subject: [PATCH 31/54] =?UTF-8?q?UX=20=E8=AE=BE=E8=AE=A1=E5=AE=9E=E8=B7=B5?= =?UTF-8?q?=EF=BC=9A=E5=A6=82=E4=BD=95=E8=AE=BE=E8=AE=A1=E5=8F=AF=E6=89=AB?= =?UTF-8?q?=E6=8F=8F=E7=9A=84=20Web=20=E7=95=8C=E9=9D=A2=20(#4947)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * 初稿 * 二次修改 * 修改链接地址 * Update TODO1/ux-design-practices-how-to-make-web-interface-scannable.md Co-Authored-By: Ivocin <1044514593@qq.com> * Update TODO1/ux-design-practices-how-to-make-web-interface-scannable.md Co-Authored-By: Ivocin <1044514593@qq.com> * Update TODO1/ux-design-practices-how-to-make-web-interface-scannable.md Co-Authored-By: Ivocin <1044514593@qq.com> * Update TODO1/ux-design-practices-how-to-make-web-interface-scannable.md Co-Authored-By: Ivocin <1044514593@qq.com> * Update TODO1/ux-design-practices-how-to-make-web-interface-scannable.md Co-Authored-By: Ivocin <1044514593@qq.com> * Update TODO1/ux-design-practices-how-to-make-web-interface-scannable.md Co-Authored-By: Ivocin <1044514593@qq.com> * Update TODO1/ux-design-practices-how-to-make-web-interface-scannable.md Co-Authored-By: Ivocin <1044514593@qq.com> * Update TODO1/ux-design-practices-how-to-make-web-interface-scannable.md Co-Authored-By: Ivocin <1044514593@qq.com> * 更新译者 * 根据 Moonliujk 校对意见修改 * Update ux-design-practices-how-to-make-web-interface-scannable.md * Update ux-design-practices-how-to-make-web-interface-scannable.md * Update ux-design-practices-how-to-make-web-interface-scannable.md --- ...ces-how-to-make-web-interface-scannable.md | 132 +++++++++--------- 1 file changed, 66 insertions(+), 66 deletions(-) diff --git a/TODO1/ux-design-practices-how-to-make-web-interface-scannable.md b/TODO1/ux-design-practices-how-to-make-web-interface-scannable.md index 54504db3f40..fc3f00edc58 100644 --- a/TODO1/ux-design-practices-how-to-make-web-interface-scannable.md +++ b/TODO1/ux-design-practices-how-to-make-web-interface-scannable.md @@ -2,145 +2,145 @@ > * 原文作者:[Tubik Studio](https://uxplanet.org/@tubikstudio?source=post_header_lockup) > * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) > * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/ux-design-practices-how-to-make-web-interface-scannable.md](https://github.com/xitu/gold-miner/blob/master/TODO1/ux-design-practices-how-to-make-web-interface-scannable.md) -> * 译者: -> * 校对者: +> * 译者:[Ivocin](https://github.com/Ivocin) +> * 校对者:[生糸](https://github.com/Mcskiller), [Junkai Liu](https://github.com/Moonliujk) -# UX Design Practices: How to Make Web Interface Scannable +# UX 设计实践:如何设计可扫描的 Web 界面 ![](https://cdn-images-1.medium.com/max/1000/1*F6I_CHGUZzQ6mekt2H2C8A.png) -Day by day we are overwhelmed with massive information flow both offline and online. Due to new technologies and fast internet connection, people can produce more content than they are physically able to consume. Dealing with numerous websites and apps, users don’t read everything they see word by word — they first scan the page to find out why and how it’s useful for them. So, scannability is one of the essential factors of website usability today. 
Today’s article explores the phenomenon and gives tips on how to make the digital product scannable. +我们每天被大量的线上或线下的信息流压的不堪重负。由于新技术的发展和快速的互联网连接,人们生成的内容比他们能够接受的更多。面对众多网站和应用程序时,用户不会逐字逐句地阅读所有内容 —— 他们会首先扫描页面,看一下这些内容对他们是否有用。因此,可扫描性是当今网站可用性的重要因素之一。本文探究了这一现象,并且提供了如何使数字产品可扫描的技巧。 ![](https://cdn-images-1.medium.com/max/1000/1*93f_FurS9JjwZS6lXwJDow.png) -### What Is Scannability? +### 什么是可扫描性? -Applied to a page or screen, the verb “scan” means to glance at/over or read hastily. So, scannability is the way to present the content and navigation elements as the layout that can be scanned easily. Interacting with a website, especially the first time, users quickly look through the content to analyze whether it’s what they need. Any piece of the content may become a hook in this process: words, sentences, images, or animations. +对于页面或屏幕,动词“扫描”意味着匆匆一瞥或匆匆阅读。因此,可扫描性是将内容和导航元素呈现为可被轻松扫描的布局的方式。尤其是首次与网站交互时,用户一般都是快速查看内容,然后分析这些内容是不是他们所需要的。任何以下内容都可能成为这个过程的一个障碍:单词、句子、图像或动画。 -By the way, this behavior is nothing new: for many decades, people often do the same with a new magazine or newspaper looking through them before they start attentive reading of the articles. What’s more, reading from the screen is much more tiring than on paper, so users are more selective when and where they are ready to bother. +顺便说一句,这种行为并不是什么新鲜事。几十年来,人们经常在新的杂志或报纸上做着相同的事情:在开始仔细阅读文章之前先浏览一遍。另外,从屏幕上阅读比在纸上阅读更累,因此用户会更具选择性地阅读,当他们开始厌烦的时候就会放弃阅读。 -Why is that important? About a decade ago [Jacob Nielsen](https://www.nngroup.com/articles/how-users-read-on-the-web/) answered the question “How people read on the Web?” simply: “They don’t. People rarely read Web pages word by word; instead, they scan the page, picking out individual words and sentences”. Since then it hasn’t changed much: we aren’t ready to invest our time and effort into exploring the website if we aren’t sure it corresponds to our needs. So, if an eye has nothing to be caught with at the first minutes of introduction, the risk is high that the user will go away. Whatever is the type of a website, scannability is one of the significant factors of its user-friendly nature. +为什么可扫描性很重要?大约十年前,[Jacob Nielsen](https://www.nngroup.com/articles/how-users-read-on-the-web/) 回答了“人们如何在网上阅读?”的问题。他的回答非常简单:“他们没有。人们很少逐字阅读网页;相反,他们扫描页面,挑选个别的单词和句子阅读”。从那时起没有太大变化的是:当我们不确定一个网站是否满足我们的需求时,我们不太会花时间和精力去浏览它。因此,如果没有在第一分钟抓住用户的眼球,那么用户离开网页的风险会很高。无论网站的类型是什么,可扫描性都是其用户友好性的重要因素之一。 -How can you check if the webpage is scannable? Try to look at it as a first-time user and answer two questions: +如何检查网页是否可扫描?可以尝试将自己视为首次使用者并回答如下两个问题: -_– Does what you see in the first couple of minutes correspond to what target audience expects from this page?_ + - **你在前几分钟看到的内容是否符合目标受众对此页面的期望?** -_– Can you understand what kind of information is on the page for the first minute or two?_ +- **你能在前两分钟了解页面上的信息类型吗?** -If you aren’t sure that both answers are positive, perhaps it’s time to think how to strengthen the website scannability. It’s worth investing time because well-scanned pages become much more efficient in the following aspects: +如果这两个答案不都是正面的,也许是时候考虑如何加强网站的可扫描性了。加强网站可扫描性是值得投入时间的,因为扫描性好的页面在以下方面会变得更加高效: -* users complete their tasks and achieve their goals quicker -* users make fewer mistakes in the search of content they need -* users understand the structure and navigation of the website faster -* the bounce rate is reduced -* the level of retaining users gets higher -* the website looks and feels more credible -* the SEO rates are affected positively. 
+* 用户更快速地完成任务并实现目标 +* 用户在搜索他们需要的内容时会更少出错 +* 用户可以更快地了解网站的结构和导航 +* 跳出率降低 +* 保留用户的水平越来越高 +* 网站看起来更可信 +* SEO 率受到积极影响 ![](https://cdn-images-1.medium.com/max/1000/1*jUY-rctiYdE64lhAmouZlw.png) -### Popular Scanning Patterns +### 流行的扫描模式 -The vital thing which interface designer has to consider is eye-scanning patterns that show how users interact with a webpage in the first seconds. When you understand HOW people scan the page or screen, you may prioritize the content and put WHAT users need into the most visible zones. This domain of [user research](https://uxplanet.org/user-research-empathy-is-the-best-ux-policy-5f966ba5bbdc) is supported by [Nielsen Norman Group](https://www.nngroup.com/articles/eyetracking-study-of-web-readers/) and provides designers and usability specialists with the better understanding of user behavior and interactions. +界面设计师必须考虑的重要事项是眼睛扫描模式,它表明用户在最初的几秒内与网页交互的方式。当你了解了人们如何扫描页面或屏幕时,就可以将内容进行优先级排序,并将用户需要的内容放入最明显的区域。这个[用户研究](https://uxplanet.org/user-research-empathy-is-the-best-ux-policy-5f966ba5bbdc)领域得到了 [Nielsen Norman 集团](https://www.nngroup.com/articles/eyetracking-study-of-web-readers/)的支持,帮助设计师和可用性专家更好地理解用户行为和交互。 -Different experiments collecting data on user eye-tracking have shown that there are several typical models along which visitors usually scan the website. +收集用户眼动追踪数据的不同实验表明,通常访客扫描网站会使用几种典型的模型。 ![](https://cdn-images-1.medium.com/max/800/0*XhTRNfV97UzNppny.png) -**Z-Pattern** is quite typical for the web pages with the uniform presentation of information and weak visual hierarchy. +**Z 模式** 对于具有统一信息呈现和弱视觉层次的网页而言是非常典型的。 ![](https://cdn-images-1.medium.com/max/800/0*hLPvt_yft0P_ZT_2.png) -Another scheme features **zig-zag pattern** typical for pages with visually divided content blocks. Again, the reader’s eyes go left to right starting from the upper left corner and moving across all the page to the upper right corner scanning the information in this initial zone of interaction. +另一种模式具有 **Z 字形图案**,该模式通常用于具有视觉上分割内容块的页面。同样,读者的眼睛从左上角开始从左到右移动,并在整个页面上移动到右上角,扫描这个初始交互区域中的信息。 ![](https://cdn-images-1.medium.com/max/800/0*wNMOr8uiYFLMGAb_.jpg) -One more model is **F-pattern** presented in the explorations by [Nielsen Norman Group](https://www.nngroup.com/articles/f-shaped-pattern-reading-web-content/) and showing that users often demonstrate the following flow of interaction: +另一个模型是 [Nielsen Norman 集团](https://www.nngroup.com/articles/f-shaped-pattern-reading-web-content/)探索发现的 **F 模式**,表明用户经常会经历以下交互流程: -* Users first read in a horizontal movement, usually across the upper part of the content area. This initial element forms the F’s top bar. -* Next, users move down the page a bit and then read across in a second horizontal movement that typically covers a shorter area than the previous movement. This additional element forms the F’s lower bar. -* Finally, users scan the content’s left side in a vertical movement. Sometimes this is a fairly slow and systematic scan that appears as a solid stripe on an eye-tracking heatmap. Other times users move faster, creating a spottier heatmap. This last element forms the F’s stem. +* 用户首先水平移动阅读,通常跨越内容区域的上部。这个初始元素构成了 F 的顶部栏。 +* 接下来,用户稍微向下移动页面,然后在第二个水平移动中读取,该移动通常覆盖比先前移动更短的区域。这个额外的元素形成了 F 的下栏。 +* 最后,用户以垂直移动扫描内容的左侧。有时这是一个相当缓慢和系统的过程,在眼动追踪热图上显示为实心条纹。有时用户扫描得更快,会创建一个带有斑点的热力图。最后构成了字母 F 的主干。 -### Tips on Improving Scannability +### 提高可扫描性的技巧 -#### 1. Prioritize the content with visual hierarchy +#### 1. 
使用视觉层次对内容进行优先级排序 -Basically, [visual hierarchy](https://tubikstudio.com/9-effective-tips-on-visual-hierarchy/) is the way to arrange and organize the content on the page in the way which is the most natural for human perception. The main goal behind it is to let users understand the importance level of each piece of content. So, if the visual hierarchy is applied, the users will see the key content first. +基本上,[视觉层次](https://tubikstudio.com/9-effective-tips-on-visual-hierarchy/)是按照人类感知最自然的方式,在页面上排列和组织内容的方式。其背后的主要目标是让用户了解每块内容的重要性级别。因此,如果应用了视觉层次,用户将会首先看到关键内容。 -For example, when we see the article in the blog, we’ll get the headline first, then subheadings and only then copy blocks. Does it mean that the information in the copy blocks has the low level of importance? Well, no, but this way users will be able to scan the headline and subheadings to understand if the article is useful and interesting for them instead of trying to read all the text. And if the headline and subheadings are done properly and inform the user about the structure and contents of the article, this will be the factor convincing to read more. On the other hand, if users see the huge and long sheet of text not separated into chunks, they will be literally scared, not able to understand how long it will take to read this article and if it is worth investing their time and effort. +例如,当我们在博客中阅读文章时,我们首先会看到标题,然后是副标题,然后才是副本块。这是否意味着副本块中的信息不重要?其实不是这样,但通过这种方式用户就可以扫描标题和副标题,以了解文章是否对他们有吸引力,而不用阅读全文。如果标题和副标题起的恰当,它们能够告知用户文章的结构和内容,这会是说服用户去阅读更多的因素。另一方面,如果用户看到又大又长的没有分块的文本,他们会感到很害怕,因为无法得知阅读这篇文章需要多长时间,以及是否值得投入时间和精力。 -There are several main factors helping to build up the visual hierarchy: +有助于建立视觉层次的几个主要因素: -* size -* color -* contrast -* proximity -* negative space -* repetition. +* 尺寸 +* 颜色 +* 对比 +* 相近性 +* 负空间 +* 重复 -All of them help designers transform the set of elements, links, images and copy into the harmonic scannable system of the page layout. +所有这些都有助于设计人员将元素、链接、图像和副本集转换为由该页面布局组成的可扫描系统。 -#### 2. Put the core navigation into the website header +#### 2. 将核心导航放入网站头部 -All the mentioned eye-scanning patterns show that whichever of them a particular user follows, the scanning process will start in the top horizontal area of the webpage. Using it for showing the key zones of interaction and branding is a strategy supporting both sides. That is the basic reason why [website header design](https://uxplanet.org/best-practices-for-website-header-design-e0d55bf5f1e2) is considered as an essential issue by not only UI/UX designers but also content managers and marketing specialists. +所有上文提到的眼动扫描模式都显示,无论特定用户遵循哪种模式,扫描过程都会从网页的顶部水平区域开始。用它来展示交互和品牌的关键区域效果非常好。这也是 UI / UX 设计师、内容管理者和营销专家都认为[网站头部设计](https://uxplanet.org/best-practices-for-website-header-design-e0d55bf5f1e2)是一个关键点的原因。 -On the other hand, the header shouldn’t be overloaded: too much information makes it impossible to focus. The attempt to put everything into the top part of the page can transform the layout into the mess. So, in every particular case, it’s a must to analyze the goals of the core target audience, how they cross with the business goals behind the website and based on that — what information or navigation should be put into header as the most important. For example, if it’s a big e-commerce website, search functionality has to be instantly visible and is often found in the header to be accessible from any point of interaction. 
Whereas for the small corporate website, search functionality can be unnecessary at all but the immediately seen link to the portfolio will be crucial. +另一方面,标题不应该过长:太多的信息使得无法集中注意力。将所有内容放入页面顶部的尝试会将布局变得混乱不堪。因此,在每个特定情况下,必须分析核心目标受众的目标,他们如何与网站背后的业务目标交叉,并以此为基础 —— 哪些信息或导航应该作为最重要头部内容。例如,如果是大型电商网站,搜索功能必须立即可见,并且通常可以在头部找到,并能从任何交互点访问到。对于小型企业网站而言,搜索功能根本不需要,但是直接看到的投资组合的链接是至关重要的。 ![](https://cdn-images-1.medium.com/max/800/0*3w2BkBHrjlTYVgTw.gif) -[**The Gourmet Website**](https://dribbble.com/shots/3858039-The-Gourmet-Website-Interactions) +[**Gourmet 网站**](https://dribbble.com/shots/3858039-The-Gourmet-Website-Interactions) -#### 3. Keep the balance of negative space +#### 3. 保持负空间的平衡 -Negative space — or white space, as it’s often called — is the area of the layout which is left empty, not only around the objects in the layout but also between and inside them. [Negative space](https://tubikstudio.com/negative-space-in-design-tips-and-best-practices/) is a kind of breathing room for all the objects on the page or screen. It defines the limits of objects, creates the necessary bonds between them according to [Gestalt principles](https://uxplanet.org/gestalt-theory-for-ux-design-principle-of-proximity-e56b136d52d1) and builds up effective visual performance. In UI design for websites and mobile apps, negative space is a big factor of high [navigability](https://uxplanet.org/ui-ux-design-glossary-navigation-elements-b552130711c8) of the interface: without enough air, layout elements aren’t properly seen so users risk missing what they really need. It may be a strong reason for eye and brain tense although many users won’t be able to formulate the problem. A proper amount of negative space, especially micro space, solves it and makes the process more natural. +负空间 —— 或者通常称为空白区域 —— 是布局里的空白区域,不仅在布局中的对象周围,而且在它们之间和内部。[负空间](https://tubikstudio.com/negative-space-in-design-tips-and-best-practices/)是页面或屏幕上所有对象的一种呼吸空间。它定义了对象的界限,根据 [Gestalt 原则](https://uxplanet.org/gestalt-theory-for-ux-design-principle-of-proximity-e56b136d52d1)在它们之间创造了必要的联系,并建立了有效的视觉表现。在网站和移动应用程序的 UI 设计中,负空间是界面高[可导航性](https://uxplanet.org/ui-ux-design-glossary-navigation-elements-b552130711c8)的一个重要因素:没有足够的空气,布局元素没有被正确看到,因此用户可能会错过他们真正需要的东西。这可能是眼睛和大脑紧张的一个强有力的原因,尽管许多用户将无法明确表述这个问题。适量的负空间,特别是微空间,解决这个问题,并且使过程更自然。 -#### 4. Check that CTA is seen at once +#### 4. 检查能否立即看到 CTA -Obviously, the vast majority of web pages are aimed at particular actions which users have to complete. The elements that contain the call to action (CTA), usually [buttons](https://uxplanet.org/ux-practices-8-handy-tips-on-cta-button-design-682fdb9c65bc), should be seen in split seconds to let users understand what actions they can do on this page. Among the good tests is checking the page in the black-and-white and blurred modes. If in both cases you can distinguish CTA elements quickly, they are done well. For example, on the webpage of the [bakery website](https://uxplanet.org/case-study-vinnys-bakery-ui-design-for-e-commerce-2ffe7fae3600) shown below the CTA button of adding the item to the list is easily seen among the other elements. 
+显然,绝大多数网页目的在于用户必须完成的特定操作。包含号召性用语(CTA)的元素(通常是[按钮](https://uxplanet.org/ux-practices-8-handy-tips-on-cta-button-design-682fdb9c65bc))应在几秒钟内显示,以便用户了解他们可以在此页面上执行的操作。 在黑白和模糊模式下检查页面可以很好地测试这一点。如果在这两种情况下都可以快速区分 CTA 元素,说明这一点做的不错。例如,在下面显示的[面包店网站](https://uxplanet.org/case-study-vinnys-bakery-ui-design-for-e-commerce-2ffe7fae3600)的网页上,可以很容易地在其他元素中看到将物品添加到列表中的 CTA 按钮。 ![](https://cdn-images-1.medium.com/max/800/0*RI-R_E56dkdJ1DeN.png) -[**Vinny’s Bakery Website**](https://uxplanet.org/case-study-vinnys-bakery-ui-design-for-e-commerce-2ffe7fae3600) +[**Vinny’s 的面包店网站**](https://uxplanet.org/case-study-vinnys-bakery-ui-design-for-e-commerce-2ffe7fae3600) -#### 5. Test the readability of copy content +#### 5. 测试副本内容的可读性 -Readability defines how easy people can read words, phrases, and blocks of copy. Legibility measures how quickly and intuitively users can distinguish the letters in a particular typeface. These characteristics should be carefully considered, especially for the interfaces filled with a lot of text. The [color of the background](https://uxplanet.org/light-or-dark-ui-tips-to-choose-a-proper-color-scheme-for-user-interface-9a12004bb79e), amount of space around copy blocks, kerning, leading, type of font and font pairing — all these factors influence the ability to quickly scan the text and catch the content convincing users to stay. To prevent the problem, designers have to check if the [typography](https://uxplanet.org/typography-in-ui-guide-for-beginners-7ee9bdbc4833) laws are followed and whether the chosen fonts support general visual hierarchy and readability. [User testing](https://uxplanet.org/tests-go-first-usability-testing-in-design-574ffa18d81) will help to check how quickly and easily users are able to perceive the text. +可读性定义了人们阅读单词,短语和副本块的容易程度。易读性衡量用户如何快速直观地区分特定字体中的字母。应该仔细考虑这些特性,尤其是对于填充了大量文本的界面。[背景色](https://uxplanet.org/light-or-dark-ui-tips-to-choose-a-proper-color-scheme-for-user-interface-9a12004bb79e)、副本块周围的空间量、字距,行距、字体类型和字体配对 —— 所有这些因素都会影响快速扫描文本和捕获令用户留下的内容的能力。为了防止这个问题,设计人员必须检查是否遵循[排版](https://uxplanet.org/typography-in-ui-guide-for-beginners-7ee9bdbc4833)规则以及所选字体是否支持一般的视觉层次和可读性。[用户测试](https://uxplanet.org/tests-go-first-usability-testing-in-design-574ffa18d81)将有助于检查用户能够快速轻松地感知文本。 -#### 6. Apply numbers, not words +#### 6. 使用数字,而不是单词 -This piece of advice is based on another investigation by [Nielsen Norman Group](https://www.nngroup.com/articles/web-writing-show-numbers-as-numerals/). They shared an important finding: eye-tracking studies showed that in the process of scanning web pages, numerals often stop the wandering user’s eye and attract fixations, even embedded in a mass of words that would be ignored without numbers. We subconsciously associate numbers with facts, stats, sizes and distance — data which is potentially useful. So numbers included in copy catch reader’s attention while words representing numerals can be missed in the bulk of copy. What’s more, numbers are more compact than the textual numeral, so it makes the content more concise and time-saving for scanning. +这条建议是基于 [Nielsen Norman 集团](https://www.nngroup.com/articles/web-writing-show-numbers-as-numerals/)的另一项调查。他们分享了一个重要的发现:眼动追踪研究表明,在扫描网页的过程中,数字通常会阻止用户徘徊并吸引注视,相反大量可以没有数字的单词会被用户忽略。我们潜意识地将数字与事实、统计数据、大小和距离相关联 —— 这些数据可能是有用的。因此,副本中的数字可以吸引用户注意,而代表数字的单词可能在大量副本中被遗漏。更重要的是,数字比文本数字更紧凑,因此它使内容更简洁,更省时。 -#### 7. Place one idea in one paragraph +#### 7. 一个段落,一个想法 -Processing the copy content in the aspect of scannability, try not to make the bulks of text too long. 
Short paragraphs look more digestible and can be easier skipped in case the information is not valuable for the reader. So, follow the rule when you present one idea in one paragraph and start another one for a new thought. +在可扫描性方面处理副本内容,尽量不要使文本的内容太长。简短的段落看起来更容易消化,如果信息对读者没有价值,可以更容易跳过。因此,当你在一个段落中提出一个想法并为另一个段落开始另一个想法时,请遵循该规则。 ![](https://cdn-images-1.medium.com/max/800/0*fuMEd3aJ3gviUZZP.gif) -[**Bjorn Design Studio Website**](https://dribbble.com/shots/2680255-Tubik-Studio-Bjo-rn) +[**Bjorn 设计工作室网站**](https://dribbble.com/shots/2680255-Tubik-Studio-Bjo-rn) -#### 8. Use numbered and bulleted lists +#### 8. 使用编号和项目符号列表 -One more good trick to make the text more scannable is using lists with numbers or bullets. They help to organize data clearly. Also, they catch user’s eye so the information won’t get lost in the general body of text. +使文本更易于扫描的另一个好方法是使用带有数字或项目符号的列表。它们有助于清晰地组织数据。此外,它们会引起用户的注意,因此信息不会在文本主体中丢失。 -#### 9. Highlight the key information in the text +#### 9. 突出显示文本中的关键信息 -Good old bold, italics and color highlighting are old school but they still work successfully. This way you may attract attention to the significant idea, definition, quote or other type of specific data included right into the paragraph. What’s more, the **clickable part of the text (links to other pages) must be visually marked**. We are used to seeing them underlined, still highlighting them additionally with color or bolder font is even more effective. +加粗、斜体和颜色高亮显示虽然老派,但仍然有效。通过这种方式,你可以将注意力集中在段落中包含的重要想法、定义、引用或其他类型的特定数据上。更重要的是,**文本的可点击部分(链接)必须在视觉上标注出来**。我们习惯于看到它们加下划线、但使用颜色高亮或加粗字体会更有效。 -#### 10. Use images and illustrations +#### 10. 使用图像和插图 -In web user interface design, [images](https://uxplanet.org/3c-of-interface-design-color-contrast-content-235b68fbd9a1) are highly supportive in setting the mood or transferring the message. They are the content which is both informative and emotionally appealing. Original illustration, prominent hero banners, engaging photos can easily catch users’ attention and support the general stylistic concept. What’s more, they play a big role in building visual hierarchy and make the copy content more digestible in combination with illustrations or photos. People perceive images faster than words which is an important factor for increased scannability. +在 Web 用户界面设计中,[图像](https://uxplanet.org/3c-of-interface-design-color-contrast-content-235b68fbd9a1)在表达情绪或传递消息方面是非常有帮助的,它们饱含信息和吸引力。原始插图,突出的英雄横幅,引人入胜的照片可以很容易地吸引用户的注意力,并支持一般的风格概念。更重要的是,它们在构建视觉层次方面发挥了重要作用,并使副本内容与插图或照片相结合,更容易消化。人们感知图像比理解文字更快,这是提高可扫描性的重要因素。 ![](https://cdn-images-1.medium.com/max/800/0*cTIMfaqYBRGeppEn.png) -[**Financial Service Website**](https://dribbble.com/shots/3905908-Financial-Service-Website) +[**金融服务网站**](https://dribbble.com/shots/3905908-Financial-Service-Website) -Improving scannability of the web pages, designers and content creators show real respect to website users. This way we save users’ time and effort providing them with organized, harmonic, valuable and attractive content. 
+提高网页的可扫描性,是设计人员和内容创建者对网站用户的真正尊重。这样我们就可以节省用户的时间和精力,为他们提供有组织,和谐的,有价值和有吸引力的内容。 * * * -**_Originally written for_** [**_tubikstudio.com_**](https://tubikstudio.com/) +**最初为 [tubikstudio.com](https://tubikstudio.com/) 而写** -_Welcome to see the designs by Tubik Studio on_ [**_Dribbbl_**](https://dribbble.com/Tubik) and [**_Behance_**](https://www.behance.net/Tubik) +**欢迎到 [Dribbble](https://dribbble.com/Tubik) 和 [Behance](https://www.behance.net/Tubik) 观看 Tubik Studio 的设计。** > 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 From 0fa33f54edb566d5e64390322ec7fd81f45dbf75 Mon Sep 17 00:00:00 2001 From: icy Date: Tue, 8 Jan 2019 21:47:13 +0800 Subject: [PATCH 32/54] =?UTF-8?q?Transducers:=20JavaScript=20=E4=B8=AD?= =?UTF-8?q?=E9=AB=98=E6=95=88=E7=9A=84=E6=95=B0=E6=8D=AE=E5=A4=84=E7=90=86?= =?UTF-8?q?=20pipeline=20(#4950)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * translation complete Change-Id: Ie3b30dd3203c2ca2f94736fed7c2fb1e8a69bad5 * Update transducers-efficient-data-processing-pipelines-in-javascript.md * Update transducers-efficient-data-processing-pipelines-in-javascript.md * Update transducers-efficient-data-processing-pipelines-in-javascript.md --- ...data-processing-pipelines-in-javascript.md | 371 +++++++++--------- 1 file changed, 183 insertions(+), 188 deletions(-) diff --git a/TODO1/transducers-efficient-data-processing-pipelines-in-javascript.md b/TODO1/transducers-efficient-data-processing-pipelines-in-javascript.md index d3c7a341c8b..b84484ed488 100644 --- a/TODO1/transducers-efficient-data-processing-pipelines-in-javascript.md +++ b/TODO1/transducers-efficient-data-processing-pipelines-in-javascript.md @@ -2,72 +2,69 @@ > * 原文作者:[Eric Elliott](https://medium.com/@_ericelliott?source=post_header_lockup) > * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) > * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/transducers-efficient-data-processing-pipelines-in-javascript.md](https://github.com/xitu/gold-miner/blob/master/TODO1/transducers-efficient-data-processing-pipelines-in-javascript.md) -> * 译者: -> * 校对者: +> * 译者:[Raoul1996](https://github.com/Raoul1996) +> * 校对者:[ElizurHz](https://github.com/ElizurHz), [Yangfan2016](https://github.com/Yangfan2016) -# Transducers: Efficient Data Processing Pipelines in JavaScript +# Transducers:JavaScript 中高效的数据处理 Pipeline -![](https://cdn-images-1.medium.com/max/2000/1*uVpU7iruzXafhU2VLeH4lw.jpeg) +![Smoke Art Cubes to Smoke](https://user-gold-cdn.xitu.io/2019/1/8/1682919845dd2203?w=2000&h=910&f=jpeg&s=120724) -Smoke Art Cubes to Smoke — MattysFlicks — (CC BY 2.0) +Smoke Art Cubes to Smoke — MattysFlicks — (CC BY 2.0) -section-inner sectionLayout--insetColumn"> +> 注意:这是从头开始学 JavaScript ES6+ 中的函数式编程和组合软件技术中 “撰写软件” 系列的一部分。敬请关注,我们会讲述大量关于这方面的知识! +> [< 上一篇](https://github.com/xitu/gold-miner/blob/master/TODO1/curry-and-function-composition.md) | [<< 从第一篇开始](https://github.com/xitu/gold-miner/blob/master/TODO1/composing-software-an-introduction.md) -> Note: This is part of the “Composing Software” series on learning functional programming and compositional software techniques in JavaScript ES6+ from the ground up. Stay tuned. There’s a lot more of this to come! 
-> [< Previous](https://github.com/xitu/gold-miner/blob/master/TODO1/curry-and-function-composition.md) | [<< Start over at Part 1](https://github.com/xitu/gold-miner/blob/master/TODO1/composing-software-an-introduction.md) +在使用 transducer 之前,你首先要完全搞懂[**复合函数(function composition)**](https://juejin.im/post/5c0dd214518825444758453a)和 [**reducers**](https://github.com/xitu/gold-miner/blob/master/TODO1/reduce-composing-software.md) 是什么。 -Prior to taking on transducers, you should first have a strong understanding of both [**function composition**](https://github.com/xitu/gold-miner/blob/master/TODO1/composing-software-an-introduction.md) and [**reducers**](https://github.com/xitu/gold-miner/blob/master/TODO1/reduce-composing-software.md)**.** +> Transduce:源于 17 世纪的科学术语(latin name 一般指学名)“transductionem”,意为“改变、转换”。它更早衍生自“transducere/traducere”,意思是“引导或者跨越、转移”。 -> Transduce: Derived from the 17th century scientific latin, “transductionem” means “to change over, convert”. It is further derived from “transducere/traducere”, which means “to lead along or across, transfer”. +一个 transducer 是一个可组合的高阶 reducer。以一个 reducer 作为输入,返回另外一个 reducer。 -A transducer is a composable higher-order reducer. It takes a reducer as input, and returns another reducer. +Transducers 是: -Transducers are: +* 可组合使用的简单功能集合 +* 对大型集合或者无限流有效:不管 pipeline 中的操作数量有多少,都只对单一元素进行一次枚举。 +* 能够转换任何可枚举的源(例如,数组、树、流、图等...) +* 无需更换 transducer pipeline,即可用于惰性或热切求值(译者注:[求值策略](https://zh.wikipedia.org/wiki/%E6%B1%82%E5%80%BC%E7%AD%96%E7%95%A5))。 -* Composable using simple function composition -* Efficient for large collections or infinite streams: Only enumerates over the elements once, regardless of the number of operations in the pipeline -* Able to transduce over any enumerable source (e.g., arrays, trees, streams, graphs, etc…) -* Usable for either lazy or eager evaluation with no changes to the transducer pipeline +Reducer 将多个输入 **折叠(fold)** 成单个输出,其中“折叠”可以用几乎任何产生单个输出的二进制操作替换,例如: -Reducers _fold_ multiple inputs into single outputs, where “fold” can be replaced with virtually any binary operation that produces a single output, such as: - -``` -// Sums: (1, 2) = 3 +```js +// 求和: (1, 2) = 3 const add = (a, c) => a + c; -// Products: (2, 4) = 8 +// 求乘积: (2, 4) = 8 const multiply = (a, c) => a * c; -// String concatenation: ('abc', '123') = 'abc123' +// 字符串拼接: ('abc', '123') = 'abc123' const concatString = (a, c) => a + c; -// Array concatenation: ([1,2], [3,4]) = [1, 2, 3, 4] +// 数组拼接: ([1,2], [3,4]) = [1, 2, 3, 4] const concatArray = (a, c) => [...a, ...c]; ``` +Transducer 做了很多相同的事情,但是和普通的 reducer 不同,transducer 可以使用正常地组合函数组合。换句话说,你可以组合任意数量的 tranducer,组成一个将每个 transducer 组件串联在一起的新 transducer。 -Transducers do much the same thing, but unlike ordinary reducers, transducers are composable using normal function composition. In other words, you can combine any number of transducers to form a new transducer which links each component transducer together in series. - -Normal reducers can’t compose, because they expect two arguments, and only return a single output value, so you can’t simply connect the output to the input of the next reducer in the series. The types don’t line up: +普通的 reducer 不能这样(组合)。因为它需要两个参数,只返回一个输出值。所以你不能简单地将输出连接到串联中下一个 reducer 的输入。这样会出现类型不符合的情况: -``` +```js f: (a, c) => a g: (a, c) => a h: ??? ``` -Transducers have a different signature: +Transducers 有着不同的签名: -``` +```js f: reducer => reducer g: reducer => reducer h: reducer => reducer ``` -### Why Transducers? +### 为什么选择 Transducer? 
-Often, when we process data, it’s useful to break up the processing into multiple independent, composable stages. For example, it’s very common to select some data from a larger set, and then process that data. You may be tempted to do something like this: +通常,处理数据时,将处理分解成多个独立的可组合阶段很有用。例如,从较大的集合中选择一些数据然后处理该数据非常常见。你可能会这么做: -``` +```js const friends = [ { id: 1, name: 'Sting', nearMe: true }, { id: 2, name: 'Radiohead', nearMe: true }, @@ -88,17 +85,17 @@ console.log(results); // => ["Sting", "Radiohead", "Echo"] ``` -This is fine for small lists like this, but there are some potential problems: +这对于像这样的小型列表来说很好,但是存在一些潜在的问题: -1. This only works for arrays. What about potentially infinite streams of data coming in from a network subscription, or a social graph with friends-of-friends? +1. 这仅仅只适用于数组。对于那些来自网络订阅的潜在无限数据流,或者朋友的朋友的社交图如何处理呢? -2. Each time you use the dot chaining syntax on an array, JavaScript builds up a whole new intermediate array before moving onto the next operation in the chain. If you have a list of 2,000,000 “friends” to wade through, that could slow things down by an order of magnitude or two. With transducers, you can stream each friend through the complete pipeline without building up intermediate collections between them, saving lots of time and memory churn. +2. 每次在数组上使用点链语法(dot chaining syntax)时,JavaScript 都会构建一个全新的中间数组,然后再转到链中的下一个操作。如果你有一个 2,000,000 名“朋友”的名单,这可能会使数据处理减慢一两个数量级。使用 transducer,你可以通过完整的 pipeline 流式传输每个朋友,而无需在它们之间建立中间集合,从而节省大量时间和内存。 -3. With dot chaining, you have to build different implementations of standard operations, like `.filter()`, `.map()`, `.reduce()`, `.concat()`, and so on. The array methods are built into JavaScript, but what if you want to build a custom data type and support a bunch of standard operations without writing them all from scratch? Transducers can potentially work with any transport data type: Write an operator once, use it anywhere that supports transducers. +3. 使用点链,你必须构建标准操作的不同实现。如 `.filter()`、`.map()`、`.reduce()`、`.concat()` 等。数组方法内置在 JavaScript 中,但是如果你想构建自定义数据类型并支持一堆标准操作而且还不需要重头进行编写,改怎么办?Transducer 可以使用任何传输数据类型:编写一次操作符,在支持 transducer 的任何地方使用它。 -Let’s see what this would look like with transducers. This code won’t work yet, but follow along, and you’ll be able to build every piece of this transducer pipeline yourself: +让我们看看 transducer。这段代码还不能工作,但是还请继续,你将能够自己构建这个 transducer pipeline 的每一部分: -``` +```js const friends = [ { id: 1, name: 'Sting', nearMe: true }, { id: 2, name: 'Radiohead', nearMe: true }, @@ -119,80 +116,80 @@ const getFriendsNearMe = compose( const results2 = toArray(getFriendsNearMe, friends); ``` -Transducers don’t do anything until you tell them to start and feed them some data to process, which is why we need `toArray()`. It supplies the transducible process and tells the transducer to transduce the results into a new array. You could tell it to transduce to a stream, or an observable, or anything you like, instead of calling `toArray()`. +在你告诉他们开始并向他们提供一些数据进行处理之前,transducer 不会做任何事情。这就是我们为什么需要使用 `toArray()`。他提供传导过程并告诉 transducer 将结果转换成新数组。你可以告诉它转换一个流、一个 observable,或者任何你喜欢的东西,而不仅仅只是调用 `toArray()`。 -A transducer could map numbers to strings, or objects to arrays, or arrays to smaller arrays, or not change anything at all, mapping `{ x, y, z } -> { x, y, z }`. Transducers may also filter parts of the signal out of the stream `{ x, y, z } -> { x, y }`, or even generate new values to insert into the output stream, `{ x, y, z } -> { x, xx, y, yy, z, zz }`. 
+Transducer 可以将数字映射(mapping)成字符串,或者将对象映射到数组,或者将数组映射成更小的数组,或者根本不做任何改变,映射 `{ x, y, z } -> { x, y, z }`。Transducer 可以过滤流中的部分信号 `{ x, y, z } -> { x, y }`,甚至可以生成新值插入到输出流中,`{ x, y, z } -> { x, xx, y, yy, z, zz }`。 -I will use the words “signal” and “stream” somewhat interchangeably in this section. Keep in mind when I say “stream”, I’m not referring to any specific data type: simply a sequence of zero or more values, or _a list of values expressed over time._ +我将在本节中使用“信号(signal)”和“流(stream)”等词语。请记住,当我说“流”时,我并不是指任何特定的数据类型:只是一个有零个或者多个值的序列,或者**随时间表达的值列表。** -### Background and Etymology +### 背景和词源 -In hardware signal processing systems, a transducer is a device which converts one form of energy to another, e.g., audio waves to electrical, as in a microphone transducer. In other words, it transforms one kind of signal into another kind of signal. Likewise, a transducer in code converts from one signal to another signal. +在硬件信号处理系统中,transducer(换能器)是将一种形式的能量转换成另一种形式的装置。例如,麦克风换能器将音频波转换为电能。换句话说,它将一种信号转换成为另一种信号。同样,代码中的 transducer 将一个信号转换成另一个信号。 -Use of the word “transducers” and the general concept of composable pipelines of data transformations in software date back at least to the 1960s, but our ideas about how they should work have changed from one language and context to the next. Many software engineers in the early days of computer science were also electrical engineers. The general study of computer science in those days often dealt both with hardware and software design. Hence, thinking of computational processes as “transducers” was not particularly novel. It’s possible to encounter the term in early computer science literature — particularly in the context of Digital Signal Processing (DSP) and **data flow programming.** +软件找那个使用 “transducer” 一词和数据转换的可组合 pipeline 的通用概念至少可以追溯到 20 世纪 60 年代,但是我们对于他们应该如何工作的想法已经从一种语言和上下文转变为下一种语言。在计算机科学的早期,许多软件工程师也是电气工程师。当时对计算机科学的一般研究经常涉及到硬件和软件设计。因此,将计算过程视为 “transducer” 并不是特别新颖。在早期的计算机科学文献中可能会遇到这个术语 —— 特别是在数字信号处理(DSP)和**数据流编程**的背景下。 -In the 1960s, groundbreaking work was happening in graphical computing in MIT’s Lincoln Laboratory using the TX-2 computer system, a precursor to the US Air Force SAGE defense system. Ivan Sutherland’s famous [Sketchpad](https://dspace.mit.edu/handle/1721.1/14979), developed in 1961–1962, was an early example of object prototype delegation and graphical programming using a light pen. +在 20 世纪 60 年代,麻省理工学院林肯实验室的图形计算开始使用 TX-2 计算机系统,这是美国空军 SAGE 防御系统的前身。Ivan Sutherland 著名的 [Sketchpad](https://dspace.mit.edu/handle/1721.1/14979),于 1961 年至 1962 年开发,是使用光笔进行对象原型委派和图形编程的早期例子。 -Ivan’s brother, William Robert “Bert” Sutherland was one of several pioneers in data flow programming. He built a data flow programming environment on top of Sketchpad, which described software “procedures” as directed graphs of operator nodes with outputs linked to the inputs of other nodes. He wrote about the experience in his 1966 paper, [“The On-Line Graphical Specification of Computer Procedures”](https://dspace.mit.edu/handle/1721.1/13474). Instead of arrays and array processing, everything is represented as a stream of values in a continuously running, interactive program loop. Each value is processed by each node as it arrives at the parameter input. 
You can find similar systems today in [Unreal Engine’s Blueprints Visual Scripting Environment](https://docs.unrealengine.com/en-us/Engine/Blueprints) or [Native Instruments’ Reaktor](https://www.native-instruments.com/en/products/komplete/synths/reaktor-6/), a visual programming environment used by musicians to build custom audio synthesizers. +Ivan 的兄弟 William Robert “Bert” Sutherland 是数据流编程的几个先驱之一。他在 Sketchpad 上构建了一个数据流编程环境。它将软件“过程”描述为操作员节点的有向图,其输出连接到其他节点的输入。他在 1966 年的论文 [“The On-Line Graphical Specification of Computer Procedures”](https://dspace.mit.edu/handle/1721.1/13474) 中写下了这段经历。在连续运行的交互式程序循环中,所有内容都表示为值的流,而不是数组和处理中的数组。每个节点在到达参数输入时处理每个值。你现在可以在[虚拟蓝图引擎 Visual Scripting Environment](https://docs.unrealengine.com/en-us/Engine/Blueprints) 或 [Native Instruments’ Reaktor](https://www.native-instruments.com/en/products/komplete/synths/reaktor-6/) 找到类似的系统,这是一种音乐家用来构建自定义音频合成器的可视化编程环境。 -![](https://cdn-images-1.medium.com/max/800/1*nAe0WLXecnMGNalPclnFfw.png) +![ Bert Sutherland 撰写的运营商组成图](https://user-gold-cdn.xitu.io/2019/1/8/168291981b06d06c?w=800&h=707&f=png&s=63423) -Composed graph of operators from Bert Sutherland’s paper +Bert Sutherland 撰写的运营商组成图 -As far as I’m aware, the first book to popularize the term “transducer” in the context of general purpose software-based stream processing was the 1985 MIT text book for a computer science course called [“Structure and Interpretation of Computer Programs”](https://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871/ref=as_li_ss_tl?ie=UTF8&qid=1507159222&sr=8-1&keywords=sicp&linkCode=ll1&tag=eejs-20&linkId=44b40411506b45f32abf1b70b44574d2) (SICP) by Harold Abelson and Gerald Jay Sussman, with Julie Sussman. However, the use of the term “transducer” in the context of digital signal processing predates SICP. +据我所知,第一本在基于通用软件的流处理环境中推广 “transducer” 一词的书是 1985 年 MIT 计算机科学课程 [“Structure and Interpretation of Computer Programs”](https://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871/ref=as_li_ss_tl?ie=UTF8&qid=1507159222&sr=8-1&keywords=sicp&linkCode=ll1&tag=eejs-20&linkId=44b40411506b45f32abf1b70b44574d2) 的教科书(SICP)。该书由 Harold Abelson、Gerald Jay Sussman、Julie Sussman 和撰写。然而在数字信号处理中使用术语 “transducer” 早于 SICP。 -> **Note:** SICP is still an excellent introduction to computer science coming from a functional programming perspective. It remains my favorite book on the topic. +> **注**:从函数式编程的角度来看,SICP 仍然是对计算机科学出色的介绍。它仍然是这个主题中我最喜欢的书。 -More recently, transducers have been independently rediscovered and a _different protocol_ developed for Clojure by **Rich Hickey** (circa 2014), who is famous for carefully selecting words for concepts based on etymology. In this case, I’d say he nailed it, because Clojure transducers fill almost exactly the same niche as transducers in SICP, and they share many common characteristics. However, they are _not strictly the same thing._ +最近,transducer 已经重新被独立发掘。并且 **Rich Hickey**(大约 2014 年)为 Clojure 开发了一个**不同的协议**,他以精心选择基于词源的概念词而闻名。这时,我就会说他说的太棒了,因为 Clojure 的 transducer 的内在基本和 SICP 中的相同,并且他们也具有了很多共性。但是,他们**并非严格相同。** -Transducers as a general concept (not specifically Hickey’s protocol specification) have had considerable impact on important branches of computer science including data flow programming, signal processing for scientific and media applications, networking, artificial intelligence, etc. 
As we develop better tools and techniques to express transducers in our application code, they are beginning to help us make better sense of every kind of software composition, including user interface behaviors in web and mobile apps, and in the future, could also serve us well to help manage the complexity of augmented reality, autonomous devices and vehicles, etc. +Transducer 作为一般概念(不是 Hickey 的协议规范)来讲,对计算机科学的重要分支产生了相当大的影响,包括数据流编程、科学和媒体应用的信号处理、网络、人工智能等等。随着我们开发更好的工具和技术在我们打应用代码中阐释 transducer,它们开始帮助我们更好的理解各种软件组合,包括 Web 和 易用应用程序中的用户界面行为,并且在将来,还可以很好地帮助我们管理复杂的 AR(augmented reality),自主设备和车辆等。 -For the purpose of this discussion, when I say “transducer”, I’m not referring to SICP transducers, though it may sound like I’m describing them if you’re already familiar with transducers from SICP. I’m also not referring _specifically_ to Clojure’s transducers, or the transducer protocol that has become a de facto standard in JavaScript (supported by Ramda, Transducers-JS, RxJS, etc…). I’m referring to the _general concept of a higher-order reducer — _a transformation of a transformation. +为了讨论起见,当我说 “transducer” 时,我并不是指 SICP transducer,尽管如果你已经熟悉了 SICP transducer,可能听起来像是在讲述它们。我也没有**具体**提到 Clojure 的 transducer,或者已经成为 JavaScript 事实标准的 transducer 协议(由 Ramda、Transducer-JS、RxJS等支持...)。我指的是**高阶 reducer**的一般概念 —— 变幻的转换。 -In my view, the particular details of the transducer protocols matter a whole lot less than the general principles and underlying mathematical properties of transducers, however, if you want to use transducers in production, my current recommendation is to use an existing library which implements the transducers protocol for interoperability reasons. +在我看来,transducer 协议的特定细节比 transducer 的一般原理和基本数学特性重要的多,但是如果你想在生产中使用 transducer,为了满足互操作性,我目前的建议是使用现有的库来实现 transducer 协议。 -The transducers that I will describe here should be considered pseudo-code to express the concepts. They are _not compatible with the transducer protocol_, and _should not be used in production._ If you want to learn how to use a particular library’s transducers, refer to the library documentation. I’m writing them this way to lift up the hood and let you see how they work without forcing you to learn the protocol at the same time. +我将在这里描述的 transducer 应该是用伪代码来演示概念。它们**与 transducer 协议不兼容,不应该在生产中使用**。如果你想要学习如何使用特定库的 transducer,请参阅库文档。我这样写他们是为了引你入门,让你看看它们是如何工作的,而不是强迫你同时学习协议。 -When we’re done, you should have a better understanding of transducers in general, and how you might apply them in any context, with any library, in any language that supports closures and higher-order functions. +当我们完成后,你应该更好的理解 transducer,以及如何在任意的上下文中、与任意的库一起、在任何支持闭包和高阶函数的语言中使用它。 -### A Musical Analogy for Transducers +### Transducer 的音乐类比 -If you’re among the large number of software developers who are also musicians, a music analogy may be useful: You can think of transducers like signal processing gear (e.g., guitar distortion pedals, EQ, volume knobs, echo, reverb, and audio mixers). +如果你是众多既是音乐家又是软件的开发者的那群人中的一个,用音乐类比可能会很有用:你可以想到信号处理装置等传感器(如吉他失真踏板,均衡器,音量旋钮,回声,混响和音频混频器)。 -To record a song using musical instruments, we need some sort of physical transducer (i.e., a microphone) to convert the sound waves in the air into electricity on the wire. Then we need to route that wire to whatever signal processing units we’d like to use. For example, adding distortion to an electric guitar, or reverb to a voice track. 
Eventually this collection of different sounds must be aggregated together and mixed to form a single signal (or collection of channels) representing the final recording. +要使用乐器录制歌曲,我们需要某种物理传感器(即麦克风)来讲空气中的声波转换为电线上的电流。然后我们需要将该线路连接到我们想要使用的信号处理单元。例如,为电吉他加失真,或者对音轨进行混响。最终,这些不同声音的集合必须聚合在一起,混合来想成最终记录的单个信号(或者通道集合)。 -In other words, the signal flow might look something like this. Imagine the arrows are wires between transducers: +换句话说,信号流看起来可能是这样。把箭头想像成传感器之间的导线: ``` [ Source ] -> [ Mic ] -> [ Filter ] -> [ Mixer ] -> [ Recording ] ``` -In more general terms, you could express it like this: +更一般地说,你可以这么表达: ``` [ Enumerator ]->[ Transducer ]->[ Transducer ]->[ Accumulator ] ``` -If you’ve ever used music production software, this might remind you of a chain of audio effects. That’s a good intuition to have when you’re thinking about transducers, but they can be applied much more generally to numbers, objects, animation frames, 3d models, or anything else you can represent in software. +如果你曾经使用过音乐制作软件,这可能会让您想起一系列的音频效果。当你考虑 transducer 时,这是一个很好的直觉。但他们还可以更广泛的应用于数字、对象、动画帧、3D 模型或者任何你可以在软件中表示的其他内容。 -![](https://cdn-images-1.medium.com/max/1000/1*UBYaMsshNvLIn4mIHIlw-g.png) +![](https://user-gold-cdn.xitu.io/2019/1/8/168291981ed78a6b?w=1000&h=101&f=png&s=47421) -Screenshot: Renoise audio effects channel +屏幕截图:Renoise 音频效果通道。 -You may be experienced with something that behaves a little bit like a transducer if you’ve ever used the map method on arrays. For example, to double a series of numbers: +如果你曾在数组上使用 map 方法,你可能会对某些行为有点像 transducer 的东西熟悉。例如,要将一系列数字加倍: -``` +```js const double = x => x * 2; const arr = [1, 2, 3]; const result = arr.map(double); ``` -In this example, the array is an enumerable object. The map method enumerates over the original array, and passes its elements through the processing stage, `double`, which multiplies each element by 2, then accumulates the results into a new array. +在这个示例中,数组是可枚举对象。map 方法枚举原始数组,并将其元素传递给处理阶段 `double`,它将每个元素乘以 2,然后将结果累积到一个新数组中。 -You can even compose effects like this: +你甚至可以像这样构成效果: -``` +```js const double = x => x * 2; const isEven = x => x % 2 === 0; @@ -207,15 +204,15 @@ console.log(result); // [4, 8, 12] ``` -But what if you want to filter and double a potentially infinite stream of numbers, such as a drone’s telemetry data? +但是,如果你想过滤和加倍的可能是无限数字流,比如无人机的遥测数据呢? -Arrays can’t be infinite, and each stage in the array processing requires you to process the entire array before a single value can flow through the next stage in the pipeline. That same limitation means that composition using array methods will have degraded performance because a new array will need to be created and a new collection iterated over for each stage in the composition. +数组不能是无限的,并且数组处理过程中的每个阶段都要求你在单个值可以流经 pipeline 的下一个阶段之前处理整个数组。同样的问题意味着使用数组方法的合成会降低性能,因为需要创建一个新数组,并且合成中的每个阶段迭代一个新的集合。 -Imagine you have two sections of tubing, each of which represents a transformation to be applied to the data stream, and a string representing the stream. The first transformation represents the `isEven` filter, and the next represents the `double` map. In order to produce a single fully transformed value from an array, you'd have to run the entire string through the first tube first, resulting in a completely new, filtered array _before_ you can process even a single value through the `double` tube. When you finally do get to `double` your first value, you have to wait for the entire array to be doubled before you can read a single result. 
+想象一下,你有两段管道,每段都代表一个应用于数据流的转换,以及一个表示流的字符串。第一个转换表示 `isEven` 过滤器,下一个转换表示 `double` 映射。为了从数组中生成单个完全变换的值,你必须首先通过第一个管道运行整个字符串,从而产生一个全新的过滤数组,**然后**才能通过 `double` 管处理单个值。当你最终将第一个值 `double`,必须等待整个数组加倍才能读取单个结果。 -So, the code above is equivalent to this: +所以,上面的代码相当于: -``` +```js const double = x => x * 2; const isEven = x => x % 2 === 0; @@ -227,53 +224,52 @@ const result = tempResult.map(double); console.log(result); // [4, 8, 12] ``` +另一种方法是将值直接从过滤后的输出流式传输到映射转换,而无需在其间创建和迭代临时数组。将值一次一个地流过,无需在转换过程中对每个阶段迭代相同的集合,并且 transducer 可以随时发出停止信号,这意味着你不需要在集合中更深入地计算每个阶段。需要产生所需的值。 -The alternative is to flow a value directly from the filtered output to the mapping transformation without creating and iterating over a new, temporary array in between. Flowing the values through one at a time removes the need to iterate over the same collection for each stage in the transducing process, and transducers can signal a stop at any time, meaning you don’t need to enumerate each stage deeper over the collection than required to produce the desired values. - -There are two ways to do that: +有两种方法可以做到这一点: -* Pull: lazy evaluation, or -* Push: eager evaluation +* Pull:惰性求值,或者 +* Push:及早求值 -A pull API waits until a consumer asks for the next value. A good example in JavaScript is an `Iterable`, such as the object produced by a generator function. Nothing happens in the generator function until you ask for the next value by calling `.next()`on the iterator object it returns. +Pull API 等待 consumer 请求下一个值。JavaScript 中一个很好的例子是 `Iterable`。例如生成器函数生成的对象。在通过它在返回的迭代器对象上调用 `.next()` 来请求下一个值之前,生成器函数什么事情都不做。 -A push API enumerates over the source values and pushes them through the tubes as fast as it can. A call to `array.reduce()` is a good example of a push API. `array.reduce()` takes one value at a time from the array and pushes it through the reducer, resulting in a new value at the other end. For eager processes like array reduce, the process is immediately repeated for each element in the array until the entire array has been processed, blocking further program execution in the meantime. +Push API 枚举源值并尽可能快地将它们推送到管中。对于 `array.reduce()` 调用是 push API 的一个很好的例子。`array.reduce()` 从数组中一次获取一个值并将其推送到 reducer,从而在另一端产生一个新值。对于像 array reduce 这样的热切进程,会立即对数组中的每个元素重复该过程,直到处理完整个数组。在此期间,阻止进一步的程序执行。 -Transducers don’t care whether you pull or push. Transducers have no awareness of the data structure they’re acting on. They simply call the reducer you pass into them to accumulate new values. +Transducers 不关心你是 pull 还是 push。Transducers 不了解他们所采取的数据结构。他们只需调用你传递给它们的 reducer 来积累新值。 -Transducers are higher order reducers: Reducer functions that take a reducer and return a new reducer. Rich Hickey describes transducers as process transformations, meaning that as opposed to simply changing the values flowing through transducers, transducers change the processes that act on those values. +Transducers 是高阶 reducer: Reducer 函数采用 reducer 返回新的 reducer。Rich Hickey 将 transducer 描述为过程变换,这意味着 transducer 没有简单地改变流经的值,而是改变了作用这些值的过程。 -The signatures look like this: +签名应该是这样的: -``` +```js reducer = (accumulator, current) => accumulator transducer = reducer => reducer ``` -Or, to spell it out: +或者,拼出来: -``` +```js transducer = ((accumulator, current) => accumulator) => ((accumulator, current) => accumulator) ``` -Generally speaking though, most transducers will need to be partially applied to some arguments to specialize them. 
For example, a map transducer might look like this: +一般来说,大多数 transducer 需要部分应用于某些参数来专门化它们。例如,map transducer 可能如下所示: -``` +```js map = transform => reducer => reducer ``` -Or more specifically: +或者更具体地说: -``` +```js map = (a => b) => step => reducer ``` -In other words, a map transducer takes a mapping function (called a transform) and a reducer (called the `step` function), and returns a new reducer. The `step` function is a reducer to call when we've produced a new value to add to the accumulator in the next step. +换句话说,map transducer 采用映射函数(称为变换)和 reducer(称为 `step` 函数 ),返回新的 reducer。`Step` 函数是一个 reducer,当我们生成一个新值以下一步中添加到累加器时调用。 -Let’s look at some naive examples: +让我们看一些不成熟的例子: -``` +```js const compose = (...fns) => x => fns.reduceRight((y, f) => f(y), x); const map = f => step => @@ -298,17 +294,16 @@ const result = [1,2,3,4,5,6].reduce(xform, []); // [4, 8, 12] console.log(result); ``` +这包含了很多内容。让我们分解一下。`map` 将函数应用于某些上下文的值。在这种情况下,上下文是 transducer pipeline。看起来大致如下: -That’s a lot to absorb. Let’s break it down. `map` applies a function to the values inside some context. In this case, the context is the transducer pipeline. It looks roughly like this: - -``` +```js const map = f => step => (a, c) => step(a, f(c)); ``` -You can use it like this: +你可以像这样使用它: -``` +```js const double = x => x * 2; const doubleMap = map(double); @@ -318,20 +313,20 @@ const step = (a, c) => console.log(c); doubleMap(step)(0, 4); // 8doubleMap(step)(0, 21); // 42 ``` -The zeros in the function calls at the end represent the initial values for the reducers. Note that the step function is supposed to be a reducer, but for demonstration purposes, we can hijack it and log to the console. You can use the same trick in your unit tests if you need to make assertions about how the step function gets used. +函数调用末尾的零表示 reducer 的初始值。请注意,step 函数应该是 reducer,但出于演示目的,我们可以劫持它并打开控制台。如果需要对 step 函数的使用方式进行断言,则可以在单元测试中使用相同的技巧。 -Transducers get interesting when we compose them together. Let’s implement a simplified filter transducer: +当我们将它们组合在一起的时候,transducer 将会变得很有意思。让我们实现一个简化的 filter transducer: -``` +```js const filter = predicate => step => (a, c) => predicate(c) ? step(a, c) : a; ``` -Filter takes a predicate function and only passes through the values that match the predicate. Otherwise, the returned reducer returns the accumulator, unchanged. +Filter 采用 predicate 函数,只传递与 predicate 匹配的值。否则,返回的 reducer 返回累加器,不变。 -Since both of these functions take a reducer and return a reducer, we can compose them with simple function composition: +由于这两个函数都使用 reducer 并且返回了 reducer,因此我们可以使用简单的函数组合来组合它们: -``` +```js const compose = (...fns) => x => fns.reduceRight((y, f) => f(y), x); const isEven = n => n % 2 === 0; @@ -342,24 +337,22 @@ const doubleEvens = compose( map(double) ); ``` +这也将返回一个 transducer,需要我们必须提供最后一个 step 函数,以告诉 transducer 如何累积结果: -This will also return a transducer, which means we must supply a final step function in order to tell the transducer how to accumulate the result: - -``` +```js const arrayConcat = (a, c) => a.concat([c]); const xform = doubleEvens(arrayConcat); ``` +此调用结果是标准的 reducer,我们可以直接传递给任何兼容的 reduce API。第二个参数表示 reduction 的初始值。这种情况下是一个空数组: -The result of this call is a standard reducer that we can pass directly to any compatible reduce API. The second argument represents the initial value of the reduction. 
In this case, an empty array: - -``` +```js const result = [1,2,3,4,5,6].reduce(xform, []); // [4, 8, 12] ``` -If this seems like a lot of work, keep in mind there are already functional programming libraries that supply common transducers along with utilities such as `compose`, which handles function composition, and `into`, which transduces a value into the given empty value, e.g.: +如果这看起来像是做了很多,请记住,已经有函数编程库提供常见的 transducer 以及诸如 `compose` 工具程序,他们处理函数组合,并将值转换为给定的空值。例如: -``` +```js const xform = compose( map(inc), filter(isEven) @@ -368,51 +361,51 @@ const xform = compose( into([], xform, [1, 2, 3, 4]); // [2, 4] ``` -With most of the required tools already in the tool belt, programming with transducers is really intuitive. +由于工具带中已经有了大多数所需的工具,因此使用 transducer 进行编程非常直观。 -Some popular libraries which support transducers include Ramda, RxJS, and Mori. +一些支持 transducer 的流行库包括 Ramda、RxJS 和 Mori。 -### Transducers Compose Top-to-Bottom +### 由上至下组合 transducers -Transducers under standard function composition (`f(g(x))`) apply top to bottom/left-to-right rather than bottom-to-top/right-to-left. In other words, using normal function composition, `compose(f, g)` means "compose `f` _after_ `g`". Transducers wrap around other transducers under composition. In other words, a transducer says "I'm going to do my thing, and _then_ call the next transducer in the pipeline", which has the effect of turning the execution stack inside out. +标准函数组成下的 transducer 从上到下/从左到右而非从下到上/从右到左应用。也就是说,使用正常函数组合,`compose(f, g)` 表示“在 `g` **之后**复合 `f`”。Transducer 在组成下纠缠其他 transducer。换言之,transducer 说“我要做我的事情,**然后**调用管道中下一个 transducer”,这会将执行堆栈内部转出。 -Imagine you have a stack of papers, the top labeled, `f`, the next, `g`, and the next `h`. For each sheet, take the sheet off the top of the stack and place it onto the top of a new adjacent stack. When you're done, you'll have a stack whose sheets are labeled `h`, then `g`, then `f`. +想象一下,你有一沓纸,顶部的一个标有 `f`,下一个是 `g`,再下面是 `h`。对于每张纸,将纸张从纸沓的顶部取出,然后将其放到相邻的新的一沓纸的顶部。当你这样做之后,你将获得一个栈,其内容标记为 `h`,然后是 `g`,然后是 `f`。 -### Transducer Rules +### Transducer 规则 -The examples above are naive because they ignore the rules that transducers must follow for interoperability. +上面的例子不太成熟,因为他们忽略了 transducer 必须遵循的互操作性(interoperability)规则 -As with most things in software, transducers and transducing processes need to obey some rules: +和软件中的大部分内容一样,transducer 和转换过程需要遵循一些规则: -1. Initialization: Given no initial accumulator value, a transducer must call the step function to produce a valid initial value to act on. The value should represent the empty state. For example, an accumulator that accumulates an array should supply an empty array when its step function is called with no arguments. +1. 初始化:如果没有初始的累加器值,transducer 必须调用 step 函数来产生有效的初始值进行操作。该值应该表示空状态。例如,累积数组的累加器应该在没有参数的情况下调用其 step 函数时提供空数组。 -2. Early termination: A process that uses transducers must check for and stop when it receives a reduced accumulator value. Additionally, a transducer step function that uses a nested reduce must check for and convey reduced values when they are encountered. +2. 提前终止:使用 transducer 的进程必须在收到 reduce 过的累加器值时检查并停止。此外,对于嵌套 reduce 的 transducer,使用其 step 函数时必须在遇到时检查并传递 reduce 过的值。 -3. Completion (optional): Some transducing processes never complete, but those that do should call the completion function to produce a final value and/or flush state, and stateful transducers should supply a completion operation that cleans up any accumulated resources and potentially produces one final value. +3. 
完成(可选):某些转换过程永远不会完成,但那些转换过程应调用完成函数(completion function)来产生最终值/或刷新(flush)状态,并且状态 transducer 应提供完成的操作以清除任何积累的资源和可能产生最终的资源值。 -### Initialization +### 初始化 -Let’s go back to the `map` operation and make sure that it obeys the initialization (empty) law. Of course, we don't need to do anything special, just pass the request down the pipeline using the step function to create a default value: +让我们回到 `map` 操作并确保它遵守初始化(空)法则。当然,我们不需要做任何特殊的事情,只需要使用 step 函数在 pipeline 中传递请求来创建默认值: -``` +```js const map = f => step => (a = step(), c) => ( step(a, f(c)) ); ``` -The part we care about is `a = step()` in the function signature. If there is no value for `a` (the accumulator), we'll create one by asking the next reducer in the chain to produce it. Eventually, it will reach the end of the pipeline and (hopefully) create a valid initial value for us. +我们关心的部分是函数签名中的 `a = step()`。如果 `a`(累加器)没有值,我们将通过链中的下一个 reducer 来生成它。最终,它将到达 pipeline 的末端,并(但愿)为我们创建有效的初始值。 -Remember this rule: When called with no arguments, a reducer should always return a valid initial (empty) value for the reduction. It’s generally a good idea to obey this rule for any reducer function, including reducers for React or Redux. +记住这条规则:当没有参数调用时,reducer 的操作应该总是为 reducer 返回一个有效的初始(空)值。对于任何 reducer 函数,包括 React 或 Redux 的 Reducer,遵守此规则通常是个好主意。 -### Early Termination +### 提前终止 -It’s possible to signal to other transducers in the pipeline that we’re done reducing, and they should not expect to process any more values. Upon seeing a `reduced` value, other transducers may decide to stop adding to the collection, and the transducing process (as controlled by the final `step()` function) may decide to stop enumerating over values. The transducing process may make one more call as a result of receiving a `reduced` value: The completion call mentioned above. We can signal that intention with a special reduced accumulator value. +可以向 pipeline 中的其他 transducer 发出信号,表明我们已经完成了 reduce,并且他们不应该期望再处理任何值。在看到 `reduced` 值时,其他 transducer 可以决定停止添加到集合,并且转换过程(由最终 `step()` 函数控制)可以决定停止枚举值。由于接收到 `reduced` 值,转换过程可以再调用一次:完成上述调用。我们可以通过特殊的 reduce 过的累加器来表示这个意图。 -What is a reduced value? It could be as simple as wrapping the accumulator value in a special type called `reduced`. Think of it like wrapping a package in a box and labelling the box with messages like "Express" or "Fragile". Metadata wrappers like this are common in computing. For example: http messages are wrapped in containers called "request" or "response", and those container types have headers that supply information like status codes, expected message length, authorization parameters, etc... +什么是 reduced 值?它可能像将累加器值包装在一个名为 `reduced` 的特殊类型中一样简单。可以把它想象包装盒子并用 "Express" 或 "Fragile" 这样的消息标记盒子。像这样的元数据包装器(metadata wrapper)在计算中很常见。例如:http 消息包含在名为 “request” 或 “response” 的容器中,这些容器类型提供了状态码、预期消息长度、授权参数等信息的表头... -Basically, it’s a way of sending multiple messages where only a single value is expected. 
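针对前文的初始化规则,这里再补充一个最小的示意例子(`arrayStep`、`double` 等名称均为演示假设,`map` 与上文的定义相同,为了自包含在此重复一遍):当调用方没有提供初始累加器时,管道可以通过不带参数地调用 step 函数拿到一个合法的空值。

```js
// 示意代码:一个遵守"无参调用时返回有效初始(空)值"规则的数组 reducer
const arrayStep = (a = [], c) =>
  c === undefined ? a : a.concat([c]);

// 与上文相同的 map transducer:没有累加器时,向下一级 step 请求初始值
const map = f => step =>
  (a = step(), c) => step(a, f(c));

const double = x => x * 2;
const doubleStep = map(double)(arrayStep);

// 即使不提供初始累加器,也能得到合法结果
console.log(doubleStep(undefined, 21)); // [42]

// 正常的 reduce 用法:用 arrayStep() 生成空值作为初始累加器
console.log([1, 2, 3].reduce(doubleStep, arrayStep())); // [2, 4, 6]
```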
A minimal (non-standard) example of a `reduced()` type lift might look like this: +基本上,它是一种发送多条信息的方式,其中只需要一个值。`reduced()` 类型提升的最小(非标准)示例可能如下所示: -``` +```js const reduced = v => ({ get isReduced () { return true; @@ -422,30 +415,30 @@ const reduced = v => ({ }); ``` -The only parts that are strictly required are: +唯一严格要求的部分是: -* The type lift: A way to get the value inside the type (e.g., the `reduced` function, in this case) -* Type identification: A way to test the value to see if it is a value of `reduced` (e.g., the `isReduced` getter) -* Value extraction: A way to get the value back out of the type (e.g., `valueOf()`) +* 类型提升:获取类型内部值的方法(例如,这种情况下的 `reduced` 函数) +* 类型识别:一种测试值以查看它是否为 `reduced` 值的方法(例如,`isReduced` getter) +* 值提取:一种从值中取出值的方法(例如,`valueOf()`) -`toString()` is included here strictly for debugging convenience. It lets you introspect both the type and the value at the same time in the console. +此处包含 `toString()` 以便于调试。它允许您在 console 中同时内省类型和值。 -### Completion +### 完成 -> “In the completion step, a transducer with reduction state should flush state prior to calling the nested transformer’s completion function, unless it has previously seen a reduced value from the nested step in which case pending state should be discarded.” ~ Clojure transducers documentation +> “在完成步骤中,具有刷新状态(flush state)的 transducer 应该在调用嵌套 transducer 的完成函数之前刷新状态,除非之前已经看到嵌套步骤中的 reduced 值,在这种情况下应该丢弃 pending 状态。” ~ Clojure transducer 文档 -In other words, if you have more state to flush after the previous function has signaled that it’s finished reducing, the completion step is the time to handle it. At this stage, you can optionally: +换句话说,如果在前一个函数表示已完成 reduce 后,有更多状态需要刷新,则完成函数是处理它的时间。在此阶段,你可以选择: -* Send one more value (flush your pending state) -* Discard your pending state -* Perform any required state cleanup +* 再发送一个值(刷新待处理状态) +* 丢弃 pending 状态 +* 执行任何所需的状态清理 ### Transducing -It’s possible to transduce over lots of different types of data, but the process can be generalized: +可以转换大量不同类型的数据,但是这个过程可以推广: -``` -// import a standard curry, or use this magic spell: +```js +// 导入标准 curry,或者使用这个魔术: const curry = ( f, arr = [] ) => (...args) => ( @@ -459,52 +452,52 @@ const transduce = curry((step, initial, xform, foldable) => ); ``` -The `transduce()` function takes a step function (the final step in the transducer pipeline), an initial value for the accumulator, a transducer, and a foldable. A foldable is any object that supplies a `.reduce()` method. +`transduce()` 函数采用 step 函数(transducer pipeline 的最后一步),累加器的初始值,transducer 并且可折叠。可折叠是提供 `.reduce()` 方法的任何对象。 -With `transduce()` defined, we can easily create a function that transduces to an array. 
First, we need a reducer that reduces to an array: +通过定义 `transduce()`,我们可以轻松创建一个转换为数组的函数。首先,我们需要一个 reduce 数组的 reducer: -``` +```js const concatArray = (a, c) => a.concat([c]); ``` -Now we can use the curried `transduce()` to create a partial application that transduces to arrays: +现在我们可以使用柯里化过的 `transduce()` 创建一个转换为数组的部分应用程序: -``` +```js const toArray = transduce(concatArray, []); ``` -With `toArray()` we can replace two lines of code with one, and reuse it in a lot of other situations, besides: +使用 `toArray()` 我们可以用一行替代两行代码,并在很多其他情况下复用它,除此之外: -``` -// Manual transduce: +```js +// 手动 transduce: const xform = doubleEvens(arrayConcat); const result = [1,2,3,4,5,6].reduce(xform, []); // => [4, 8, 12] -// Automatic transduce: +// 自动 transduce: const result2 = toArray(doubleEvens, [1,2,3,4,5,6]); console.log(result2); // [4, 8, 12] ``` -### The Transducer Protocol +### Transducer 协议 -Up to this point, I’ve been hiding some details behind a curtain, but it’s time to take a look at them now. Transducers are not really a single function. They’re made from 3 different functions. Clojure switches between them using pattern matching on the function’s arity. +到目前为止,我们一直在隐藏幕后一些细节,但现在是时候看看它们了。Transducer 并非真正的单一函数。他们由 3 种不同的函数组成。Clojure 使用函数的 arity 上的模式匹配并在它们之间切换。 -In computer science, the arity of a function is the number of arguments a function takes. In the case of transducers, there are two arguments to the reducer function, the accumulator and the current value. In Clojure, Both are _optional_, and the behavior changes based on whether or not the arguments get passed. If a parameter is not passed, the type of that parameter inside the function is `undefined`. +在计算机科学中,函数的 arity 是函数所采用参数的数量。在 transducer 的情况下,reducer 函数有两个参数,累加器和当前值。在 Clojure 中,两者都是**可选的**,并且函数的行为会根据参数是否通过而更改。如果没有传递参数,则函数中该参数的类型是 `undefined`。 -The JavaScript transducer protocol handles things a little differently. Instead of using function arity, JavaScript transducers are a function that take a transducer and return a transducer. The transducer is an object with three methods: +JavaScript transducer 协议处理的方式略有不同。JavaScript transducer 不是使用函数 arity,而是采用 transducer 并返回 transducer 的函数。Transducer 是一个有三种方法的对象: -* `init` Return a valid initial value for the accumulator (usually, just call the next `step()`). -* `step` Apply the transform, e.g., for `map(f)`: `step(accumulator, f(current))`. -* `result` If a transducer is called without a new value, it should handle its completion step (usually `step(a)`, unless the transducer is stateful). +* `init` 返回累加器的有效初始值(通常,只需要调用下一步 `step()`)。 +* `step` 应用变换,例如,对于 `map(f)`:`step(accumulator, f(current))`。 +* `result` 如果在没有新值的情况下调用 transducer,它应该处理其完成步骤(通常是 `step(a)`,除非 transducer 是有状态的)。 -> **Note:** The transducer protocol in JavaScript uses `@@transducer/init`, `_@@transducer/step_`_, and_ `_@@transducer/result_`_, respectively._ +> **注意:** JavaScript 中的 transducer 协议分别使用 `@@transducer/init`、`@@transducer/step` 和 `@@transducer/result`。 -Some libraries provide a `transducer()` utility that will automatically wrap your transducer for you. 
+有些库提供一个 `tranducer()` 工具程序,可以自动为你包装 transducer。 -Here is a less naive implementation of the map transducer: +这是一个不那么不成熟的 transducer 实现: -``` +```js const map = f => next => transducer({ init: () => next.init(), result: a => next.result(a), @@ -512,24 +505,24 @@ const map = f => next => transducer({ }); ``` -By default, most transducers should pass the `init()` call to the next transducer in the pipeline, because we don't know the transport data type, so we can't produce a valid initial value for it. +默认情况下,大多数 transducer 应该将 `init()` 调用传递给 pipeline 中的下一个 transducer,因为我们不知道传输数据类型,因此我们无法为它生成有效的初始值。 -Additionally, the special `reduced` object uses these properties (also namespaced `@@transducer/` in the transducer protocol: +此外,特殊的 `reduced` 对象使用这些属性(在 transducer 协议中也命名为 `@@transducer/`): -* `reduced` A boolean value that is always `true` for reduced values. -* `value` The reduced value. +* `reduced` 一个布尔值,对于 reduced 的值,该值始终为 `true`。 +* `value` reduced 的值。 -### Conclusion +### 结论 -**Transducers** are composable higher order reducers which can reduce over any underlying data type. +**Transducers** 是可组合的高阶 reducer,可以 reduce 任何基础数据类型。 -Transducers produce code that can be orders of magnitude more efficient than dot chaining with arrays, and handle potentially infinite data sets without creating intermediate aggregations. +Transducers 产生的代码比使用数组进行点链接的效率高几个数量级,并且可以处理潜在的无需数据集而无需创建中间聚合。 -> **Note:** Transducers aren’t always faster than built-in array methods. The performance benefits tend to kick in when the data set is very large (hundreds of thousands of items), or pipelines are quite large (adding significantly to the number of iterations required using method chains). If you’re after the performance benefits, remember to profile. +> **注意**:Transducers 并不是总是比内置数组方法更快。当数据集非常大(数十万个项目)或 pipeline 非常大(显著增加使用方法链所需的迭代次数)时,性能优势往往会有所提升。如果你追求性能优势,请记住简介。 -Take another look at the example from the introduction. You should be able to build `filter()`, `map()`, and `toArray()` using the example code as a reference and make this code work: +再看看介绍中的例子。你应该能使用示例代码作为参考构建 `filter()`、`map()` 和 `toArray()`,并使此代码工作: -``` +```js const friends = [ { id: 1, name: 'Sting', nearMe: true }, { id: 2, name: 'Radiohead', nearMe: true }, @@ -550,13 +543,13 @@ const getFriendsNearMe = compose( const results2 = toArray(getFriendsNearMe, friends); ``` -In production, you can use transducers from [Ramda](http://ramdajs.com/), [RxJS](https://github.com/ReactiveX/rxjs), [transducers-js](https://github.com/cognitect-labs/transducers-js), or [Mori](https://github.com/swannodette/mori). +在生产中,你可以使用 [Ramda](http://ramdajs.com/)、[RxJS](https://github.com/ReactiveX/rxjs)、[transducers-js](https://github.com/cognitect-labs/transducers-js) 或者 [Mori](https://github.com/swannodette/mori)。 -All of those work a little differently than the example code here, but follow all the same fundamental principles. +所有上面的这些都与这里的示例代码略有不同,但遵循所有相同的基本原则。 -Here’s an example from Ramda: +一下是 Ramda 的一个例子: -``` +```js import { compose, filter, @@ -583,19 +576,21 @@ const result = into([], doubleEvens, arr); console.log(result); // [4, 8, 12] ``` -Whenever I need to combine a number of operations, such as `map`, `filter`, `chunk`, `take`, and so on, I reach for transducers to optimize the process and keep the code readable and clean. Give them a try. 
+每当我们需要组个一些操作时,例如 `map`、`filter`、`chunk`、`take` 等,我会深入 transducer 以优化处理过程并保持代码的可读性和清爽。来试试吧。 + +### 在 EricElliottJS.com 上可以了解到更多 -### Learn More at EricElliottJS.com +视频课程和函数式编程已经为 EricElliottJS.com 的网站成员准备好了。如果你还不是当中的一员,[现在就注册吧](https://ericelliottjs.com/)。 -Video lessons on functional programming are available for members of EricElliottJS.com. If you’re not a member, [sign up today](https://ericelliottjs.com/). +[![](https://user-gold-cdn.xitu.io/2019/1/8/16829198165fd961?w=800&h=257&f=jpeg&s=27661)](https://ericelliottjs.com/product/lifetime-access-pass/) * * * -**_Eric Elliott_ is the author of [“Programming JavaScript Applications”](http://pjabook.com) (O’Reilly), and cofounder of the software mentorship platform, [DevAnywhere.io](https://devanywhere.io/). He has contributed to software experiences for _Adobe Systems, Zumba Fitness, The Wall Street Journal, ESPN, BBC_, and top recording artists including _Usher, Frank Ocean, Metallica_, and many more.** +**_Eric Elliott_ 是 [“编写 JavaScript 应用”](http://pjabook.com)(O’Reilly)以及[“跟着 Eric Elliott 学 Javascript”](http://ericelliottjs.com/product/lifetime-access-pass/) 两书的作者。他为许多公司和组织作过贡献,例如 *Adobe Systems*、*Zumba Fitness*、*The Wall Street Journal*、*ESPN* 和 *BBC* 等,也是很多机构的顶级艺术家,包括但不限于 *Usher*、*Frank Ocean* 以及 *Metallica*。** -_He works remote from anywhere with the most beautiful woman in the world._ +大多数时间,他都在 San Francisco Bay Area,同这世上最美丽的女子在一起。 -Thanks to [JS_Cheerleader](https://medium.com/@JS_Cheerleader?source=post_page). +感谢 [JS_Cheerleader](https://medium.com/@JS_Cheerleader?source=post_page)。 > 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 From a97ed1f343dd3c7b45c7d8ec1620d86b35c0142d Mon Sep 17 00:00:00 2001 From: LeviDing Date: Tue, 8 Jan 2019 21:56:53 +0800 Subject: [PATCH 33/54] Create top-javascript-frameworks-and-topics-to-learn-in-2019.md --- ...-frameworks-and-topics-to-learn-in-2019.md | 195 ++++++++++++++++++ 1 file changed, 195 insertions(+) create mode 100644 TODO1/top-javascript-frameworks-and-topics-to-learn-in-2019.md diff --git a/TODO1/top-javascript-frameworks-and-topics-to-learn-in-2019.md b/TODO1/top-javascript-frameworks-and-topics-to-learn-in-2019.md new file mode 100644 index 00000000000..3b226e436f7 --- /dev/null +++ b/TODO1/top-javascript-frameworks-and-topics-to-learn-in-2019.md @@ -0,0 +1,195 @@ +> * 原文地址:[Top JavaScript Frameworks and Topics to Learn in 2019](https://medium.com/javascript-scene/top-javascript-frameworks-and-topics-to-learn-in-2019-b4142f38df20) +> * 原文作者:[Eric Elliott](https://medium.com/@_ericelliott) +> * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) +> * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/top-javascript-frameworks-and-topics-to-learn-in-2019.md](https://github.com/xitu/gold-miner/blob/master/TODO1/top-javascript-frameworks-and-topics-to-learn-in-2019.md) +> * 译者: +> * 校对者: + +# Top JavaScript Frameworks and Topics to Learn in 2019 + +![](https://cdn-images-1.medium.com/max/2560/1*RFPEzZmTByjDmScp1sY8Jw.png) + +Image: Jon Glittenberg Happy New Year 2019 (CC BY 2.0) + +It’s that time of year again: The annual overview of the JavaScript tech ecosystem. Our aim is to highlight the learning topics and technologies with the highest potential job ROI. What are people using in the workforce? What do the trends look like? 
We’re not attempting to pick what’s best, but instead using a data-driven approach to help you focus on what might actually help you land a job when the interviewer asks you, “do you know __(fill in the blank)__?” + +We’re not going to look at which ones are the fastest, or which ones have the best code quality. We’ll assume they’re all speed demons and they’re all good enough to get the job done. The focus is on one thing: What’s actually being used at scale? + +### Component Frameworks + +The big question we’ll look at is the current state of component frameworks, and we’re going to focus primarily on the big three: React, Angular, and Vue.js, primarily because they have all broken far ahead of the rest of the pack in terms of workplace adoption. + +Last year I noted how fast Vue.js was growing and mentioned it might catch Angular in 2018. That didn’t happen, but it’s still growing very quickly. I also predicted it would have a much harder time converting React users because React has a much stronger user satisfaction rate than Angular — React users simply don’t have a compelling reason to switch. That played out as expected in 2018. React kept a firm grip on its lead in 2018. + +Interestingly, all three frameworks are still growing exponentially, year over year. + +#### Prediction: React Continues to Dominate in 2019 + +React still has [much higher satisfaction ratings than Angular](https://2018.stateofjs.com/front-end-frameworks/overview/) for the third year we’ve been tracking it, and it’s not giving up any ground to challengers. I don’t currently see anything that could challenge it in 2019. Unless something crazy big comes along and disrupts it, React will be the framework to beat again at the end of 2019. + +Speaking of React, it just keeps getting better. The new [React hooks API](https://reactjs.org/docs/hooks-reference.html) replaced the `class` API I’ve been merely tolerating since React 0.14. (The `class` API still works, but the hooks API is really _much better_). React’s great API improvements, like better support for code splitting and concurrent rendering (see [details](https://reactjs.org/blog/2018/11/13/react-conf-recap.html)), are going to make it really hard to beat in 2019. React is now without a doubt, the most developer friendly front-end framework in the space. I couldn’t recommend it more. + +#### Data Sources + +We’ll look at a handful of key pieces of data to gauge interest and use in the industry: + +1. **Google Search trends.** Not my favorite indicator, but good for a big picture view. +2. **Package Downloads.** The aim here is to catch real users in the act of using the framework. +3. **Job board postings from Indeed.com.** Using the same methodology from previous years for consistency. + +#### Google Search Trends + +![](https://cdn-images-1.medium.com/max/800/1*DPlan5kEE81FW0eUA3Y3oQ.png) + +Framework search trends: Jan 2014 — Dec 2018 + +React overtook Angular in the search trends in January 2018, and held its lead through the end of the year. Vue.js now holds a visible position on the graph, but still small factor in the search trends. For comparison: last year’s graph: + +![](https://cdn-images-1.medium.com/max/800/1*q0MyFu6pldf-guTIQweTSQ.png) + +Framework search trends: Jan 2014 — Dec 2017 + +#### Package Downloads + +Package downloads give us a fair indication of what’s actually being used, because developers frequently download the packages they need while they’re working. 
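+
+If you want to sanity-check numbers like the ones charted below, the raw counts are easy to pull yourself. Here is a minimal sketch: it assumes npm's public download-counts endpoint (`https://api.npmjs.org/downloads/point/last-month/<package>`) and a `fetch`-capable environment, so treat it as an illustration rather than the exact methodology behind these charts:
+
+```js
+// Rough sketch: fetch last-month download counts for a few packages
+// from npm's public download-counts endpoint (assumed; adjust as needed).
+const packages = ['react', 'vue']; // scoped names like '@angular/core' may need extra encoding
+
+const lastMonth = name =>
+  fetch(`https://api.npmjs.org/downloads/point/last-month/${name}`)
+    .then(res => res.json())
+    .then(({ downloads }) => ({ name, downloads }));
+
+Promise.all(packages.map(lastMonth))
+  .then(counts => console.table(counts))
+  .catch(console.error);
+```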
+ +Overly-clever readers will note that sometimes they download these things from their internal corporate package repos, to which I answer, “why yes, that does happen — to all three frameworks.” All of them have established a foothold in the enterprise, and I’m confident in the averaging power of this data at scale. + +**React Monthly Downloads: 2014–2018** + +![](https://cdn-images-1.medium.com/max/800/1*IV9KdeP1hOwxSVZdwoKKcQ.png) + +**Angular Monthly Downloads: 2014–2018** + +![](https://cdn-images-1.medium.com/max/800/1*IxS8G-0oixLWL0F2NDIYng.png) + +**Vue Monthly Downloads: 2014–2018** + +![](https://cdn-images-1.medium.com/max/800/1*uvg4_D5NyuIiyUI_H72S2w.png) + +Let’s look at a quick visual comparison of the share of downloads: + +![](https://cdn-images-1.medium.com/max/800/1*THtgoY-LQTvIm8ezl3SGiQ.png) + +_“But you’re forgetting all about Angular 1.0! It’s still huge in the enterprise.”_ + +No, I’m not. Angular 1.0 is still used a lot in the enterprise in the same way that Windows XP is still used a lot in the enterprise. It’s definitely out there in enough numbers to notice, but the new versions have long since dwarfed it to the point that it’s now less significant than the other frameworks. + +Why? Because the software industry at large, and over-all use of JavaScript _across all sectors (including the enterprise)_ is growing so fast that new installs quickly dwarf old installs, even if the legacy apps _never upgrade._ + +For evidence, just take another look at those download charts. More downloads in 2018 than in the previous years _combined._ + +#### Job Board Postings + +Indeed.com aggregates job postings from a variety of job boards. Every year, _we tally the job postings¹_ mentioning each framework to give you a better idea of what people are hiring for. Here’s what it looks like this year: + +![](https://cdn-images-1.medium.com/max/800/1*GkJY82i3ryEZW1akwUSQoA.png) + +Dec 2018 Job Board Postings Per Framework + +* React: 24,640 +* Angular: 19,032 +* jQuery: 14,272 +* Vue: 2,816 +* Ember (not pictured): 2,397 + +Again, a lot more total jobs this year than the previous year. I dropped Ember because it’s clearly not growing at the rate that everything else is. I wouldn’t recommend learning it to prepare for a future job placement. jQuery and Ember jobs didn’t change much, but everything else grew a lot. + +Thankfully, the number of new people joining the software engineering field has grown a lot as well in 2018, but we need to continue to hire and train junior developers (meaning we need [qualified senior developers to mentor them](https://devanywhere.io)), or we won’t keep pace with the explosive job growth. For comparison, here’s last year’s chart: + +![](https://cdn-images-1.medium.com/max/800/1*zO-KgLZ5kDbv2sug6js9ug.png) + +Average salary climbed again in 2018, from $110k/year to $111k/year. Anecdotally, the salary listings are lagging new hire expectations, and hiring managers will struggle to hire and retain developers if they don’t adjust for the developer’s market and offer larger pay increases. Retention and poaching continues to be a huge problem in 2018 as employees jump ship for higher paying jobs, elsewhere. + +1. **_Methodology:_** _Job searches were conducted on Indeed.com. To weed out false positives, I paired searches with the keyword “software” to strengthen the chance of relevance, and then multiplied by ~1.5 (roughly the difference between programming job listings that use the word “software” and those that don’t.) 
All SERPS were sorted by date and spot checked for relevance. The resulting figures aren’t 100% accurate, but they’re good enough for the relative approximations used in this article._ + +### JavaScript Fundamentals + +I say it every year: Focus on the fundamentals. This year you’re getting some extra help. All software development is composition: The act of breaking down complex problems into smaller problems, and composing solutions to those smaller problems to form your application. + +But when I ask JavaScript interviewees the most fundamental questions in software engineering, “what is function composition?” and “what is object composition?” they almost invariably can’t answer the questions, even though they do them every day. + +I have long thought this was a very serious problem that needs to be addressed, so I wrote a book on the topic: [**“Composing Software”**](https://leanpub.com/composingsoftware). + +> If you learn nothing else in 2019, learn how to compose software well. + +#### On TypeScript + +TypeScript continued to grow in 2018, and it continues to be overrated because [type safety does not appear to be a real thing](https://medium.com/javascript-scene/the-shocking-secret-about-static-types-514d39bf30a3) (does not appear to reduce production bug density by much), and [type inference](https://medium.com/javascript-scene/you-might-not-need-typescript-or-static-types-aa7cb670a77b) in JavaScript without TypeScript’s help is really quite good. You can even use the TypeScript engine to get type inference in normal JavaScript using Visual Studio Code. Or install the Tern.js plugins for your favorite editor. + +TypeScript continues to fall flat on its face for most higher order functions. Maybe I just don’t know how to use it correctly (after years living with it on a regular basis — in which case, they really need to improve usability, documentation, or both), but I still don’t know how to properly type the map operation in TypeScript, and it seems to be oblivious to anything going on in a [transducer](https://medium.com/javascript-scene/transducers-efficient-data-processing-pipelines-in-javascript-7985330fe73d). It fails to catch errors, and frequently complains about errors that aren’t really errors at all. + +It just isn’t flexible or full featured enough to support how I think about software. But I’m still holding out hope that one day it will add the features we need, because as much as its shortcomings frustrate me while trying to use it for real projects, I also love the potential of being able to properly (and selectively) type things when it’s really useful. + +My current rating: Very cool in very select, restricted use-cases, but overrated, clumsy, and very low ROI for large production apps. Which is ironic, because TypeScript bills itself as “JavaScript that scales”. Perhaps they should add a word: “JavaScript that scales awkwardly.” + +What we need for JavaScript is a type system modeled more after Haskell’s, and less after Java’s. 
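+
+To make the earlier point about inference concrete: you do not need TypeScript files to benefit from the TypeScript engine. The following is a minimal sketch of plain JavaScript checked by the TypeScript language service (in VS Code, for example), using a `// @ts-check` pragma and a JSDoc annotation; it illustrates the workflow rather than any particular setup:
+
+```js
+// @ts-check
+
+/**
+ * @param {number} a
+ * @param {number} b
+ */
+const add = (a, b) => a + b;
+
+const total = add(2, 3); // hover in the editor: `total` is inferred as `number`
+
+add('7', 1); // flagged by the language service: string is not assignable to number
+```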
+ +#### Other JavaScript Tech to Learn + +* [GraphQL](https://graphql.org/) to query services +* [Redux](https://redux.js.org/) to manage app state +* [redux-saga](https://github.com/redux-saga/redux-saga) to isolate side-effects +* [react-feature-toggles](https://github.com/paralleldrive/react-feature-toggles) to ease continuous delivery and testing +* [RITEway](https://github.com/ericelliott/riteway) for beautifully readable unit tests + +### The Rise of the Crypto Industry + +Last year I predicted that blockchain and fin-tech would be big technologies to watch in 2018. That prediction was spot on. One of the major themes of 2017–2018 was the rise of crypto and building the foundations of **the internet of value.** Remember that phrase. You’re going to hear it a lot, soon. + +If you’re like me and you’ve been following decentralized apps since the P2P explosion, this has been a long time coming. Now that Bitcoin lit the fuse and showed how decentralized apps can be self-sustaining using cryptocurrencies, the explosion is unstoppable. + +Bitcoin has grown several orders of magnitude in just a few years. You may have heard that 2018 was a “crypto winter”, and got the idea that the crypto industry is in some sort of trouble. That’s complete nonsense. What really happened was at the end of 2017, Bitcoin hit another 10x multiple in an epic exponential growth curve, and the market pulled back a bit, which happens every time the Bitcoin market cap grows another 10x. + +![](https://cdn-images-1.medium.com/max/800/1*2nlit12SUIYN93RdmBNoHQ.png) + +Bitcoin 10x Inflection Points + +In this chart, each arrow starts at another 10x point, and points to the low point on the price correction. + +Fundraising for crypto ICOs (Initial Coin Offerings) peaked in early 2018, and the 2017–2018 funding bubble brought a rush of new job openings into the ecosystem, peaking at over 10k open jobs in January 2018. It has since settled back to about 2,400 (according to Indeed.com), but we’re still very early and this party is just getting started. + +![](https://cdn-images-1.medium.com/max/800/1*FUZjNmtKuVNSAK-DnoGtoQ.png) + +There is a lot more to say about the burgeoning crypto industry, but that’s a whole other blog post. If you’re interested, read [“Blockchain Platforms and Tech to Watch in 2019”](https://medium.com/the-challenge/blockchain-platforms-tech-to-watch-in-2019-f2bfefc5c23). + +#### Other Tech to Watch + +As predicted last year, these technologies continued to explode in 2018: + +**AI/Machine Learning** is in full swing with 30k open jobs at the close of 2018, deep fakes, incredible generative art, amazing video editing capabilities from the research teams at companies like Adobe — there has never been a more exciting time to explore AI. + +**Progressive Web Applications** are quickly just becoming how modern web apps are properly built — added features and support from Google, Apple, Microsoft, Amazon, etc. It’s incredible how quickly I’m taking the PWAs on my phone for granted. For example, I don’t have the Twitter Android app installed on my phone anymore. I exclusively use [the Twitter PWA instead](https://mobile.twitter.com/home). + +**AR** (Augmented Reality) **VR** (Virtual Reality) **MR** (Mixed Reality) all got together and joined forces like Voltron to become **XR** (eXtended Realty). The future of full-time XR immersion is coming. I’m predicting within 5–10 years for mass adoption of consumer XR glasses. Contact lenses within 20. 
Thousands of new jobs opened up in 2018, and this industry will continue to explode in 2019. + +- YouTube 视频链接:https://youtu.be/JaiLJSyKQHk + +**Robotics, Drones, and Autonomous Vehicles** Autonomous flying drones are already here, autonomous robots continue to improve, and more autonomous vehicles are sharing the road with us at the end of 2018. These technologies will continue to grow and reshape the world around us through 2019 and into the next 20 years. + +**Quantum Computing** progressed admirably in 2018, as predicted, and as predicted, it did not go mainstream, yet. In fact, my prediction, “it may be 2019 or later before the disruption really starts” was likely very optimistic. + +Researchers in the crypto space have paid extra attention to quantum-safe encryption algorithms (quantum computing will invalidate lots of today’s assumptions about what is expensive to compute, and crypto relies on things being expensive to compute), but in spite of a constant flood of interesting research progress in 2018, a recent report [puts things into perspective](https://www.theregister.co.uk/2018/12/06/quantum_computing_slow/): + +> “Quantum computing has been on Gartner’s hype list 11 times between 2000 and 2017, each time listed in the earliest stage of the hype cycle and each time said to be more than a decade away.” + +This reminds me of early AI efforts, which began to heat up in the 1950’s, had limited but interesting success in the 1980’s and 1990’s, but only just started getting really mind-blowing circa 2010. + +* * * + +> We’re BUIDLing the future of celebrity digital collectables: [cryptobling](https://docs.google.com/forms/d/e/1FAIpQLScrRX9bHdIYbQFI5L3hEgwQaDEdjo8t8glqlyObZexWjssxNQ/viewform). + +* * * + +**_Eric Elliott_ 是 [“编写 JavaScript 应用”](http://pjabook.com)(O’Reilly)以及[“跟着 Eric Elliott 学 Javascript”](http://ericelliottjs.com/product/lifetime-access-pass/) 两书的作者。他为许多公司和组织作过贡献,例如 *Adobe Systems*、*Zumba Fitness*、*The Wall Street Journal*、*ESPN* 和 *BBC* 等,也是很多机构的顶级艺术家,包括但不限于 *Usher*、*Frank Ocean* 以及 *Metallica*。** + +大多数时间,他都在 San Francisco Bay Area,同这世上最美丽的女子在一起。 + +感谢 [JS_Cheerleader](https://medium.com/@JS_Cheerleader?source=post_page)。 + +> 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 + + +--- + +> [掘金翻译计划](https://github.com/xitu/gold-miner) 是一个翻译优质互联网技术文章的社区,文章来源为 [掘金](https://juejin.im) 上的英文分享文章。内容覆盖 [Android](https://github.com/xitu/gold-miner#android)、[iOS](https://github.com/xitu/gold-miner#ios)、[前端](https://github.com/xitu/gold-miner#前端)、[后端](https://github.com/xitu/gold-miner#后端)、[区块链](https://github.com/xitu/gold-miner#区块链)、[产品](https://github.com/xitu/gold-miner#产品)、[设计](https://github.com/xitu/gold-miner#设计)、[人工智能](https://github.com/xitu/gold-miner#人工智能)等领域,想要查看更多优质译文请持续关注 [掘金翻译计划](https://github.com/xitu/gold-miner)、[官方微博](http://weibo.com/juejinfanyi)、[知乎专栏](https://zhuanlan.zhihu.com/juejinfanyi)。 From a97285cf3ce827713e289d06d55a7353c5e9803e Mon Sep 17 00:00:00 2001 From: LeviDing Date: Tue, 8 Jan 2019 22:04:20 +0800 Subject: [PATCH 34/54] Create writing-a-killer-software-engineering-resume.md --- ...ng-a-killer-software-engineering-resume.md | 416 ++++++++++++++++++ 1 file changed, 416 insertions(+) create mode 100644 TODO1/writing-a-killer-software-engineering-resume.md diff --git a/TODO1/writing-a-killer-software-engineering-resume.md b/TODO1/writing-a-killer-software-engineering-resume.md new file mode 100644 index 00000000000..72421593518 --- /dev/null +++ 
b/TODO1/writing-a-killer-software-engineering-resume.md @@ -0,0 +1,416 @@ +> * 原文地址:[How to write a killer Software Engineering résumé](https://medium.freecodecamp.org/writing-a-killer-software-engineering-resume-b11c91ef699d) +> * 原文作者:[Terrence Kuo](https://medium.freecodecamp.org/@terrencekuo) +> * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) +> * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/writing-a-killer-software-engineering-resume.md](https://github.com/xitu/gold-miner/blob/master/TODO1/writing-a-killer-software-engineering-resume.md) +> * 译者: +> * 校对者: + +# How to write a killer Software Engineering résumé + +An in-depth analysis of the résumé that got me interviews at Google, Facebook, Amazon, Microsoft, Apple, and more. + +![](https://cdn-images-1.medium.com/max/800/1*0yse40ucjmpdlaBqlY0fTg.png) + +This résumé got me interviews at Google, Facebook, Amazon, Microsoft, and Apple. + +![](https://cdn-images-1.medium.com/max/800/1*zYjyIdGdfDPN8gKQnRvqRw.jpeg) + +2017 Senior Year Résumé + +I obtained these interviews by sending my résumé to the résumé **black hole**, also known as applying online. + +![](https://cdn-images-1.medium.com/max/800/1*pM-Aipc_Y9NzJalOhSW5OQ.jpeg) + +Applying online is the most common way people go about applying for a job and therefore the least effective way to land an interview due to competition. Yet that is exactly how I obtained all my interviews. + +How did I accomplish this? + +In this article, I will go through a line-by-line analysis of my résumé for the following purposes: + +* explaining the choices that I made in creating my résumé +* why I believe this résumé worked to help me land those interviews, and +* how you can create an even better résumé! + +I decided to write this article because I struggled a lot with landing interviews when I first started looking for a job. It would have been extremely helpful for me to have a real-life example résumé to look at. + +This article is organized into the following sections: + +1. [**The All Too Familiar Way of Not Landing an Interview**](https://medium.com/p/b11c91ef699d#9154) - a short anecdote of my frustrations when I first started applying for jobs +2. [**Evaluating the Options: Moving Forward**](https://medium.com/p/b11c91ef699d#d859) - a reflection on different strategies to improve the odds of landing interviews +3. [**Learning How to Write a Killer Résumé By Example**](https://medium.com/p/b11c91ef699d#0512) - the step-by-step analysis of my résumé with each of the following sections corresponding to my résumé: + +* [The Essentials from a Glance](https://medium.com/p/b11c91ef699d#95e6) +* [Who Are You](https://medium.com/p/b11c91ef699d#3868) +* [Contact Information](https://medium.com/p/b11c91ef699d#ebfd) +* [Education](https://medium.com/p/b11c91ef699d#3fe8) +* [Employment](https://medium.com/p/b11c91ef699d#6bbf) +* [Personal Projects](https://medium.com/p/b11c91ef699d#ed02) +* [Skills](https://medium.com/p/b11c91ef699d#1ee1) + +### The All Too Familiar Way of Not Landing an Interview + +#### Applying Online + +You probably know the link that every company provides for online applications. It’s the classic career site that shows you a bunch of job titles which you think you are totally qualified for until you open the job description and read the minimum requirements. 
+ +![](https://cdn-images-1.medium.com/max/800/1*cpQrWe331z5_1jtlPw4rKQ.jpeg) + +Google Career Page + +A job description with a bunch of words that you have never heard of, may have heard of, or hoped you had heard of. And it has an innocent-looking “**Apply”** button**.** + +![](https://cdn-images-1.medium.com/max/800/1*pBvMe2m7SAd2-m00FMDT7w.jpeg) + +Google Job Description + +![](https://cdn-images-1.medium.com/max/800/1*-FfYacrlV7MzJpfdOGuA6A.png) + +Despite the uncertainty you may feel about your qualifications, you apply anyway because you want a job. + +So you fill out the application form, press submit — and wait and hope for a positive response. + +Your results will be varied: + +1. Phone Interview 🎉 (yay, a chance at employment!!!) +2. Immediate Rejection 😢 (darn, back to the drawing board) +3. No reply 😞 (gosh, at least give me the courtesy of having some closure) + +#### Repeat Until Success… Right? + +Sadly, this is the typical process that many people go through when looking for a job/internship. + +Apply to a couple of companies. Get a couple of rejections or no replies. Apply to a couple more companies. Get a couple more rejections or no replies. Over, and over, and over again. + +Why do we do this to ourselves? We spend all this time doing the same repetitive task to obtain the same, disappointing results. + +Because this is what everyone does to get an interview, right? Because at least we’re working towards the right direction and have a glimmer of hope, right? How else are you supposed to get an interview? + +### Evaluating the Options: Moving Forward + +> “Discouragement and failure are two of the surest stepping stones to success.” + +> - Dale Carnegie, (author of “How to Win Friends and Influence People”) + +We can think of approaching the problem of not getting interviews in two ways: + +1. Putting your application/ résumé under the microscope +2. Questioning the process in which you go about obtaining an interview + +This article focuses on the former, because no matter what avenue you end up taking to get an interview, **essentially every company utilizes your résumé as a basis for evaluation**. Therefore, we will examine my résumé under a microscope and focus on learning how to write a remarkable résumé. + +Getting an interview via online application is extremely challenging because your résumé has to pass numerous stages before it gets into the hands of the hiring manager. + +It has to bypass [online keyword filters](https://www.themuse.com/advice/a-job-hunters-guide-to-getting-your-resume-past-the-ats-and-into-human-hands), stand out to a recruiter who reviews it for about 6 seconds and meet the expectations of the hiring manager who decides whether you are worth interviewing. + +Yet, despite all those hurdles, I obtained all my interviews by applying online. **How?** _Trial and error_. I’ve applied to hundreds of different software engineering positions since my sophomore year of college. + +When I first started applying, I faced a staggering number of rejections, but over time I learned how to adapt. By the time I was a senior, I was extremely successful in landing interviews from almost every company I applied to. + +The résumé that landed me all those interviews is the **exact** one in this article. + +It took me **four years** of iteration and real-life testing to get to this point. From this experience, I have come up with a list of **résumé writing principles** to help you write an even better software résumé. 
These are principles that have helped me land my dream job and are principles that can help you land yours. + +While it took me **four** years of college to figure this all out, you don’t have to go through all the leaps and bounds because you can learn all of it right here, right now. + +My goal is to be the one-stop hub for all your questions on how to obtain a software engineering interview. That way, you don’t have to waste countless hours cross-referencing Google search results to find the best answer on how to write a software engineering résumé that gets interviews. + +Your valuable time could be better spent on writing your killer résumé. + +So start here and now with this article. Reap the benefits from my past experiences and let’s begin the step-by-step walkthrough of my résumé! + +### Learning How to Write a Killer Résumé — By Example + +> “As to methods there may be a million and then some, but principles are few. The man who grasps principles can successfully select his own methods. The man who tries methods, ignoring principles, is sure to have trouble.” + +> - Ralph Waldo Emerson + +Let’s take another look at this résumé: + +![](https://cdn-images-1.medium.com/max/800/1*0ZO5y_zemdcdsaEUdrI6Vw.jpeg) + +Résumé: Essential Sections Highlighted + +#### The Essentials from a Glance + +#### One-page résumé + +Recruiters do not have all day to read your résumé. On average they view it for less than 6 seconds. Keep it **short** and **concise**. + +#### Sections (Header, Education, Employment, Software Projects, Skills) + +Place sections in **order of importance** from top to bottom. The ‘[Personal Projects](https://medium.com/p/b11c91ef699d#ed02)’ section is a unique, must-have for people looking for a software engineering position. + +#### Consistent layout + font per section + +Make sure each section contains a uniform look. Consistent style is important as it enhances the readability. **Readability** is essential. + +So why does this résumé work? Let’s explore the numbered bullet points. + +#### Who Are You (1) + +_Target Audience: Anyone writing a_ résumé + +![](https://cdn-images-1.medium.com/max/800/1*LXF-gnE-zVuCku6-m6kawA.jpeg) + +Résumé: Name Section + +Starting off real easy. Your name. Place your name at the top of your résumé in a **large legible font.** + +No need to be all fancy about it with extravagant colors or fancy fonts. Plain and simple does the trick. You want the recruiter to see this easily from a mile away because you want them to know who you are. A recruiter who has to do minimum work is a happy recruiter. A happy recruiter is one who is more likely to give you an interview. + +**Recap:** Make it ridiculously easy for the recruiter to read and find your name. + +#### Contact Information (2) + +_Target Audience: Anyone writing a résumé_ + +![](https://cdn-images-1.medium.com/max/800/1*5WolbWQLpe0zXUa3WAk8uw.jpeg) + +Résumé: Contact Section + +Your contact info should be as easy as identifying your name. This is so important. Of all the things in the world, **please do not mess this one up** because how else on earth will the recruiter contact you? + +**Recap:** Put in the correct contact information or you’ll never be contacted. 
+ +#### Education (3) + +_Target Audience: Anyone writing a résumé with a degree_ + +![](https://cdn-images-1.medium.com/max/800/1*hpg6j5IG6cMx95LFctHCFg.jpeg) + +Résumé: Education Section: Header Subsection + +If you are attending or attended college, this should be the first section of your résumé, because going to college is a huge accomplishment. According to the U.S. Bureau of Labor Statistics, only “66.7 percent of 2017 high school graduates age 16 to 24 enrolled in colleges or universities”. So be proud of it and include it! + +Right off the bat, this tells the recruiter that you are invested in education and learning, which is crucial because technology is continuously changing. Furthermore, this information serves as an indicator of your successes, so be sure to put it down. + +**Recap:** Put down where you got educated. + +![](https://cdn-images-1.medium.com/max/800/1*BRjtOe_ZZpT4xvy_72gkLw.jpeg) + +Résumé: Education Section: Coursework Subsection + +Be sure to include **relevant** coursework corresponding to the position that you are applying for. While a course on the _History of Italian Gastronomy_ sounds exceptionally appetizing, it doesn’t have a place in a résumé that is trying to get you a job in computer science. + +This will significantly improve the ability of the recruiter and the hiring manager looking at your résumé in deciding whether you are a good fit for the position. And as previously mentioned, a happy recruiter is more likely to give you an interview. + +**Recap:** Only include relevant coursework. + +![](https://cdn-images-1.medium.com/max/800/1*MNnvZOFyFCuCorWiXwqMZA.jpeg) + +Résumé: Education Section: GPA Subsection + +Okay. GPA. Before we talk about this, let’s remind ourselves of the main purpose of a résumé. + +The main purpose of a résumé is to highlight your knowledge, skills, and accomplishments succinctly. You want to include things on your résumé that you are proud of, but also things that will impress. You want to paint a picture of yourself in the best light possible so that recruiters and hiring managers want to interview you. + +Now back to your GPA. It should be fairly obvious whether or not your GPA is impressive. If your GPA is below a 3.0, don’t put it on your résumé. There’s nothing wrong with excluding your GPA from your résumé if it only harms your chances. + +If you have a GPA between 3.0–3.2, this is a judgment call. From personal experience, I have talked to some companies that require a minimum GPA of 3.2, but these were primarily financial or quantitative companies. Most software companies have little regard for your GPA. If you have anything above a 3.2, I would place it on your résumé. + +If you have a low GPA, fear not, as this gives you the opportunity to be creative! My overall GPA was a 3.2 due to poor grades from my freshman engineering prerequisites and humanities classes. But once I finished and started taking courses within my major, my in-major GPA (GPA calculated from courses in my major) was a 3.44, which was significantly higher. So that’s what I put down (but make sure to qualify it as a departmental GPA). + +There are many ways of going about presenting yourself in the best light possible, even when it may seem like the odds are stacked against you. I only provided one example of accomplishing this, but there are many more ways waiting to be discovered by you. Fully embrace your failures and accomplishments because they make up who are you. 
Be honest and truthful, and always focus on highlighting the best parts about yourself. + +**Recap:** Your GPA does not define you. The purpose of your résumé is to present yourself in the best light. Never forget that! Be creative when going about this and DON’T LIE. + +#### Employment (4) + +_Target Audience: Students with software engineering work experience_ + +![](https://cdn-images-1.medium.com/max/800/1*L7Yd5wDpNVO5hhwgv054Hw.jpeg) + +Résumé: Employment Section + +If you are a college student without any experience, don’t be afraid! This was my senior year résumé when I was applying for a full-time job. I was fortunate enough to have accumulated relevant work experience from summer internships, but this isn’t absolutely necessary to get an interview. If you find that you don’t have much to put in section, jump down to the ‘[Personal Projects](https://medium.com/p/b11c91ef699d#ed02)’ section. + +![](https://cdn-images-1.medium.com/max/800/1*EXOSXDhs2gHZPiIVtpDtxg.jpeg) + +Résumé: Employment Section: Header Subsection + +While it is great to have past work experience, not all work experience is treated equally when it comes to looking for a job in software engineering. Focus only on including work experience that has _relevance_ to the job that you are applying for. For instance, if you have experience working as a cashier in retail or a waiter in the food industry, don’t include it! Unfortunately, your abilities to handle money or serve food did not provide any indication that you will succeed as a software engineer. + +A recruiter’s goal is to match candidates with jobs that fit the candidates’ skill sets. Therefore it is essential only to include past work experience that has some relation to the position that you are currently applying to, on your résumé. + +Part of accomplishing this means creating a collection of various résumés, each tailored specifically for the different job that you are interested in. This is analogous to the college application process, where you had to write separate essays for each university that you applied to. Each college has its own values, culture, and vision, making it nearly impossible to write a generic, one-size-fits-all college essay. Therefore, tailor your résumé to the job that you are applying for. + +Lastly, a note on dates. Order your experiences in descending order starting with your most recent experiences. For undergraduates, this means being mindful of including experiences that are both recent and relevant. Sadly, no one cares about whatever accomplishments you had in middle school or high school. If the experience is outdated, leave it out. + +**Recap:** Have various versions of your résumé tailored for each job you are applying for. There is no one-size-fits-all résumé. + +![](https://cdn-images-1.medium.com/max/800/1*NedTfy9JUsT7Ta_6WX_Yuw.jpeg) + +Résumé: Employment Section: Description Subsection + +The hardest part about résumé writing is having descriptions that fully encapsulate your accomplishments from past work experiences in a meaningful and impressive way. + +What does it mean for your descriptions to be meaningful and impressive? It means getting the recruiter to think: “This is someone that has the skill sets we are looking for. This is someone that has made a significant impact in their past jobs. 
This is someone we would like to interview and potentially hire.” + +**The primary objective of the Employment section is to show the impact and value that you had while working at an established institution.** Your goal is to show recruiters that you are a candidate that can get things done and do them well. + +To best showcase my accomplishments in my résumé, I adopted the following powerful formula, created by the Former SVP of People Operations at Google, Laszlo Bock: + +> “Accomplished [X] as measured by [Y] by doing [Z]” — Laszlo Bock + +You can see this very clearly in the very first bullet point of this section on my résumé. + +**Improved device’s battery lifespan by 8% by integrating a fuel gauge sensor and establishing a battery saving state** + +Let’s break it down: + +**Accomplished [X]-** Improved device’s battery lifespan + +**Measured by [Y]-** by 8% + +**By Doing [Z]-** integrating a fuel gauge sensor and establishing a battery saving state + +I leveraged this formula in some shape or form in almost every sentence in my résumé. + +To help you along this process, below is a word bank of excellent verbs you can and should use: + +![](https://cdn-images-1.medium.com/max/800/1*aAEYhAGQkVE7g4LP3sZgYw.png) + +Verb Wordbank + +Here are some examples of fill-in the blank sentences that I have come up with for you to get started: + +* Reduced _____ by _____ by _____. +* Redesigned _____ for _____. +* Implemented _____ for _____ by _____. +* Improved _____ by _____ through _____. +* Utilized _____ to _____ for _____. +* Increased _____ by _____ through _____. +* Integrated _____ by _____ for _____. +* Incorporated _____ for _____ by _____. + +**Recap:** Use the “Accomplished [X] as measured by [Y] by doing [Z]” formula. It’s the most effective and most apparent way of showing recruiters/managers your impact. + +![](https://cdn-images-1.medium.com/max/800/1*bn3b7uBhxySeTOCxSwqv2Q.jpeg) + +Résumé: Employment Section: Leveraged Knowledge Subsection + +Lastly, I end each work experience with a **leveraged knowledge** bullet point. The utility behind this last bit is it enables the reader to really get a sense of the technology I am familiar with by explicitly stating the technologies that I used for the project. + +This also allows me to have a concise, but clean ‘Skills’ section located at the bottom of my résumé. Recruiters can then look at the bottom to immediately obtain a sense of my capabilities by seeing which computer languages I am familiar with. If they are looking to see if I have specific knowledge in a particular tool, framework, or library, then they can find this out by looking at my projects. + +**Recap:** Including technologies that you used in your descriptions will help you bypass online keyword filters when applying online. This will also give recruiters a clearer idea of your experiences and knowledge. + +#### Personal Projects (5) + +_Target Audience: Students looking for software engineering internships/full-time positions + Unique section for software engineering applicants_ + +![](https://cdn-images-1.medium.com/max/800/1*bA0WIrjSBHABk3ZOn5dOdQ.jpeg) + +Résumé: Personal Projects Section + +![](https://cdn-images-1.medium.com/max/800/1*bCXKQWaNymLxQs92qbUPkw.jpeg) + +Maybe if I say it enough times, you will understand the importance of this section, **especially for those that do not have work experience**. + +> Personal projects are integral to piquing recruiters and hiring managers interest as it shows you are passionate about programming. 
+ +A personal project can be anything programming related, whether it be a Python script, Java program, web page, mobile application, etc. These projects show that you are genuinely interested in computer science and you have strong desires to work as a software engineer because you are willing to go beyond your schoolwork and create something on your own. + +Taking the initiative to build something on your own is extremely impressive. It shows that you are dedicated to expanding your knowledge of computer science and that you are not afraid of putting in the extra work to do so. Ultimately, it is a fantastic way to demonstrate self-initiative and genuine interest in this field. + +The other benefit of doing personal projects is that you inevitably gain the skills that apply to work in the real world. Things that you don’t usually do at school, but you will do at work such as using standard frameworks/libraries, understanding full-stack web development, creating mobile applications, setting up a development environment, or programming efficiently with Vim. + +> **Tip:** Create a personal website that showcases and documents all of your personal projects. This is a little hack that ‘virtually extends’ your résumé beyond the one-page limit. + +To reiterate one last time, personal projects show your passion and dedication towards developing the necessary skills need for a job that you don’t yet have. This is a **must-have** on any software engineering resume. + +> “Build some iPhone apps, web apps, whatever! Honestly it doesn’t matter that much what you’re building as long as you’re building something. You can build a fairly meaty project in one weekend. This means that with about 3–4 weekends of work, you can make your résumé go from so-so to fantastic. Seriously — I’ve seen lots of people do this.” + +> - Gayle McDowell, former Google Engineer and Author of Cracking the Coding Interview + +If there is a specific company that you **really** want to work at, one of the best ways to stand out is doing a personal project that is directly related to the job that you are applying for. + +I got my internship at Autodesk by taking a free online interactive computer graphics course on Udacity. The course taught me to use a JavaScript library called _three.js_, and it just so happened that there was a software engineering internship opening at Autodesk looking for someone with full-stack website and knowledge in _three.js_ (aka ME). + +A word of caution on this technique. This strategy is not perfect. This only really works for companies like Autodesk which do not have generalized software engineering internships like Google, Facebook, and Microsoft. When starting off early in your career, it is better to generalize and figure out the different disciplines of computer science. Nonetheless, this is an excellent method worth trying if there is a specific company you want. + +**Recap:** Personal projects are imperative. If you haven’t already, start NOW! You have nothing to lose and everything to gain. + +#### Skills (6) + +_Target Audience: Anyone looking for a software engineering job_ + +![](https://cdn-images-1.medium.com/max/800/1*gexMwyf4Q94yfYEzDF2O0Q.jpeg) + +Résumé: Skill Section + +The title explains it all. Keep this section dumb, simple, and clean. List all the relevant skills that you want the recruiter to know you have. The more skills you have listed here that match key technical words in the qualification section of the job description, the better your chances! 
+ +This will allow you to bypass the online keyword scanner easily. **However, this is not a fool-proof method of circumventing the scanner.** Ultimately it is a recruiter who gets their hands on your résumé that decides, but they will also be more inclined to give you an interview if they see you as a good fit for the job! + +A thing to note about the skills section is to NOT simply list all the keywords on the job description just for the sake of showing you’re a good fit. It will come back to bite you as you will be questioned on the skills you claim to know. + +Part of giving yourself some leeway in this is including an indication of your proficiency level. Since you are probably not practicing every language you’ve ever encountered on a day to day basis, including a proficiency level can help the recruiter know your strongest languages at a moment in time and other languages that you are familiar with. + +I’ve opted to use two tiers: + +1. **Proficient** - Languages that I am very familiar with, feel very comfortable using, and can interview with right now. +2. **Familiar** - Languages that I have utilized in the past but may not be as knowledgeable in currently, but can pick back up given time. + +Other valid options include: + +1. Advanced +2. Intermediate +3. Basic + +or + +1. Expert +2. Advanced +3. Intermediate + +or + +1. Fluent +2. Proficient +3. Familiar + +or + +1. Working Knowledge +2. Basic Knowledge + +**Recap:** When applying for a specific job online, cross-reference the job description and add essential technical keywords on your résumé to increase your chance of getting an interview. + +### Key Takeaways + +* Make sure your name and contact information is correct and legible +* Be sure to include your education. If your GPA is low, leave it out or be creative! +* Utilize the “Accomplished [X] as measured by [Y] by doing [Z]” formula to effectively show the impact that you had in your past employment +* Do personal projects — especially if you do not have past experience working in tech + +### Final Thoughts + +While this résumé got me interviews at numerous software engineering companies, there is no guarantee that following all the principles and techniques I have explained here will yield the same results for you. + +This was my senior year résumé in 2017. It is a showcase of my journey and interest in software engineering. Copying it will do you no good, as technology is constantly evolving and the talent search is an ever-changing process. Instead, use this as a reference. + +Use my résumé and this article as a resource to become a better résumé writer and a more effective communicator. Focus on learning how to best convey your skills and achievements to others. This in itself is an invaluable, lifelong skill that you will need wherever you go. + +As you write your résumé, please remember — be yourself! + +Your résumé is a list of **your** own accomplishments, achievements, and interests. Your goal is to craft the most polished version of yourself. Lastly, have fun and enjoy the process! + +* * * + +For anyone interested in using this resume template, I obtained it from [CareerCup](https://careercup.com/resume) which was founded by Gayle Laakmann McDowell, author of _Cracking The Coding Interview_. + +From personal experience, what’s **most important** is the content. The resume writing principles discussed above can be applied to any template! + +* * * + +10.24.18 - Thank you all for your support. 
For a **limited time only**, I will be randomly selecting **3 people** from my newsletter at the end of each week and offering them the opportunity to get **free feedback** on their resumes. If you are a student looking for a summer internship, this could be a great opportunity for you! + +[Click here to subscribe](https://goo.gl/forms/qbQrZ6LNXsnPW0j42). + +> 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 + + +--- + +> [掘金翻译计划](https://github.com/xitu/gold-miner) 是一个翻译优质互联网技术文章的社区,文章来源为 [掘金](https://juejin.im) 上的英文分享文章。内容覆盖 [Android](https://github.com/xitu/gold-miner#android)、[iOS](https://github.com/xitu/gold-miner#ios)、[前端](https://github.com/xitu/gold-miner#前端)、[后端](https://github.com/xitu/gold-miner#后端)、[区块链](https://github.com/xitu/gold-miner#区块链)、[产品](https://github.com/xitu/gold-miner#产品)、[设计](https://github.com/xitu/gold-miner#设计)、[人工智能](https://github.com/xitu/gold-miner#人工智能)等领域,想要查看更多优质译文请持续关注 [掘金翻译计划](https://github.com/xitu/gold-miner)、[官方微博](http://weibo.com/juejinfanyi)、[知乎专栏](https://zhuanlan.zhihu.com/juejinfanyi)。 From c841d34565af74bd09b83652dbc8dcd3db9bee1f Mon Sep 17 00:00:00 2001 From: LeviDing Date: Tue, 8 Jan 2019 22:30:38 +0800 Subject: [PATCH 35/54] Create accepting-payments-with-stripe-vuejs-and-flask.md --- ...ng-payments-with-stripe-vuejs-and-flask.md | 1099 +++++++++++++++++ 1 file changed, 1099 insertions(+) create mode 100644 TODO1/accepting-payments-with-stripe-vuejs-and-flask.md diff --git a/TODO1/accepting-payments-with-stripe-vuejs-and-flask.md b/TODO1/accepting-payments-with-stripe-vuejs-and-flask.md new file mode 100644 index 00000000000..3a498702739 --- /dev/null +++ b/TODO1/accepting-payments-with-stripe-vuejs-and-flask.md @@ -0,0 +1,1099 @@ +> * 原文地址:[Accepting Payments with Stripe, Vue.js, and Flask](https://testdriven.io/blog/accepting-payments-with-stripe-vuejs-and-flask/) +> * 原文作者:[Michael Herman](https://testdriven.io/authors/herman/) +> * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) +> * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/accepting-payments-with-stripe-vuejs-and-flask.md](https://github.com/xitu/gold-miner/blob/master/TODO1/accepting-payments-with-stripe-vuejs-and-flask.md) +> * 译者: +> * 校对者: + +# Accepting Payments with Stripe, Vue.js, and Flask + +![](https://testdriven.io/static/images/blog/flask-vue-stripe/payments_vue_flask.png) + +In this tutorial, we'll develop a web app for selling books using [Stripe](https://stripe.com/) (for payment processing), [Vue.js](https://vuejs.org/) (the client-side app), and [Flask](http://flask.pocoo.org/) (the server-side API). + +> This is an intermediate-level tutorial. It assumes that you a have basic working knowledge of Vue and Flask. Review the following resources for more info: +> +> 1. [Introduction to Vue](https://vuejs.org/v2/guide/index.html) +> 2. [Flaskr: Intro to Flask, Test-Driven Development (TDD), and JavaScript](https://github.com/mjhea0/flaskr-tdd) +> 3. 
[Developing a Single Page App with Flask and Vue.js](https://testdriven.io/developing-a-single-page-app-with-flask-and-vuejs) + +_Final app_: + +![final app](https://testdriven.io/static/images/blog/flask-vue-stripe/final.gif) + +_Main dependencies:_ + +* Vue v2.5.2 +* Vue CLI v2.9.3 +* Node v10.3.0 +* NPM v6.1.0 +* Flask v1.0.2 +* Python v3.6.5 + +## Contents + +* [Objectives](#objectives) +* [Project Setup](#project-setup) +* [What are we building?](#what-are-we-building) +* [Books CRUD](#books-crud) +* [Order Page](#order-page) +* [Form Validation](#form-validation) +* [Stripe](#stripe) +* [Order Complete Page](#order-complete-page) +* [Conclusion](#conclusion) + +## Objectives + +By the end of this tutorial, you should be able to... + +1. Work with an existing CRUD app, powered by Vue and Flask +2. Create an order checkout component +3. Validate a form with vanilla JavaScript +4. Use Stripe to validate credit card information +5. Process payments using the Stripe API + +## Project Setup + +Clone the [flask-vue-crud](https://github.com/testdrivenio/flask-vue-crud) repo, and then check out the [v1](https://github.com/testdrivenio/flask-vue-crud/releases/tag/v1) tag to the master branch: + +``` +$ git clone https://github.com/testdrivenio/flask-vue-crud --branch v1 --single-branch +$ cd flask-vue-crud +$ git checkout tags/v1 -b master +``` + +Create and activate a virtual environment, and then spin up the Flask app: + +``` +$ cd server +$ python3.6 -m venv env +$ source env/bin/activate +(env)$ pip install -r requirements.txt +(env)$ python app.py +``` + +> The above commands, for creating and activating a virtual environment, may differ depending on your environment and operating system. + +Point your browser of choice at [http://localhost:5000/ping](http://localhost:5000/ping). You should see: + +``` +"pong!" +``` + +Then, install the dependencies and run the Vue app in a different terminal tab: + +``` +$ cd client +$ npm install +$ npm run dev +``` + +Navigate to [http://localhost:8080](http://localhost:8080). Make sure the basic CRUD functionality works as expected: + +![v1 app](/static/images/blog/flask-vue-stripe/v1.gif) + +> Want to learn how to build this project? Check out the [Developing a Single Page App with Flask and Vue.js](https://testdriven.io/developing-a-single-page-app-with-flask-and-vuejs) blog post. + +## What are we building? + +Our goal is to build a web app that allows end users to purchase books. + +The client-side Vue app will display the books available for purchase, collect payment information, obtain a token from Stripe, and send that token along with the payment info to the server-side. + +The Flask app then takes that info, packages it together, and sends it to Stripe to process charges. + +Finally, we'll use a client-side Stripe library, [Stripe.js](https://stripe.com/docs/stripe-js/v2), to generate a unique token for creating a charge and a server-side Python [library](https://github.com/stripe/stripe-python) for interacting with the Stripe API. + +![final app](https://testdriven.io/static/images/blog/flask-vue-stripe/final.gif) + +> Like the previous [tutorial](https://testdriven.io/developing-a-single-page-app-with-flask-and-vuejs), we'll only be dealing with the happy path through the app. Check your understanding by incorporating proper error-handling on your own. + +## Books CRUD + +First, let's add a purchase price to the existing list of books on the server-side and update the appropriate CRUD functions on the client - GET, POST, and PUT. 
+ +### GET + +Start by adding the `price` to each dict in the `BOOKS` list in _server/app.py_: + +``` +BOOKS = [ + { + 'id': uuid.uuid4().hex, + 'title': 'On the Road', + 'author': 'Jack Kerouac', + 'read': True, + 'price': '19.99' + }, + { + 'id': uuid.uuid4().hex, + 'title': 'Harry Potter and the Philosopher\'s Stone', + 'author': 'J. K. Rowling', + 'read': False, + 'price': '9.99' + }, + { + 'id': uuid.uuid4().hex, + 'title': 'Green Eggs and Ham', + 'author': 'Dr. Seuss', + 'read': True, + 'price': '3.99' + } +] +``` + +Then, update the table in the `Books` component, _client/src/components/Books.vue_, to display the purchase price: + +``` + + + + + + + + + + + + + + + + + + + +
TitleAuthorRead?Purchase Price
{{ book.title }}{{ book.author }} + Yes + No + ${{ book.price }} + + +
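+<!-- Sketch of the new price column this section adds (tag attributes are assumptions): -->
+<!-- header cell:  <th scope="col">Purchase Price</th> -->
+<!-- data cell:    <td>${{ book.price }}</td> -->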
+``` + +You should now see: + +![default vue app](https://testdriven.io/static/images/blog/flask-vue-stripe/price.png) + +### POST + +Add a new `b-form-group` to the `addBookModal`, between the author and read `b-form-group`s: + +``` + + + + +``` + +The modal should now look like: + +``` + + + + + + + + + + + + + + + + + + Read? + + + Submit + Reset + + +``` + +Then, add `price` to the state: + +``` +addBookForm: { + title: '', + author: '', + read: [], + price: '', +}, +``` + +The state is now bound to the form's input value. Think about what this means. When the state is updated, the form input will be updated as well - and vice versa. Here's an example of this in action with the [vue-devtools](https://github.com/vuejs/vue-devtools) browser extension: + +![state model bind](https://testdriven.io/static/images/blog/flask-vue-stripe/state-model-bind.gif) + +Add the `price` to the `payload` in the `onSubmit` method like so: + +``` +onSubmit(evt) { + evt.preventDefault(); + this.$refs.addBookModal.hide(); + let read = false; + if (this.addBookForm.read[0]) read = true; + const payload = { + title: this.addBookForm.title, + author: this.addBookForm.author, + read, // property shorthand + price: this.addBookForm.price, + }; + this.addBook(payload); + this.initForm(); +}, +``` + +Update `initForm` to clear out the value after the end user submits the form or clicks the "reset" button: + +``` +initForm() { + this.addBookForm.title = ''; + this.addBookForm.author = ''; + this.addBookForm.read = []; + this.addBookForm.price = ''; + this.editForm.id = ''; + this.editForm.title = ''; + this.editForm.author = ''; + this.editForm.read = []; +}, +``` + +Finally, update the route in _server/app.py_: + +``` +@app.route('/books', methods=['GET', 'POST']) +def all_books(): + response_object = {'status': 'success'} + if request.method == 'POST': + post_data = request.get_json() + BOOKS.append({ + 'id': uuid.uuid4().hex, + 'title': post_data.get('title'), + 'author': post_data.get('author'), + 'read': post_data.get('read'), + 'price': post_data.get('price') + }) + response_object['message'] = 'Book added!' + else: + response_object['books'] = BOOKS + return jsonify(response_object) +``` + +Test it out! + +![add book](https://testdriven.io/static/images/blog/flask-vue-stripe/add-book.gif) + +> Don't forget to handle errors on both the client and server! + +### PUT + +Do the same, on your own, for editing a book: + +1. Add a new form input to the modal +2. Update `editForm` in the state +3. Add the `price` to the `payload` in the `onSubmitUpdate` method +4. Update `initForm` +5. Update the server-side route + +> Need help? Review the previous section again. You can also grab the final code from the [flask-vue-crud](https://github.com/testdrivenio/flask-vue-crud) repo. + +![edit book](https://testdriven.io/static/images/blog/flask-vue-stripe/edit-book.gif) + +## Order Page + +Next, let's add an order page where users will be able to enter their credit card information to purchase a book. + +TODO: add image + +### Add a purchase button + +Start by adding a "purchase" button to the `Books` component, just below the "delete" button: + +``` + + + + + Purchase + + +``` + +Here, we used the [router-link](https://router.vuejs.org/api/#router-link) component to generate an anchor tag that links back to a route in _client/src/router/index.js_, which we'll set up shortly. 
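+
+As a rough sketch only (the Bootstrap classes and surrounding table markup are assumptions rather than the repo's exact code), a `router-link` styled as a button and pointing at the order route might look like this:
+
+```
+<router-link :to="`/order/${book.id}`" class="btn btn-primary btn-sm">
+  Purchase
+</router-link>
+```
+
+Binding `book.id` into the path lines up with the `/order/:id` route added below, so the order page can look up the book by its id.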
+ +![default vue app](https://testdriven.io/static/images/blog/flask-vue-stripe/purchase-button.png) + +### Create the template + +Add a new component file called _Order.vue_ to "client/src/components": + +``` + +``` + +> You'll probably want to collect the buyer's contact details, like first and last name, email address, shipping address, and so on. Do this on your own. + +### Add the route + +_client/src/router/index.js_: + +``` +import Vue from 'vue'; +import Router from 'vue-router'; +import Ping from '@/components/Ping'; +import Books from '@/components/Books'; +import Order from '@/components/Order'; + +Vue.use(Router); + +export default new Router({ + routes: [ + { + path: '/', + name: 'Books', + component: Books, + }, + { + path: '/order/:id', + name: 'Order', + component: Order, + }, + { + path: '/ping', + name: 'Ping', + component: Ping, + }, + ], + mode: 'hash', +}); +``` + +Test it out. + +![order page](https://testdriven.io/static/images/blog/flask-vue-stripe/order-page.gif) + +### Get the product info + +Next, let's update the placeholders for the book title and amount on the order page: + +![order page](https://testdriven.io/static/images/blog/flask-vue-stripe/order-page-placeholders.png) + +Hop back over to the server-side and update the following route handler: + +``` +@app.route('/books/', methods=['GET', 'PUT', 'DELETE']) +def single_book(book_id): + response_object = {'status': 'success'} + if request.method == 'GET': + # TODO: refactor to a lambda and filter + return_book = '' + for book in BOOKS: + if book['id'] == book_id: + return_book = book + response_object['book'] = return_book + if request.method == 'PUT': + post_data = request.get_json() + remove_book(book_id) + BOOKS.append({ + 'id': uuid.uuid4().hex, + 'title': post_data.get('title'), + 'author': post_data.get('author'), + 'read': post_data.get('read'), + 'price': post_data.get('price') + }) + response_object['message'] = 'Book updated!' + if request.method == 'DELETE': + remove_book(book_id) + response_object['message'] = 'Book removed!' + return jsonify(response_object) +``` + +Now, we can hit this route to add the book information to the order page within the `script` section of the component: + +``` + +``` + +> Shipping to production? You will want to use an environment variable to dynamically set the base server-side URL (which is currently `http://localhost:5000`). Review the [docs](https://vuejs-templates.github.io/webpack/env.html) for more info. + +Then, update the first `ul` in the template: + +``` +
    +
+<ul>
+  <li>Book Title: {{ book.title }}</li>
+  <li>Amount: ${{ book.price }}</li>
+</ul>
+``` + +You should now see: + +![order page](https://testdriven.io/static/images/blog/flask-vue-stripe/order-page-sans-placeholders.png) + +## Form Validation + +Let's set up some basic form validation. + +Use the `v-model` directive to [bind](https://vuejs.org/v2/guide/forms.html) form input values back to the state: + +``` +
+
+ + +
+
+ +
+
+ + +
+ +
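+<!-- Sketch of the card inputs this section relies on; labels and classes are assumptions,
+     but each input binds to the card state defined below via v-model. -->
+<div class="form-group">
+  <label>Credit Card Number</label>
+  <input type="text" class="form-control" v-model="card.number">
+</div>
+<div class="form-group">
+  <label>CVC</label>
+  <input type="text" class="form-control" v-model="card.cvc">
+</div>
+<div class="form-group">
+  <label>Expiration Date</label>
+  <input type="text" class="form-control" v-model="card.exp">
+</div>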
+``` + +Add the card to the state like so: + +``` +card: { + number: '', + cvc: '', + exp: '', +}, +``` + +Next, update the "submit" button so that when the button is clicked, the normal browser behavior is [ignored](https://vuejs.org/v2/guide/events.html#Event-Modifiers) and a `validate` method is called instead: + +``` + +``` + +Add an array to the state to hold any validation errors: + +``` +data() { + return { + book: { + title: '', + author: '', + read: [], + price: '', + }, + card: { + number: '', + cvc: '', + exp: '', + }, + errors: [], + }; +}, +``` + +Just below the form, we can iterate and display the errors: + +``` +
+
+
    +
+<ol>
+  <li v-for="(error, index) in errors" :key="index">{{ error }}</li>
+</ol>
+
+``` + +Add the `validate` method: + +``` +validate() { + this.errors = []; + let valid = true; + if (!this.card.number) { + valid = false; + this.errors.push('Card Number is required'); + } + if (!this.card.cvc) { + valid = false; + this.errors.push('CVC is required'); + } + if (!this.card.exp) { + valid = false; + this.errors.push('Expiration date is required'); + } + if (valid) { + this.createToken(); + } +}, +``` + +Since all fields are required, we are simply validating that each field has a value. Keep in mind that Stripe will validate the actual credit card info, which you'll see in the next section, so you don't need to go overboard with form validation. That said, be sure to validate any additional fields that you may have added on your own. + +Finally, add a `createToken` method: + +``` +createToken() { + // eslint-disable-next-line + console.log('The form is valid!'); +}, +``` + +Test this out. + +![form validation](https://testdriven.io/static/images/blog/flask-vue-stripe/form-validation.gif) + +## Stripe + +Sign up for a [Stripe](https://stripe.com) account, if you don't already have one, and grab the _test mode_ [API Publishable key](https://stripe.com/docs/keys). + +![stripe dashboard](https://testdriven.io/static/images/blog/flask-vue-stripe/stripe-dashboard-keys-publishable.png) + +### Client-side + +Add the key to the state along with `stripeCheck` (which will be used to disable the submit button): + +``` +data() { + return { + book: { + title: '', + author: '', + read: [], + price: '', + }, + card: { + number: '', + cvc: '', + exp: '', + }, + errors: [], + stripePublishableKey: 'pk_test_aIh85FLcNlk7A6B26VZiNj1h', + stripeCheck: false, + }; +}, +``` + +> Make sure to add your own Stripe key to the above code. + +Again, if the form is valid, the `createToken` method is triggered, which validates the credit card info (via [Stripe.js](https://stripe.com/docs/stripe-js/v2)) and then either returns an error (if invalid) or a unique token (if valid): + +``` +createToken() { + this.stripeCheck = true; + window.Stripe.setPublishableKey(this.stripePublishableKey); + window.Stripe.createToken(this.card, (status, response) => { + if (response.error) { + this.stripeCheck = false; + this.errors.push(response.error.message); + // eslint-disable-next-line + console.error(response); + } else { + // pass + } + }); +}, +``` + +If there are no errors, we send the token to the server, where we'll charge the card, and then send the user back to the main page: + +``` +createToken() { + this.stripeCheck = true; + window.Stripe.setPublishableKey(this.stripePublishableKey); + window.Stripe.createToken(this.card, (status, response) => { + if (response.error) { + this.stripeCheck = false; + this.errors.push(response.error.message); + // eslint-disable-next-line + console.error(response); + } else { + const payload = { + book: this.book, + token: response.id, + }; + const path = 'http://localhost:5000/charge'; + axios.post(path, payload) + .then(() => { + this.$router.push({ path: '/' }); + }) + .catch((error) => { + // eslint-disable-next-line + console.error(error); + }); + } + }); +}, +``` + +Update `createToken()` with the above code, and then add [Stripe.js](https://stripe.com/docs/stripe-js/v2) to _client/index.html_: + +``` + + + + + + Books! + + +
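+<!-- Sketch of the two lines that matter in index.html (exact attributes are assumptions): -->
+<!-- Vue mount point:      <div id="app"></div> -->
+<!-- Stripe.js v2 library: <script type="text/javascript" src="https://js.stripe.com/v2/"></script> -->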
+ + + + +``` + +> Stripe supports v2 and v3 ([Stripe Elements](https://stripe.com/elements)) of Stripe.js. If you're curious about Stripe Elements and how you can integrate it into Vue, refer to the following resources: 1. [Stripe Elements Migration Guide](https://stripe.com/docs/stripe-js/elements/migrating) 1\. [Integrating Stripe Elements and Vue.js to Set Up a Custom Payment Form](https://alligator.io/vuejs/stripe-elements-vue-integration/) + +Now, when `createToken` is triggered, `stripeCheck` is set to `true`. To prevent duplicate charges, let's disable the "submit" button when `stripeCheck` is `true`: + +``` + +``` + +Test out the Stripe validation for invalid: + +1. Credit card numbers +2. Security codes +3. Expiration dates + +![stripe-form validation](https://testdriven.io/static/images/blog/flask-vue-stripe/stripe-form-validation.gif) + +Now, let's get the server-side route set up. + +### Server-side + +Install the [Stripe](https://pypi.org/project/stripe/) library: + +``` +$ pip install stripe==1.82.1 +``` + +Add the route handler: + +``` +@app.route('/charge', methods=['POST']) +def create_charge(): + post_data = request.get_json() + amount = round(float(post_data.get('book')['price']) * 100) + stripe.api_key = os.environ.get('STRIPE_SECRET_KEY') + charge = stripe.Charge.create( + amount=amount, + currency='usd', + card=post_data.get('token'), + description=post_data.get('book')['title'] + ) + response_object = { + 'status': 'success', + 'charge': charge + } + return jsonify(response_object), 200 +``` + +Here, given the book price (which we converted to cents), the unique token (from the `createToken` method on the client), and the book title, we generated a new Stripe charge with the [API Secret key](https://stripe.com/docs/keys). + +> For more on creating a charge, refer to the official API [docs](https://stripe.com/docs/api#create_charge). + +Update the imports: + +``` +import os +import uuid + +import stripe +from flask import Flask, jsonify, request +from flask_cors import CORS +``` + +Grab the _test-mode_ [API Secret key](https://stripe.com/docs/keys): + +![stripe dashboard](https://testdriven.io/static/images/blog/flask-vue-stripe/stripe-dashboard-keys-secret.png) + +Set it as an environment variable: + +``` +$ export STRIPE_SECRET_KEY=sk_test_io02FXL17hrn2TNvffanlMSy +``` + +> Make sure to use your own Stripe key! + +Test it out! + +![purchase a book](https://testdriven.io/static/images/blog/flask-vue-stripe/purchase.gif) + +You should see the purchase back in the [Stripe Dashboard](https://dashboard.stripe.com/): + +![stripe dashboard](https://testdriven.io/static/images/blog/flask-vue-stripe/stripe-dashboard-payments.png) + +Instead of just creating a charge, you may want to also create a [customer](https://stripe.com/docs/api#customers). This has many advantages. You can charge multiple items to the same customer, making it easier to track customer purchase history. You could offer deals to customers that purchase frequently or reach out to customers that haven't purchased in a while, just to name a few. It also helps to prevent fraud. Refer to the following Flask [example](https://stripe.com/docs/checkout/flask) to see how to add customer creation. + +## Order Complete Page + +Rather than sending the buyer back to the main page, let's redirect them to an order complete page, thanking them for making a purchase. 
+ +Add a new component file called _OrderComplete.vue_ to "client/src/components": + +``` + +``` + +Update the router: + +``` +import Vue from 'vue'; +import Router from 'vue-router'; +import Ping from '@/components/Ping'; +import Books from '@/components/Books'; +import Order from '@/components/Order'; +import OrderComplete from '@/components/OrderComplete'; + +Vue.use(Router); + +export default new Router({ + routes: [ + { + path: '/', + name: 'Books', + component: Books, + }, + { + path: '/order/:id', + name: 'Order', + component: Order, + }, + { + path: '/complete', + name: 'OrderComplete', + component: OrderComplete, + }, + { + path: '/ping', + name: 'Ping', + component: Ping, + }, + ], + mode: 'hash', +}); +``` + +Update the redirect in the `createToken` method: + +``` +createToken() { + this.stripeCheck = true; + window.Stripe.setPublishableKey(this.stripePublishableKey); + window.Stripe.createToken(this.card, (status, response) => { + if (response.error) { + this.stripeCheck = false; + this.errors.push(response.error.message); + // eslint-disable-next-line + console.error(response); + } else { + const payload = { + book: this.book, + token: response.id, + }; + const path = 'http://localhost:5000/charge'; + axios.post(path, payload) + .then(() => { + this.$router.push({ path: '/complete' }); + }) + .catch((error) => { + // eslint-disable-next-line + console.error(error); + }); + } + }); +}, +``` + +![final app](https://testdriven.io/static/images/blog/flask-vue-stripe/final.gif) + +Finally, you could also display info about the book (title, amount, etc.) the customer just purchased on the order complete page. + +Grab the unique charge id and pass it into the `path`: + +``` +createToken() { + this.stripeCheck = true; + window.Stripe.setPublishableKey(this.stripePublishableKey); + window.Stripe.createToken(this.card, (status, response) => { + if (response.error) { + this.stripeCheck = false; + this.errors.push(response.error.message); + // eslint-disable-next-line + console.error(response); + } else { + const payload = { + book: this.book, + token: response.id, + }; + const path = 'http://localhost:5000/charge'; + axios.post(path, payload) + .then((res) => { + // updates + this.$router.push({ path: `/complete/${res.data.charge.id}` }); + }) + .catch((error) => { + // eslint-disable-next-line + console.error(error); + }); + } + }); +}, +``` + +Update the client-side route: + +``` +{ + path: '/complete/:id', + name: 'OrderComplete', + component: OrderComplete, +}, +``` + +Then, in _OrderComplete.vue_, grab the charge id for the URL and send it to the server-side: + +``` + +``` + +Configure the new route on the server to [retrieve](https://stripe.com/docs/api#retrieve_charge) the charge: + +``` +@app.route('/charge/') +def get_charge(charge_id): + stripe.api_key = os.environ.get('STRIPE_SECRET_KEY') + response_object = { + 'status': 'success', + 'charge': stripe.Charge.retrieve(charge_id) + } + return jsonify(response_object), 200 +``` + +Finally, update the `

` in the template: + +``` +

Thanks for purchasing - {{ this.book }}!

+``` + +Test it out one last time. + +## Conclusion + +That's it! Be sure to review the objectives from the top. You can find the final code in the [flask-vue-crud](https://github.com/testdrivenio/flask-vue-crud) repo on GitHub. + +Looking for more? + +1. Add client and server-side unit and integration tests. +2. Create a shopping cart so customers can purchase more than one book at a time. +3. Add Postgres to store the books and the orders. +4. Containerize Vue and Flask (and Postgres, if you add it) with Docker to simplify the development workflow. +5. Add images to the books and create a more robust product page. +6. Capture emails and send email confirmations (review [Sending Confirmation Emails with Flask, Redis Queue, and Amazon SES](https://testdriven.io/sending-confirmation-emails-with-flask-rq-and-ses)). +7. Deploy the client-side static files to AWS S3 and the server-side app to an EC2 instance. +8. Going into production? Think about the best way to update the Stripe keys so they are dynamic based on the environment. +9. Create a separate component for checking out. + +> 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 + + +--- + +> [掘金翻译计划](https://github.com/xitu/gold-miner) 是一个翻译优质互联网技术文章的社区,文章来源为 [掘金](https://juejin.im) 上的英文分享文章。内容覆盖 [Android](https://github.com/xitu/gold-miner#android)、[iOS](https://github.com/xitu/gold-miner#ios)、[前端](https://github.com/xitu/gold-miner#前端)、[后端](https://github.com/xitu/gold-miner#后端)、[区块链](https://github.com/xitu/gold-miner#区块链)、[产品](https://github.com/xitu/gold-miner#产品)、[设计](https://github.com/xitu/gold-miner#设计)、[人工智能](https://github.com/xitu/gold-miner#人工智能)等领域,想要查看更多优质译文请持续关注 [掘金翻译计划](https://github.com/xitu/gold-miner)、[官方微博](http://weibo.com/juejinfanyi)、[知乎专栏](https://zhuanlan.zhihu.com/juejinfanyi)。 From 09bd28119f7b3746639f8a76a809ef71e305e23b Mon Sep 17 00:00:00 2001 From: LeviDing Date: Tue, 8 Jan 2019 22:34:30 +0800 Subject: [PATCH 36/54] Update accepting-payments-with-stripe-vuejs-and-flask.md --- TODO1/accepting-payments-with-stripe-vuejs-and-flask.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/TODO1/accepting-payments-with-stripe-vuejs-and-flask.md b/TODO1/accepting-payments-with-stripe-vuejs-and-flask.md index 3a498702739..57d924122f2 100644 --- a/TODO1/accepting-payments-with-stripe-vuejs-and-flask.md +++ b/TODO1/accepting-payments-with-stripe-vuejs-and-flask.md @@ -90,7 +90,7 @@ $ npm run dev Navigate to [http://localhost:8080](http://localhost:8080). Make sure the basic CRUD functionality works as expected: -![v1 app](/static/images/blog/flask-vue-stripe/v1.gif) +![v1 app](https://testdriven.io/static/images/blog/flask-vue-stripe/v1.gif) > Want to learn how to build this project? Check out the [Developing a Single Page App with Flask and Vue.js](https://testdriven.io/developing-a-single-page-app-with-flask-and-vuejs) blog post. 
From de9f2e92aa18953ec20553d2edbc1682ddccf60e Mon Sep 17 00:00:00 2001 From: LeviDing Date: Tue, 8 Jan 2019 22:43:07 +0800 Subject: [PATCH 37/54] Create i-worked-with-a-data-scientist-heres-what-i-learned.md --- ...h-a-data-scientist-heres-what-i-learned.md | 95 +++++++++++++++++++ 1 file changed, 95 insertions(+) create mode 100644 TODO1/i-worked-with-a-data-scientist-heres-what-i-learned.md diff --git a/TODO1/i-worked-with-a-data-scientist-heres-what-i-learned.md b/TODO1/i-worked-with-a-data-scientist-heres-what-i-learned.md new file mode 100644 index 00000000000..e80c561821f --- /dev/null +++ b/TODO1/i-worked-with-a-data-scientist-heres-what-i-learned.md @@ -0,0 +1,95 @@ +> * 原文地址:[I Worked With A Data Scientist As A Software Engineer. Here’s My Experience.](https://towardsdatascience.com/i-worked-with-a-data-scientist-heres-what-i-learned-2e19c5f5204) +> * 原文作者:[Ben Daniel A.](https://towardsdatascience.com/@bendaniel10) +> * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) +> * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/i-worked-with-a-data-scientist-heres-what-i-learned.md](https://github.com/xitu/gold-miner/blob/master/TODO1/i-worked-with-a-data-scientist-heres-what-i-learned.md) +> * 译者: +> * 校对者: + +# I Worked With A Data Scientist As A Software Engineer. Here’s My Experience. + +Talking about my experience as a Java/Kotlin developer while working with our data scientist + +![](https://cdn-images-1.medium.com/max/2560/0*V-3j85eeM0dGnd-o) + +Photo by [Daniel Cheung](https://unsplash.com/@danielkcheung?utm_source=medium&utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&utm_medium=referral) + +#### Background + +In late 2017, I started to develop interest in the Machine Learning field. I [talked about my experience](https://medium.com/@bendaniel10/hello-machine-learning-cc89b3ccbe4d) when I started my journey. In summary, it has been filled with fun challenges and lots of learning. I am an Android Engineer, and this is my experience working on ML projects with our data scientist. + +I remember attempting to solve an image classification problem that came up in one of our apps. We needed to differentiate between valid and invalid images based on a defined set of rules. I immediately modified [this example](https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/convolution/AnimalsClassification.java) from Deeplearning4J (dl4j) and tried to use it to handle the classification task. I didn’t get the results that I expected, but I remained optimistic. + +![](https://i.loli.net/2019/01/08/5c34b6733de77.png) + +My approach with dl4j sample code was unsuccessful because of the kind of accuracy that I got and the final size of the trained model. This couldn’t fly since we needed a model with a compact file size which is specially important for mobile devices. + +#### Enter the Data Scientist + +![](https://cdn-images-1.medium.com/max/600/0*zKBeymXEf00uZbZZ) + +Photo by [rawpixel](https://unsplash.com/@rawpixel?utm_source=medium&utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&utm_medium=referral) + +It was around this time that [we](https://seamfix.com/) hired a data scientist, and he came with a lot of relevant experience. I would later learn a lot from him. I had reluctantly started to learn the basics of Python after I found out that most ML problems could be solved with Python. 
I later discovered that some things were just easier to implement in Python, as there’s already huge support for ML in the Python community.
+
+We started with small learning sessions. At this point, my other team members became interested and joined the sessions too. The data scientist gave us an introduction to [Jupyter Notebooks](https://jupyter.org/install) and the [Cloud Machine Learning Engine](https://cloud.google.com/ml-engine/docs/tensorflow/getting-started-training-prediction). We quickly got our hands dirty by attempting the [image classification using the flower dataset](https://cloud.google.com/ml-engine/docs/tensorflow/flowers-tutorial) example.
+
+After everyone in the team became grounded in the basics of training and deploying a model, we went straight to the pending tasks. As a team member, I was focused on two tasks at this point: the image classification problem and a segmentation issue. Both of them would later be implemented using Convolutional Neural Networks (CNNs).
+
+#### Preparing the training data isn’t easy
+
+![](https://cdn-images-1.medium.com/max/600/0*GllGs9LmPto_7-_U)
+
+Photo by [Jonny Caspari](https://unsplash.com/@jonnysplsh?utm_source=medium&utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&utm_medium=referral)
+
+Both tasks required a lot of training data. The good news was that we had a lot of data. The bad news was that it was unsorted and not annotated. I finally understood what ML experts said about spending most of the time preparing the training data rather than training the model itself.
+
+For the classification task we needed to arrange hundreds of thousands of images into different classes. This was a tedious job. I had to invoke my Java Swing skills to build GUIs that made this task easier, but all in all, the task was monotonous for everyone involved in the manual classification.
+
+The segmentation process was a bit more complicated. We were lucky enough to find some models that were already good at segmentation, but unfortunately they were too large. We also wanted the model to be able to run on Android devices that had very low specs. In a moment of brilliance, the data scientist suggested that we use the huge model to generate the training data that would be used to build our own MobileNet.
+
+#### Training
+
+We eventually switched to the [AWS Deep Learning AMI](https://docs.aws.amazon.com/dlami/latest/devguide/launch-config.html). We were already comfortable with AWS, and it was a plus that they offered such a service. The process of training the model for the image segmentation was fully handled by our data scientist, and I stood beside him, taking notes :).
+
+![](https://i.loli.net/2019/01/08/5c34b6d806f4f.png)
+
+Those are not the actual logs, LOL.
+
+Training this model was a computationally intensive task. This was when I saw the importance of training on a computer with sufficient GPU(s) and RAM. The time it took to train was reasonably short because we used such computers for our training. It would have taken weeks, if not months, had we used a basic computer.
+
+I handled the training of the image classification model. We didn’t need to train it on the cloud; in fact, I trained it on my MacBook Pro. This was because I was only training the final layer of the neural network, compared to the full network training that we did for the segmentation model.
+
+#### We made it to prod
+
+Both models made it to our production environment after rigorous tests 🎉. A team member was tasked with building the Java wrapper libraries. This was done so that the models could be used in a way that abstracts all the complexity involved in feeding the model with the images and extracting meaningful results from the tensor of probabilities. This is the array that contains the result of the prediction the model made on a single image. I was involved a little at this point too, as some of the hacky code I had written earlier was cleaned up and reused here.
+
+#### Challenges, challenges everywhere
+
+> Challenges are what make life interesting. Overcoming them is what makes them meaningful. — Anonymous
+
+I can remember when my biggest challenge was working with 3-dimensional arrays. I still approach them with caution. Working on ML projects with our data scientist was the encouragement that I needed to continue my ML adventure.
+
+My biggest challenge when working on these projects was attempting to build, from source, the TensorFlow Java library for 32-bit systems using Bazel. I have not been successful at this.
+
+![](https://i.loli.net/2019/01/08/5c34b69bf3c36.png)
+
+I experienced other challenges too; one of them came up frequently: translating the Python solutions to Java. Since Python already has built-in support for data science tasks, the code felt more concise in Python. I remember pulling my hair out when I tried to literally translate a command: scaling a 2D array and adding it as a transparent layer to an image. We finally got it to work and everyone was excited.
+
+Now the models in our production environment were mostly doing great; however, when they did produce a wrong result, those results were ridiculously wrong. It reminded me of a quote I saw in this [excellent post](https://www.oreilly.com/ideas/lessons-learned-turning-machine-learning-models-into-real-products-and-services) about turning ML models into real products and services.
+
+> …models will actually degrade in quality — and fast — without a constant feed of new data. Known as [concept drift](https://machinelearningmastery.com/gentle-introduction-concept-drift-machine-learning/), this means that the predictions offered by static machine learning models become less accurate, and less useful, as time goes on. In some cases, this can even happen in a matter of days. — [David Talby](https://www.oreilly.com/people/05617-david-talby)
+
+This means that we will have to keep improving the model, and there is no final model, which is interesting.
+
+* * *
+
+I’m not even sure I qualify to be called an ML newbie since I focus mostly on mobile development. I have had an exciting experience this year working with an ML team to ship models that helped solve company problems. It’s something I would want to do again.
+
+Thanks to [TDS Team](https://medium.com/@TDSteam?source=post_page) and [Alexis McKenzie](https://medium.com/@lexmckenz?source=post_page).
+ +> 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 + + +--- + +> [掘金翻译计划](https://github.com/xitu/gold-miner) 是一个翻译优质互联网技术文章的社区,文章来源为 [掘金](https://juejin.im) 上的英文分享文章。内容覆盖 [Android](https://github.com/xitu/gold-miner#android)、[iOS](https://github.com/xitu/gold-miner#ios)、[前端](https://github.com/xitu/gold-miner#前端)、[后端](https://github.com/xitu/gold-miner#后端)、[区块链](https://github.com/xitu/gold-miner#区块链)、[产品](https://github.com/xitu/gold-miner#产品)、[设计](https://github.com/xitu/gold-miner#设计)、[人工智能](https://github.com/xitu/gold-miner#人工智能)等领域,想要查看更多优质译文请持续关注 [掘金翻译计划](https://github.com/xitu/gold-miner)、[官方微博](http://weibo.com/juejinfanyi)、[知乎专栏](https://zhuanlan.zhihu.com/juejinfanyi)。 From 84ae3f96ff095b62226d1424e37cfa79de599c37 Mon Sep 17 00:00:00 2001 From: LeviDing Date: Wed, 9 Jan 2019 13:29:12 +0800 Subject: [PATCH 38/54] Create why-your-app-should-be-optimized-for-screen-of-all-sizes.md --- ...ld-be-optimized-for-screen-of-all-sizes.md | 70 +++++++++++++++++++ 1 file changed, 70 insertions(+) create mode 100644 TODO1/why-your-app-should-be-optimized-for-screen-of-all-sizes.md diff --git a/TODO1/why-your-app-should-be-optimized-for-screen-of-all-sizes.md b/TODO1/why-your-app-should-be-optimized-for-screen-of-all-sizes.md new file mode 100644 index 00000000000..164dc6963b8 --- /dev/null +++ b/TODO1/why-your-app-should-be-optimized-for-screen-of-all-sizes.md @@ -0,0 +1,70 @@ +> * 原文地址:[Why your app should be optimized for screens of all sizes](https://medium.com/googleplaydev/more-than-mobile-friendly-547e44bc085a) +> * 原文作者:[Natalia Gvak](https://medium.com/@nataliagvak) +> * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) +> * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/why-your-app-should-be-optimized-for-screen-of-all-sizes.md](https://github.com/xitu/gold-miner/blob/master/TODO1/why-your-app-should-be-optimized-for-screen-of-all-sizes.md) +> * 译者: +> * 校对者: + +# Why your app should be optimized for screens of all sizes + +See how Gameloft, Evernote, Slack, and 1Password have optimized for Chrome OS + +![](https://cdn-images-1.medium.com/max/1000/1*qstDYCF2lqMH_aQd_81cWA.png) + +Since we launched our first Chromebooks in 2011, the growth of Chrome OS has been incredible. Today, Chromebooks range from traditional laptops to convertibles and tablets that are available in over 10,000 stores — thanks to close partnerships with top OEMs, including Samsung, Dell, and HP, among many others — and we’re only going to keep expanding. It’s been an exciting period of growth for us, but even more so for developers. + +The evolution of Chrome OS presents an amazing opportunity for developers to boost their reach across a wider variety of devices and screens. By optimizing their apps for wider screens on Chrome OS, dev teams can drive higher engagement and reach even more users with immersive experiences. + +### Tapping into a wider appeal for wider screens + +Much of our growth has been fueled by new ways that people consume and engage with content. A lot of people use more than one type of device every day, and the lines between desktop and mobile experiences are getting blurrier. Today, consumers demand versatility. We’re seeing people shift their focus to devices with larger, wider screens that allow them to easily access the content they want, anywhere and anytime. 
+ +Last year, we added our four-in-one, high-performance Chromebook — [Google Pixelbook](https://store.google.com/us/product/google_pixelbook) — to the Chrome OS family. This October, we introduced the first-ever premium tablet made by Google to run Chrome OS: [Google Pixel Slate](https://store.google.com/us/product/pixel_slate?hl=en-US). Along with a rich display and performance that’s ideal for using mobile apps, the Pixel Slate also comes with a detachable keyboard that gives users a familiar laptop feel. + +![](https://cdn-images-1.medium.com/max/800/0*5aXo82iDOfDi9_wX) + +Like other devices powered by Chrome OS, both of these devices combine access to millions of mobile apps with a brilliant, large-screen display. Developers can reach even more users by [adapting their apps for Chrome OS](https://developer.android.com/topic/arc/optimizing) in different ways: + +1. Optimizing designs for wider screens +2. Landscape mode +3. Multi-window management +4. Keyboard, mouse, and stylus input + +### How leading dev teams have optimized for Chrome OS + +#### Gameloft’s Asphalt 8: Airborne + +Asphalt 8: Airborne is a racing game that’s all about extreme speed and complete control. The design team at Gameloft always wants its games to be available on the latest portable hardware, so as soon as the Chromebook hit the market, the team saw a new home for its Asphalt series. + +Because Chrome OS treats a physical keyboard just like an external keyboard on an Android phone, Asphalt 8: Airborne could [support keyboard controls using APIs](https://developer.android.com/topic/arc/input-compatibility) from the [Android Platform SDK 26](https://developer.android.com/studio/releases/platform-tools). This also enabled the UI to automatically switch between touchscreen and keyboard mode. After making the adjustments, Gameloft was able to run Android application packages at even higher performance levels than native apps, allowing it to maintain the series’ breathtaking graphics and breakneck speeds on Chrome OS. Even better, it only took Gameloft’s developers a few days to completely integrate the new control schemes to the game. + +After the optimizations, Asphalt 8 saw a 6X increase in daily active users and a 9X boost in revenue from Chrome users. Now, designing for larger screens is a rule of thumb at Gameloft — the latest edition of the series, Asphalt 9: Legends, is now [available on the Chromebook](https://play.google.com/store/apps/details?id=com.gameloft.android.ANMP.GloftA9HM&hl=en_US). + +#### Evernote and Slack + +One of [Evernote’s](https://developer.android.com/stories/apps/evernote) key features is translating touchscreen handwriting into text, which people tend to use more often on larger screens. To make its app even easier to use across devices and platforms, Evernote’s development team used Google’s low-latency stylus API to quickly implement touchscreen handwriting and enhanced layouts for larger screens. The API allows apps to bypass parts of the OS and draw directly on the display, so Evernote users feel like they’re actually drawing and writing on paper. + +Thanks to its new Chrome OS experience, the average Evernote user is spending 3X more time on larger screen devices and 4X more time when using the Google Pixelbook. + +Meanwhile, the development team at Slack optimized its popular messaging app for Chrome OS by setting up keyboard shortcuts for its most commonly used functions. 
When users write a message on a Chromebook, they can simply hit the “Enter” key — just like you would on mobile — rather than taking the extra step to click “Send” with their mouse. + +- YouTube 视频链接:https://youtu.be/YlQVNyTDI6Y + +#### 1Password + +1Password worked with the Chrome OS team to drastically improve its user experience in just six weeks. To ensure the app made the best use of [window space at any screen orientation and size](https://developer.android.com/topic/arc/window-management), the development team combined its existing designs for phones and tablets to deliver a responsive layout when users resized the app window. The team also used Chrome OS’s drag-and-drop feature so app users can easily drag content between 1Password and other Android apps on Chrome OS. + +![](https://cdn-images-1.medium.com/max/800/0*GEnxnt_AJrb1rysl) + +Finally, the team enhanced support for keyboard and trackpad input so people can navigate the app without taking their hands off the keyboard. This created a more desktop-like experience on mobile, allowing users to use direction keys and keyboard shortcuts to trigger actions. Since implementing these new improvements, 1Password has seen more than 22.6% more installs on Chrome OS devices. + +### **Deliver the experience your app users demand** + +In a world where consumers increasingly demand versatility, it’s important for developers to expand their strategies beyond mobile and serve users on a variety of devices. It’s crucial to consider whether your app is set up to deliver the most engaging experiences for every user — no matter their device or screen size. Doing so may mean the difference between driving growth and missing out on a plethora of new customers. + +> 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 + + +--- + +> [掘金翻译计划](https://github.com/xitu/gold-miner) 是一个翻译优质互联网技术文章的社区,文章来源为 [掘金](https://juejin.im) 上的英文分享文章。内容覆盖 [Android](https://github.com/xitu/gold-miner#android)、[iOS](https://github.com/xitu/gold-miner#ios)、[前端](https://github.com/xitu/gold-miner#前端)、[后端](https://github.com/xitu/gold-miner#后端)、[区块链](https://github.com/xitu/gold-miner#区块链)、[产品](https://github.com/xitu/gold-miner#产品)、[设计](https://github.com/xitu/gold-miner#设计)、[人工智能](https://github.com/xitu/gold-miner#人工智能)等领域,想要查看更多优质译文请持续关注 [掘金翻译计划](https://github.com/xitu/gold-miner)、[官方微博](http://weibo.com/juejinfanyi)、[知乎专栏](https://zhuanlan.zhihu.com/juejinfanyi)。 From cdfdaf19ef680d90726a06b89b4ab0f8db20ad9a Mon Sep 17 00:00:00 2001 From: LeviDing Date: Wed, 9 Jan 2019 13:41:36 +0800 Subject: [PATCH 39/54] Create publishing-private-apps-just-got-easier.md --- ...publishing-private-apps-just-got-easier.md | 181 ++++++++++++++++++ 1 file changed, 181 insertions(+) create mode 100644 TODO1/publishing-private-apps-just-got-easier.md diff --git a/TODO1/publishing-private-apps-just-got-easier.md b/TODO1/publishing-private-apps-just-got-easier.md new file mode 100644 index 00000000000..9c38dbe883b --- /dev/null +++ b/TODO1/publishing-private-apps-just-got-easier.md @@ -0,0 +1,181 @@ +> * 原文地址:[Publishing private apps just got easier](https://medium.com/androiddevelopers/publishing-private-apps-just-got-easier-40399c424b8a) +> * 原文作者:[Jon Markoff](https://medium.com/@jmarkoff) +> * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) +> * 
本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/publishing-private-apps-just-got-easier.md](https://github.com/xitu/gold-miner/blob/master/TODO1/publishing-private-apps-just-got-easier.md)
+> * 译者:
+> * 校对者:
+
+# Publishing private apps just got easier
+
+![](https://cdn-images-1.medium.com/max/800/1*pMcEGyuowOHWqtbwVs74-g.png)
+
+Illustration by [Virginia Poltrack](https://twitter.com/VPoltrack)
+
+Whether your organization has 5 apps or 100, there are tools available to help automate the process of managing all of the Play Store listings. [Google Play](https://developers.google.com/android-publisher/) has a developer API which enables management of Play Store listings, APKs, and more. In January 2017, Google acquired the developer tool suite [Fabric](http://fabric.io/blog/fabric-joins-google/) from Twitter; part of this acquisition was [_fastlane_](https://fastlane.tools/), a suite of app automation tools. _fastlane_ can automate screenshots, manage beta deployments, and sign and push apps to the Play Store.
+
+Additionally, the [Custom App Publishing API](https://developers.google.com/android/work/play/custom-app-api/get-started) enables Managed Google Play users to create private hosted apps without a [minimum version check](https://developer.android.com/distribute/best-practices/develop/target-sdk). [Managed Google Play](https://support.google.com/googleplay/work/answer/6137711?hl=en) is a marketplace for Android Enterprise that adds support for private apps. [Private apps](https://support.google.com/a/answer/2494992?hl=en) are Android apps that are distributed only to internal users and are not publicly available. Private app deployments are available within minutes of creation. A _fastlane_ [pull request](https://github.com/fastlane/fastlane/pull/13421) built by core contributor [Jan Piotrowski](https://github.com/janpio) adds a code-free method of deployment. History on the feature request is in this GitHub issue [here](https://github.com/fastlane/fastlane/issues/13122). For more background on Managed Google Play and Google Play Protect, please see this [blog post](https://www.blog.google/products/android-enterprise/safely-and-quickly-distribute-private-enterprise-apps-google-play/).
+
+**Why this is important:** The Custom App Publishing API or _fastlane_ greatly simplifies and reduces the friction of migrating to Managed Google Play and integrates into continuous integration tools and processes.
+
+### Setup
+
+**Important:** Make sure to use the following best practices for [app signing](https://developer.android.com/studio/publish/app-signing) when creating debug and production keystores. Do not lose your production keystore! Once it has been used with an application id on Google Play (including private apps), you cannot change the keystore without creating a new application listing and modifying the application id.
+
+**Recommended:** Utilize [Google Play App Signing](https://developer.android.com/studio/publish/app-signing#google-play-app-signing) to sign your APKs. This is a safe option to make sure that your keystore will not be lost. Please see the implementation details [here](https://support.google.com/googleplay/android-developer/answer/7384423?hl=en).
+
+**Important:** All apps (including private apps) on Google Play must have a unique application id; application ids cannot be reused.
+
+When publishing private apps, there are 3 steps you need to take before this capability is available.
+ +Please follow the [Setup Instructions](https://developers.google.com/android/work/play/custom-app-api/get-started) which will guide you through the following steps: + +1. Enable the Google Play Custom App Publishing API in the Cloud API Console +2. Create a service account, download a new private key in JSON format. +3. Enable Private Apps, instructions to follow. + +### fastlane setup + +* Please see this [doc](https://docs.fastlane.tools/getting-started/android/setup/) to install _fastlane._ Managed google play support is included with fastlane. + +### Enable Private Apps — Get the Developer Account Id + +This [guide](https://developers.google.com/android/work/play/custom-app-api/get-started) shows the steps to create private apps which requires creating an OAuth callback to receive the developerAccount id. There are two methods for enabling private apps: using fastlane or using the API. Here’s how to use each and their level of difficulty: + +#### Use fastlane — Easy + +``` +> fastlane run get_managed_play_store_publishing_rights +``` + +**Example Output:** + +``` +[13:20:46]: To obtain publishing rights for custom apps on Managed Play Store, open the following URL and log in: + +[13:20:46]: https://play.google.com/apps/publish/delegatePrivateApp?service_account=SERVICE-ACCOUNT-EMAIL.iam.gserviceaccount.com&continueUrl=https://fastlane.github.io/managed_google_play-callback/callback.html + +[13:20:46]: ([Cmd/Ctrl] + [Left click] lets you open this URL in many consoles/terminals/shells) + +[13:20:46]: After successful login you will be redirected to a page which outputs some information that is required for usage of the `create_app_on_managed_play_store` action. +``` + +Pasting the link into a web browser and authenticating with your account owner of the managed play account will send forward + +#### Use the API — Moderate + +**If** you don’t plan to build a web user interface for managing your apps, you can use this basic node script below and launch with Firebase functions to quickly and easily get the developerAccountId. If you don’t care, you can set the continueUrl to [https://foo.bar](https://foo.bar) (or another fake url) to get the developerAccountId although this is not recommended for security purposes. + +**Cloud Functions for Firebase setup** + +This [guide](https://firebase.google.com/docs/functions/get-started) shows how to set up cloud functions. The following code can be used for the endpoint. + +``` +const functions = require('firebase-functions'); + +exports.oauthcallback = functions.https.onRequest((request, response) => { + response.send(request.query.developerAccount); +}); +``` + +functions/index.js + +### Create Private App Listing + +#### Use fastlane — Easy + +``` + ENV['SUPPLY_JSON_KEY'] = 'key.json' + ENV['SUPPLY_DEVELOPER_ACCOUNT_ID'] = '111111111111000000000' + ENV['SUPPLY_APP_TITLE'] = 'APP TITLE' + desc "Create the private app on the Google Play store" + lane :create_private_app do + gradle( + task: 'assemble', + build_type: 'Release' + ) + + # Finds latest APK + apk_path = Actions.lane_context[SharedValues::GRADLE_APK_OUTPUT_PATH] + + create_app_on_managed_play_store( + json_key: ENV['SUPPLY_JSON_KEY'], + developer_account_id: ENV['SUPPLY_DEVELOPER_ACCOUNT_ID'], + app_title: ENV['SUPPLY_APP_TITLE'], + language: "en_US", + apk: apk_path + ) + end +``` + +Example Fastfile + +``` +> fastlane create_private_app +``` + +#### Use the API — Moderate + +API [documentation](https://developers.google.com/android/work/play/custom-app-api/publish). 
Client libraries are available in [Java](https://developers.google.com/api-client-library/java/apis/playcustomapp/v1), [Python](https://developers.google.com/api-client-library/python/apis/playcustomapp/v1), [C#](https://developers.google.com/api-client-library/dotnet/apis/playcustomapp/v1), and [Ruby](https://developers.google.com/api-client-library/ruby/apis/playcustomapp/v1). + +#### API Example + +Written in Ruby, this sample code authenticates with a [Google service account](https://developers.google.com/android/work/play/custom-app-api/get-started#create_a_service_account) json keyfile and then calls the Play Custom App Service to create and upload the first version of a private APK. This code is only used for the first time an app is created, and subsequent updates should use the upload apk functionality in the Play Publishing API. + +``` +require "google/apis/playcustomapp_v1" + +# Auth Info +KEYFILE = "KEYFILE.json" # PATH TO JSON KEYFILE +DEVELOPER_ACCOUNT = "DEVELOPER_ACCOUNT_ID" # DEVELOPER ACCOUNT ID + +# App Info +APK_PATH = "FILE_NAME.apk" # PATH TO SIGNED APK WITH V1+V2 SIGNATURES +APP_TITLE = "APP TITLE" +LANGUAGE_CODE = "EN_US" + +scope = "https://www.googleapis.com/auth/androidpublisher" +credentials = JSON.parse(File.open(KEYFILE, "rb").read) +authorization = Signet::OAuth2::Client.new( + :token_credential_uri => "https://oauth2.googleapis.com/token", + :audience => "https://oauth2.googleapis.com/token", + :scope => scope, + :issuer => credentials["client_id"], + :signing_key => OpenSSL::PKey::RSA.new(credentials["private_key"], nil), +) +authorization.fetch_access_token! + +custom_app = Google::Apis::PlaycustomappV1::CustomApp.new title: APP_TITLE, language_code: LANGUAGE_CODE +play_custom_apps = Google::Apis::PlaycustomappV1::PlaycustomappService.new +play_custom_apps.authorization = authorization + +play_custom_apps.create_account_custom_app( + DEVELOPER_ACCOUNT, + custom_app, + upload_source: APK_PATH, +) do |created_app, error| + unless error.nil? + puts "Error: #{error}" + else + puts "Success: #{created_app}." + end +end +``` + +### Updating Private Apps + +Once a private app has been created, the [Google Play Publishing API](https://developers.google.com/android-publisher/) can push new APKs after the initial creation of the Play store listing. _fastlane_ supports this feature to upload new APKs to Play, and more info can be found [here](https://docs.fastlane.tools/getting-started/android/release-deployment/). + +### Deployment to users + +Managed Google Play requires an EMM (Enterprise Mobility Management) system to distribute apps to users. More information [here](https://support.google.com/googleplay/work/answer/6145139?hl=en). + +It has never been easier to deploy and manage your private enterprise apps. Both methods of deploying apps through Managed Google Play are viable, it all comes down to you your CI system and if you want to write any code. Give [fastlane](https://fastlane.tools) a shot, and it should save you tons of time. + +If you run into any issues, bugs can be filed against fastlane on [github](https://github.com/fastlane/fastlane/issues). 
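+
+As a closing sketch (not taken from the original post), subsequent APK updates to an existing private app can be pushed with the standard `upload_to_play_store` action (formerly `supply`); the package name, key path, and track below are placeholders:
+
+```
+  desc "Push an updated APK to an existing private app"
+  lane :update_private_app do
+    gradle(
+      task: 'assemble',
+      build_type: 'Release'
+    )
+
+    upload_to_play_store(
+      package_name: 'com.example.privateapp', # placeholder
+      json_key: ENV['SUPPLY_JSON_KEY'],
+      apk: Actions.lane_context[SharedValues::GRADLE_APK_OUTPUT_PATH],
+      track: 'production'
+    )
+  end
+```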
+ +> 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 + + +--- + +> [掘金翻译计划](https://github.com/xitu/gold-miner) 是一个翻译优质互联网技术文章的社区,文章来源为 [掘金](https://juejin.im) 上的英文分享文章。内容覆盖 [Android](https://github.com/xitu/gold-miner#android)、[iOS](https://github.com/xitu/gold-miner#ios)、[前端](https://github.com/xitu/gold-miner#前端)、[后端](https://github.com/xitu/gold-miner#后端)、[区块链](https://github.com/xitu/gold-miner#区块链)、[产品](https://github.com/xitu/gold-miner#产品)、[设计](https://github.com/xitu/gold-miner#设计)、[人工智能](https://github.com/xitu/gold-miner#人工智能)等领域,想要查看更多优质译文请持续关注 [掘金翻译计划](https://github.com/xitu/gold-miner)、[官方微博](http://weibo.com/juejinfanyi)、[知乎专栏](https://zhuanlan.zhihu.com/juejinfanyi)。 From 9709477b62aa9f175385dc211800be281e675ca6 Mon Sep 17 00:00:00 2001 From: LeviDing Date: Wed, 9 Jan 2019 13:56:49 +0800 Subject: [PATCH 40/54] Create asynchronous-tasks-with-flask-and-redis-queue.md --- ...ronous-tasks-with-flask-and-redis-queue.md | 446 ++++++++++++++++++ 1 file changed, 446 insertions(+) create mode 100644 TODO1/asynchronous-tasks-with-flask-and-redis-queue.md diff --git a/TODO1/asynchronous-tasks-with-flask-and-redis-queue.md b/TODO1/asynchronous-tasks-with-flask-and-redis-queue.md new file mode 100644 index 00000000000..7588c14d156 --- /dev/null +++ b/TODO1/asynchronous-tasks-with-flask-and-redis-queue.md @@ -0,0 +1,446 @@ +> * 原文地址:[Asynchronous Tasks with Flask and Redis Queue](https://testdriven.io/blog/asynchronous-tasks-with-flask-and-redis-queue/) +> * 原文作者:[Michael Herman](https://testdriven.io/authors/herman/) +> * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) +> * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/asynchronous-tasks-with-flask-and-redis-queue.md](https://github.com/xitu/gold-miner/blob/master/TODO1/asynchronous-tasks-with-flask-and-redis-queue.md) +> * 译者: +> * 校对者: + +# Asynchronous Tasks with Flask and Redis Queue + +![](https://testdriven.io/static/images/blog/flask-rq/aysnc_python_redis.png) + +If a long-running task is part of your application's workflow you should handle it in the background, outside the normal flow. + +Perhaps your web application requires users to submit a thumbnail (which will probably need to be re-sized) and confirm their email when they register. If your application processed the image and sent a confirmation email directly in the request handler, then the end user would have to wait for them both to finish. Instead, you'll want to pass these tasks off to a task queue and let a separate worker process deal with it, so you can immediately send a response back to the client. The end user can do other things on the client-side and your application is free to respond to requests from other users. + +This post looks at how to configure [Redis Queue](http://python-rq.org/) (RQ) to handle long-running tasks in a Flask app. + +> Celery is a viable solution as well. It's quite a bit more complex and brings in more dependencies than Redis Queue, though. + +## Contents + +- [Asynchronous Tasks with Flask and Redis Queue](#asynchronous-tasks-with-flask-and-redis-queue) + - [Contents](#contents) + - [Objectives](#objectives) + - [Workflow](#workflow) + - [Project Setup](#project-setup) + - [Trigger a Task](#trigger-a-task) + - [Redis Queue](#redis-queue) + - [Task Status](#task-status) + - [Dashboard](#dashboard) + - [Conclusion](#conclusion) + +## Objectives + +By the end of this post you should be able to: + +1. 
Integrate Redis Queue into a Flask app and create tasks.
+2. Containerize Flask and Redis with Docker.
+3. Run long-running tasks in the background with a separate worker process.
+4. Set up [RQ Dashboard](https://github.com/eoranged/rq-dashboard) to monitor queues, jobs, and workers.
+5. Scale the worker count with Docker.
+
+## Workflow
+
+Our goal is to develop a Flask application that works in conjunction with Redis Queue to handle long-running processes outside the normal request/response cycle.
+
+1. The end user kicks off a new task via a POST request to the server-side
+2. Within the view, a task is added to the queue and the task id is sent back to the client-side
+3. Using AJAX, the client continues to poll the server to check the status of the task while the task itself is being run in the background
+
+![flask and redis queue user flow](https://testdriven.io/static/images/blog/flask-rq/flask-rq-flow.png)
+
+In the end, the app will look like this:
+
+![final app](https://testdriven.io/static/images/blog/flask-rq/app.gif)
+
+## Project Setup
+
+Want to follow along? Clone down the base project, and then review the code and project structure:
+
+```
+$ git clone https://github.com/mjhea0/flask-redis-queue --branch base --single-branch
+$ cd flask-redis-queue
+```
+
+Since we’ll need to manage three processes in total (Flask, Redis, worker), we’ll use Docker to simplify our workflow by wiring them all together to run in one terminal window.
+
+To test, run:
+
+```
+$ docker-compose up -d --build
+```
+
+Open your browser to [http://localhost:5004](http://localhost:5004). You should see:
+
+![flask, redis queue, docker](https://testdriven.io/static/images/blog/flask-rq/flask_redis_queue.png)
+
+## Trigger a Task
+
+An event handler is set up in _project/client/static/main.js_ that listens for a button click and sends an AJAX POST request to the server with the appropriate task type - `1`, `2`, or `3`.
+
+```
+$('.btn').on('click', function() {
+  $.ajax({
+    url: '/tasks',
+    data: { type: $(this).data('type') },
+    method: 'POST'
+  })
+  .done((res) => {
+    getStatus(res.data.task_id)
+  })
+  .fail((err) => {
+    console.log(err)
+  });
+});
+```
+
+On the server-side, a view is already configured to handle the request in _project/server/main/views.py_:
+
+```
+@main_blueprint.route('/tasks', methods=['POST'])
+def run_task():
+    task_type = request.form['type']
+    return jsonify(task_type), 202
+```
+
+We just need to wire up Redis Queue.
+
+## Redis Queue
+
+So, we need to spin up two new processes: Redis and a worker. Add them to the _docker-compose.yml_ file:
+
+```
+version: '3.7'
+
+services:
+
+  web:
+    build: . 
+ image: web + container_name: web + ports: + - '5004:5000' + command: python manage.py run -h 0.0.0.0 + volumes: + - .:/usr/src/app + environment: + - FLASK_DEBUG=1 + - APP_SETTINGS=project.server.config.DevelopmentConfig + depends_on: + - redis + + worker: + image: web + command: python manage.py run_worker + volumes: + - .:/usr/src/app + environment: + - APP_SETTINGS=project.server.config.DevelopmentConfig + depends_on: + - redis + + redis: + image: redis:4.0.11-alpine +``` + +Add the task to a new file called _tasks.py_ in "project/server/main": + +``` +# project/server/main/tasks.py + +import time + +def create_task(task_type): + time.sleep(int(task_type) * 10) + return True +``` + +Update the view to connect to Redis, enqueue the task, and respond with the id: + +``` +@main_blueprint.route('/tasks', methods=['POST']) +def run_task(): + task_type = request.form['type'] + with Connection(redis.from_url(current_app.config['REDIS_URL'])): + q = Queue() + task = q.enqueue(create_task, task_type) + response_object = { + 'status': 'success', + 'data': { + 'task_id': task.get_id() + } + } + return jsonify(response_object), 202 +``` + +Don't forget the imports: + +``` +import redis +from rq import Queue, Connection +from flask import render_template, Blueprint, jsonify, \ + request, current_app + +from project.server.main.tasks import create_task +``` + +Update `BaseConfig`: + +``` +class BaseConfig(object): + """Base configuration.""" + WTF_CSRF_ENABLED = True + REDIS_URL = 'redis://redis:6379/0' + QUEUES = ['default'] +``` + +Did you notice that we referenced the `redis` service (from _docker-compose.yml_) in the `REDIS_URL` rather than `localhost` or some other IP? Review the Docker Compose [docs](https://docs.docker.com/compose/networking/) for more info on connecting to other services via the hostname. + +Finally, we can use a Redis Queue [worker](http://python-rq.org/docs/workers/), to process tasks at the top of the queue. + +``` +@cli.command('run_worker') +def run_worker(): + redis_url = app.config['REDIS_URL'] + redis_connection = redis.from_url(redis_url) + with Connection(redis_connection): + worker = Worker(app.config['QUEUES']) + worker.work() +``` + +Here, we set up a custom CLI command to fire the worker. + +It's important to note that the `@cli.command()` decorator will provide access to the application context along with the associated config variables from _project/server/config.py_ when the command is executed. + +Add the imports as well: + +``` +import redis +from rq import Connection, Worker +``` + +Add the dependencies to the requirements file: + +``` +redis==2.10.6 +rq==0.12.0 +``` + +Build and spin up the new containers: + +``` +$ docker-compose up -d --build +``` + +To trigger a new task, run: + +``` +$ curl -F type=0 http://localhost:5004/tasks +``` + +You should see something like: + +``` +{ + "data": { + "task_id": "bdad64d0-3865-430e-9cc3-ec1410ddb0fd" + }, + "status": "success" +} +Ta +``` + +## Task Status + +Turn back to the event handler on the client-side: + +``` +$('.btn').on('click', function() { + $.ajax({ + url: '/tasks', + data: { type: $(this).data('type') }, + method: 'POST' + }) + .done((res) => { + getStatus(res.data.task_id) + }) + .fail((err) => { + console.log(err) + }); +}); +``` + +Once the response comes back from the original AJAX request, we then continue to call `getStatus()` with the task id every second. If the response is successful, a new row is added to the table on the DOM. 
+ +``` +function getStatus(taskID) { + $.ajax({ + url: `/tasks/${taskID}`, + method: 'GET' + }) + .done((res) => { + const html = ` + + ${res.data.task_id} + ${res.data.task_status} + ${res.data.task_result} + ` + $('#tasks').prepend(html); + const taskStatus = res.data.task_status; + if (taskStatus === 'finished' || taskStatus === 'failed') return false; + setTimeout(function() { + getStatus(res.data.task_id); + }, 1000); + }) + .fail((err) => { + console.log(err); + }); +} +``` + +Update the view: + +``` +@main_blueprint.route('/tasks/', methods=['GET']) +def get_status(task_id): + with Connection(redis.from_url(current_app.config['REDIS_URL'])): + q = Queue() + task = q.fetch_job(task_id) + if task: + response_object = { + 'status': 'success', + 'data': { + 'task_id': task.get_id(), + 'task_status': task.get_status(), + 'task_result': task.result, + } + } + else: + response_object = {'status': 'error'} + return jsonify(response_object) +``` + +Add a new task to the queue: + +``` +$ curl -F type=1 http://localhost:5004/tasks +``` + +Then, grab the `task_id` from the response and call the updated endpoint to view the status: + +``` +$ curl http://localhost:5004/tasks/5819789f-ebd7-4e67-afc3-5621c28acf02 + +{ + "data": { + "task_id": "5819789f-ebd7-4e67-afc3-5621c28acf02", + "task_result": true, + "task_status": "finished" + }, + "status": "success" +} +``` + +Test it out in the browser as well: + +![flask, redis queue, docker](https://testdriven.io/static/images/blog/flask-rq/flask_redis_queue_updated.png) + +## Dashboard + +[RQ Dashboard](https://github.com/eoranged/rq-dashboard) is a lightweight, web-based monitoring system for Redis Queue. + +To set up, first add a new directory to the "project" directory called "dashboard". Then, add a new _Dockerfile_ to that newly created directory: + +``` +FROM python:3.7.0-alpine + +RUN pip install rq-dashboard + +EXPOSE 9181 + +CMD ["rq-dashboard"] +``` + +Simply add the service to the _docker-compose.yml_ file like so: + +``` +version: '3.7' + +services: + + web: + build: . + image: web + container_name: web + ports: + - '5004:5000' + command: python manage.py run -h 0.0.0.0 + volumes: + - .:/usr/src/app + environment: + - FLASK_DEBUG=1 + - APP_SETTINGS=project.server.config.DevelopmentConfig + depends_on: + - redis + + worker: + image: web + command: python manage.py run_worker + volumes: + - .:/usr/src/app + environment: + - APP_SETTINGS=project.server.config.DevelopmentConfig + depends_on: + - redis + + redis: + image: redis:4.0.11-alpine + + dashboard: + build: ./project/dashboard + image: dashboard + container_name: dashboard + ports: + - '9181:9181' + command: rq-dashboard -H redis +``` + +Build the image and spin up the container: + +``` +$ docker-compose up -d --build +``` + +Navigate to [http://localhost:9181](http://localhost:9181) to view the dashboard: + +![rq dashboard](https://testdriven.io/static/images/blog/flask-rq/rq_dashboard.png) + +Kick off a few jobs to fully test the dashboard: + +![rq dashboard](https://testdriven.io/static/images/blog/flask-rq/rq_dashboard_in_action.png) + +Try adding a few more workers to see how that affects things: + +``` +$ docker-compose up -d --build --scale worker=3 +``` + +## Conclusion + +This has been a basic guide on how to configure Redis Queue to run long-running tasks in a Flask app. You should let the queue handle any processes that could block or slow down the user-facing code. + +Looking for some challenges? + +1. 
Spin up [Digital Ocean](https://m.do.co/c/d8f211a4b4c2) and deploy this application across a number of droplets using Docker Swarm.
+2. Write unit tests for the new endpoints. (Mock out the Redis instance with [fakeredis](https://github.com/jamesls/fakeredis))
+3. Instead of polling the server, try using [Flask-SocketIO](https://flask-socketio.readthedocs.io) to open up a websocket connection.
+
+Grab the code from the [repo](https://github.com/mjhea0/flask-redis-queue).
+
+> 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。
+
+
+---
+
+> [掘金翻译计划](https://github.com/xitu/gold-miner) 是一个翻译优质互联网技术文章的社区,文章来源为 [掘金](https://juejin.im) 上的英文分享文章。内容覆盖 [Android](https://github.com/xitu/gold-miner#android)、[iOS](https://github.com/xitu/gold-miner#ios)、[前端](https://github.com/xitu/gold-miner#前端)、[后端](https://github.com/xitu/gold-miner#后端)、[区块链](https://github.com/xitu/gold-miner#区块链)、[产品](https://github.com/xitu/gold-miner#产品)、[设计](https://github.com/xitu/gold-miner#设计)、[人工智能](https://github.com/xitu/gold-miner#人工智能)等领域,想要查看更多优质译文请持续关注 [掘金翻译计划](https://github.com/xitu/gold-miner)、[官方微博](http://weibo.com/juejinfanyi)、[知乎专栏](https://zhuanlan.zhihu.com/juejinfanyi)。

From 459e660d74fcec671d7a42f20786a42b987786a4 Mon Sep 17 00:00:00 2001
From: LeviDing 
Date: Wed, 9 Jan 2019 14:16:44 +0800
Subject: [PATCH 41/54] Create dependencies-ios-carthage.md

---
 TODO1/dependencies-ios-carthage.md | 133 +++++++++++++++++++++++++++++
 1 file changed, 133 insertions(+)
 create mode 100644 TODO1/dependencies-ios-carthage.md

diff --git a/TODO1/dependencies-ios-carthage.md b/TODO1/dependencies-ios-carthage.md
new file mode 100644
index 00000000000..234ebc55648
--- /dev/null
+++ b/TODO1/dependencies-ios-carthage.md
@@ -0,0 +1,133 @@
+> * 原文地址:[Building Dependencies on iOS with Carthage](https://appunite.com/blog/dependencies-ios-carthage)
+> * 原文作者:[Szymon Mrozek](https://appunite.com/blog/author/szymon-mrozek)
+> * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner)
+> * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/dependencies-ios-carthage.md](https://github.com/xitu/gold-miner/blob/master/TODO1/dependencies-ios-carthage.md)
+> * 译者:
+> * 校对者:
+
+# Building Dependencies on iOS with Carthage
+
+## Lovely Carthage
+
+In this article I want to share my experience with building dependencies by using Carthage. First of all, Carthage shines with simplicity. It’s **very** simple to start using an external dependency in an Xcode project just by adding the proper line to `Cartfile` and running `carthage update`. But as we all know, life is brutal and sometimes we need to consider more complex examples.
+
+Let’s assume there is a team of iOS developers. Tony, John and Keith are working on an iOS application with **~15** popular dependencies like Alamofire, Kingfisher, ReactiveCocoa etc…
+
+### What problems might they meet?
+
+* **Different compiler** - some of the libraries are written in Swift, which means frameworks built with different compiler versions are incompatible with each other. This might be a huge problem if the developers use different versions of Xcode. Each of them needs to build his own versions of the frameworks, or they all need to use the same version of Xcode.
+* **Clean build time** - this has been a hot topic recently; sometimes we need to care about build time, especially on CI and while switching between branches. The team decided that they don’t want to spend something like an hour waiting for a release to be built, so this issue might be critical. 
+* **Repository size** - some developers prefer to include compiled frameworks in the repository. The team is using the free GitHub plan, so their maximum repository size is 1GB. Storing frameworks in the repo can lead to a huge increase in its size, even to around 5GB. Even if the repo storage limit were not a problem, cloning such a repository would take a **lot of time**. This can have a huge influence on clean build time, especially when using CI with virtual machines.
+* **Updating frameworks** - without some extra work, Carthage recompiles **all** frameworks when you run `carthage update`, or one framework if you run it for a single dependency. At the beginning of a project we do that very often. The team is looking for a solution to speed this up too.
+
+_There is no free lunch…_ I agree, but at the same time I believe it’s sometimes worth spending some time improving your everyday tools. I’ve spent **a lot** of time experimenting with dependency managers, caching their artifacts etc… Let me tell you about three popular solutions for maintaining Carthage frameworks.
+
+**Before you begin**
+
+* If you’re not familiar with Carthage, please take a look at its [repository](https://github.com/Carthage/Carthage) first.
+* I won’t consider storing Carthage frameworks directly in the repository.
+
+## Naive approach
+
+Let the story begin … Tony is a team leader and he decided to use Carthage as a dependency manager. He defined some rules for the other developers when working with external frameworks:
+
+* Add Carthage/Build to `.gitignore` and include `Carthage/Checkouts` in the repository,
+* When cloning the repository for the first time - you need to run `carthage bootstrap` (rebuild all dependencies). CI would need to run that for each pipeline,
+* When updating, please only update one framework at a time, like `carthage update ReactiveSwift`.
+
+Those are very simple rules, but what about their pros and cons?
+
+### Pros:
+
+* Free (costs `0$` per month)
+* Repository size would never increase dramatically
+
+### Cons:
+
+* Very long clean builds
+* Absolutely no reuse of pre-compiled frameworks
+* Others’ code in your repository
+
+Let’s compare this solution to the problems that might occur:
+
+![Naive approach](https://www.dropbox.com/s/ua43u6h5k5p094a/lfs-table.png?raw=1)
+
+To sum up: their biggest problem with this approach is **time**. The only fully solved problem is repository size. CI build time would be very long and would increase proportionally with the number of dependencies. As you can see there is still a lot to improve. Let’s try something different…
+
+## Git LFS
+
+One day, one of the developers - John - found that GitHub allows storing large files in LFS (Large File Storage). He noticed that this might be a great opportunity to start including pre-compiled frameworks in the git repo while still keeping it small. He modified Tony’s rules a little:
+
+* Add **both** `Carthage/Build` **and** `Carthage/Checkouts` to `.gitignore`,
+* When cloning the repository for the first time - you **don’t** need to run `carthage bootstrap` (rebuild all dependencies), but you need to extract the frameworks from LFS,
+* When updating, please only update one framework at a time, like `carthage update ReactiveSwift`, **some extra work is needed** \- you need to archive those frameworks, zip them and upload them to git-lfs (add them to `.gitattributes`),
+* **All team members** must have the same Swift compiler version (Xcode version).
+
+This solution is much more complicated, especially because of the extra steps of zipping and uploading frameworks. 
There is a [great article](https://medium.com/@rajatvig/speeding-up-carthage-for-ios-applications-50e8d0a197e1) that describes this and offers a simple `Makefile` to automate this step.
+
+### Pros:
+
+* Repository size still not growing
+* After cloning and extracting you’re ready to go
+
+### Cons:
+
+* In most cases not free (costs `5$` per month after reaching 1GB on LFS)
+* Each developer must work with the same Xcode version
+* No mechanism for speeding up framework updates
+
+Let’s compare this solution to the problems defined at the beginning of the article:
+
+![LFS approach](https://www.dropbox.com/s/wddhmpli1yyiqgv/naive-table.png?raw=1)
+
+All in all, I think this looks much better! Having fast clean builds is much more important for most teams than the possibility of developers using different Xcode versions. They are still able to have different versions installed and only switch between them for specific projects. I believe `5$` per month for LFS is not a big deal. So it’s a much better (though more difficult) solution, but there is still some room for improvement …
+
+## Rome
+
+So, time for Keith to show up. He appreciates the other developers’ research, but Keith cares a lot about teamwork. He wondered whether it might be possible to share pre-compiled frameworks, built with different versions of the Swift compiler, between different projects. That’s a lot of variety, but fortunately there is a tool for that! It’s called `Rome`. I highly encourage you to take a look at the documentation on [github](https://github.com/blender/Rome). In general this tool shares frameworks using an Amazon S3 bucket. Again, Keith changed the rules:
+
+* Add **both** `Carthage/Build` **and** `Carthage/Checkouts` to `.gitignore`,
+* When cloning the repository for the first time - you **don’t** need to run `carthage bootstrap` (rebuild all dependencies), but you do need to download them from Amazon S3,
+* When updating a framework, please only update that one framework **version**, like `carthage update ReactiveSwift --no-build`, then try to download it from Amazon, and if it does not exist, build it and upload it,
+* You need to define a `RepositoryMap` which tells Rome which dependencies compiled by Carthage you use.
+
+With a **very simple** helper script, those rules are almost as simple as the ones from the `Naive approach` section. I’m very impressed by this tool, especially by the ratio of required setup work to the benefits it provides. Let’s see the pros and cons of this solution:
+
+### Pros:
+
+* Repository size still not growing
+* After cloning and downloading you’re ready to go
+* Share frameworks between all company developers (very simple framework updates, because someone has possibly already compiled the proper version for you)
+* Feel free to use different versions of Xcode
+* Better knowledge of the dependencies you use, thanks to the `RepositoryMap`
+* Ability to schedule dependency builds on CI and then use them locally
+
+### Cons:
+
+* Not free, but it’s still cheaper than **LFS** (`$0.023 / GB`)
+
+And comparison with an obvious result:
+
+![Rome approach](https://www.dropbox.com/s/9ffe5v1gxkvo7nx/rome-table.png?raw=1)
+
+In my opinion this solution is the one that saves you a lot of hours spent on dependency management. Of course, sometimes you’ll still need to build on your machine / CI, but you have to make sure that this work will be reused. 
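+
+To make the Rome workflow above more concrete, here is a rough sketch of the kind of helper script Keith could use on CI or locally. It is an illustration, not something from the original rules: it assumes a configured `Romefile` and Rome’s documented `download`, `list` and `upload` subcommands, and the exact flags should be double-checked against the Rome README.
+
+```
+#!/bin/bash
+set -euo pipefail
+
+# Pull every pre-built framework that is already in the shared S3 cache.
+rome download --platform iOS
+
+# Ask Rome which dependencies are still missing locally...
+missing=$(rome list --missing --platform iOS | awk '{print $1}')
+
+# ...build only those with Carthage, then publish them for the rest of the team.
+if [ -n "$missing" ]; then
+  echo "$missing" | xargs carthage bootstrap --platform iOS --cache-builds
+  rome upload --platform iOS
+fi
+```
+
+Running something like this as the first build step is what keeps the “build once, reuse everywhere” promise that makes Rome attractive.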
+ +## Recap + +So you already noticed that I believe Rome is the best solution for now and I highly encourage you to use this, but the story shows that there is always something we can improve. You should experiment with different approaches and pick the one that solves your problems. I believe that during reading a story of Tony, John and Keith, you noticed more than just the best friend of Carthage (Rome). It’s about team work and improving team workflow. Those guys tried all the time to solve the problem of working together (with CI as a virtual fourth team member) and finally one of them found a solution that fits ideally to their needs! + +### Useful links: + +* [Carthage github](https://github.com/Carthage/Carthage) +* [Git LFS](https://git-lfs.github.com) +* [Medium article about Carthage + LFS](https://medium.com/@rajatvig/speeding-up-carthage-for-ios-applications-50e8d0a197e1) +* [BFG - tool for migrating to LFS](https://github.com/rtyley/bfg-repo-cleaner/releases/tag/v1.12.5) +* [Rome github](https://github.com/blender/Rome) +* [AWS credentials](https://aws.amazon.com/blogs/security/a-new-and-standardized-way-to-manage-credentials-in-the-aws-sdks) + +> 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 + + +--- + +> [掘金翻译计划](https://github.com/xitu/gold-miner) 是一个翻译优质互联网技术文章的社区,文章来源为 [掘金](https://juejin.im) 上的英文分享文章。内容覆盖 [Android](https://github.com/xitu/gold-miner#android)、[iOS](https://github.com/xitu/gold-miner#ios)、[前端](https://github.com/xitu/gold-miner#前端)、[后端](https://github.com/xitu/gold-miner#后端)、[区块链](https://github.com/xitu/gold-miner#区块链)、[产品](https://github.com/xitu/gold-miner#产品)、[设计](https://github.com/xitu/gold-miner#设计)、[人工智能](https://github.com/xitu/gold-miner#人工智能)等领域,想要查看更多优质译文请持续关注 [掘金翻译计划](https://github.com/xitu/gold-miner)、[官方微博](http://weibo.com/juejinfanyi)、[知乎专栏](https://zhuanlan.zhihu.com/juejinfanyi)。 From b2e185d140f9797c52a14bac7154e2aabb46809d Mon Sep 17 00:00:00 2001 From: xilihuasi <2857818553@qq.com> Date: Wed, 9 Jan 2019 17:31:49 +0800 Subject: [PATCH 42/54] =?UTF-8?q?CSS=20Shapes=20=E7=AE=80=E4=BB=8B=20(#495?= =?UTF-8?q?7)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Update an-introduction-to-css-shapes.md initial commit * Update an-introduction-to-css-shapes.md * Update an-introduction-to-css-shapes.md * Update an-introduction-to-css-shapes.md * Update an-introduction-to-css-shapes.md finish translate * Update an-introduction-to-css-shapes.md 根据校对者意见修改 * Update an-introduction-to-css-shapes.md --- TODO1/an-introduction-to-css-shapes.md | 135 +++++++++++++------------ 1 file changed, 68 insertions(+), 67 deletions(-) diff --git a/TODO1/an-introduction-to-css-shapes.md b/TODO1/an-introduction-to-css-shapes.md index 6117c53d646..c80bd01dc99 100644 --- a/TODO1/an-introduction-to-css-shapes.md +++ b/TODO1/an-introduction-to-css-shapes.md @@ -2,56 +2,56 @@ > * 原文作者:[Tania Rascia](https://tympanus.net/codrops/author/taniarascia/) > * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) > * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/an-introduction-to-css-shapes.md](https://github.com/xitu/gold-miner/blob/master/TODO1/an-introduction-to-css-shapes.md) -> * 译者: -> * 校对者: +> * 译者:[xilihuasi](https://github.com/xilihuasi) +> * 校对者:[ElizurHz](https://github.com/ElizurHz), [Moonliujk](https://github.com/Moonliujk) -# An Introduction to CSS Shapes +# CSS Shapes 简介 -CSS Shapes allow us to make interesting 
and unique layouts by defining geometric shapes, images, and gradients that text content can flow around. Learn how to use them in this tutorial. +CSS Shapes 允许我们通过定义文本内容可以环绕的几何形状、图像和渐变,来创建有趣且独特的布局。本次教程会教你如何使用它们。 ![cssshapes_featured](https://codropspz-tympanus.netdna-ssl.com/codrops/wp-content/uploads/2018/11/cssshapes_featured-1.jpg) -Until the introduction of [CSS Shapes](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Shapes), it was nearly impossible to design a magazine-esque layout with free flowing text for the web. On the contrary, web design layouts have traditionally been shaped with grids, boxes, and straight lines. +在 [CSS Shapes](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Shapes) 问世之前,为网页设计文本自由环绕的杂志式布局几乎是不可能的。相反,网页设计布局传统上一直用网格、盒子和直线构造。 -CSS Shapes allow us to define geometric shapes that text can flow around. These shapes can be circles, ellipses, simple or complex polygons, and even images and gradients. A few practical design applications of Shapes might be displaying circular text around a circular avatar, displaying text over the simple part of a full-width background image, and displaying text flowing around drop caps in an article. +CSS Shapes 允许我们定义文本环绕的几何形状。这些形状可以是圆、椭圆、简单或复杂的多边形,甚至图像和渐变。Shapes 的一些实际设计应用可能是圆形头像周围显示圆形环绕文本,全屏背景图片的简单部位上面展示文本,以及在文章中显示首字下沉。 -Now that CSS Shapes have gained widespread support across modern browsers, it’s worth taking a look into the flexibility and functionality they provide to see if they might make sense in your next design project. +现在 CSS Shapes 已经获得了现代浏览器的广泛支持,值得一看的是它们提供的灵活性和功能,以确定它们在你的下一个设计项目中是否有意义。 -> **Attention**: At the time of writing this article, [CSS Shapes](https://caniuse.com/#feat=css-shapes) have support in Firefox, Chrome, Safari, and Opera, as well as mobile browsers such as iOS Safari and Chrome for Android. Shapes do not have IE support, and are [under consideration](https://developer.microsoft.com/en-us/microsoft-edge/platform/status/shapes/) for Microsoft Edge. +> **注意**:截至攥写本文时,[CSS Shapes](https://caniuse.com/#feat=css-shapes) 支持 Firefox、Chrome、Safari 和 Opera,以及 iOS Safari 和 Chrome for Android 等移动浏览器。Shapes 不支持 IE,对 Microsoft Edge 的支持[正在考虑中](https://developer.microsoft.com/en-us/microsoft-edge/platform/status/shapes/)。 -## First Look at CSS Shapes +## CSS Shapes 初探 -The current implementation of CSS Shapes is [CSS Shapes Module Level 1](https://drafts.csswg.org/css-shapes/), which mostly revolves around the `[shape-outside](https://tympanus.net/codrops/css_reference/shape-outside/)` property. `shape-outside` defines a shape that text can flow around. +CSS Shapes 的当前实现是 [CSS Shapes Module Level 1](https://drafts.csswg.org/css-shapes/),它主要包含 `[shape-outside](https://tympanus.net/codrops/css_reference/shape-outside/)` 属性。`shape-outside` 定义了文本环绕的形状。 -Considering there is a `shape-outside` property, you might assume there is a corresponding `shape-inside` property that would contain text within a shape. A `shape-inside` property might become a reality in the future, but it is currently a draft in [CSS Shapes Module Level 2](https://drafts.csswg.org/css-shapes-2/), and is not implemented by any browser. 
+考虑到有 `shape-outside` 属性,你可能会想到还有一个相应的 `shape-inside` 属性,它包含形状内的文本。`shape-inside` 属性可能会在将来实现,目前它只是 [CSS Shapes Module Level 2](https://drafts.csswg.org/css-shapes-2/)里面的一个草案,并没有被任何浏览器实现。 -In this article, we’re going to demonstrate how to use the [](https://tympanus.net/codrops/css_reference/basic-shape/) data type and set it with shape function values, as well as setting a shape using a semi-transparent URL or gradient. +在本文中,我们将演示如何使用 [](https://tympanus.net/codrops/css_reference/basic-shape/) 数据类型,并使用形状函数值设置它,以及使用半透明 URL 或渐变设置形状。 -## Basic Shape Functions +## 基本的形状函数 -We can define all sorts of Basic Shapes in CSS by applying the following function values to the `shape-outside` property: +我们可以通过将下列函数值应用于 `shape-outside` 属性来定义 CSS 中的各种基本形状: * `circle()` * `ellipse()` * `inset()` * `polygon()` -In order to apply the `shape-outside` property to an element, the element must be floated, and have a defined height and width. Let’s go through each of the four basic shapes and demonstrate how they can be used. +要给元素设定 `shape-outside` 属性,该元素必须是浮动的并且已设定宽高。让我们逐个来看四个基本形状,并演示它们的使用方法。 -### Circle +### 圆 -We’ll start with the `circle()` function. Imagine a situation in which we have a circular avatar of an author that we want to float left, and we want the author’s description text to flow around it. Simply using a `border-radius: 50%` on the avatar element won’t be enough to get the text to make a circular shape; the text will still treat the avatar as a rectangular element. +我们将从 `circle()` 函数开始。设想如下场景,有一个圆形的作者头像,我们想让头像左浮动并且作者的描述文本环绕它。仅对头像元素使用 `border-radius: 50%` 不足以使文本呈圆形;文本仍将把头像当成矩形元素。 -With the circle shape, we can demonstrate how text can flow around a circle. +通过圆形,我们可以演示文本如何按圆形环绕。 -We’ll start by creating a `circle` class on a regular `div`, and making some paragraphs. (I used Bob Ross quotes as Lorem Ipsum text.) +首先我们在一个普通的 `div` 上创建 `circle` 样式,并且写几段文字。(我使用 Bob Ross 语录作为 Lorem Ipsum 文本。) ```

Example text...

``` -In our `circle` class, we float the element left, give it an equal `height` and `width`, and set the `shape-outside` to `circle()`. +在 `circle` 样式中,我们设置元素左浮动,设定等值的 `height` 和 `width`,并且设置 `shape-outside` 为 `circle()`。 ``` .circle { @@ -62,15 +62,15 @@ In our `circle` class, we float the element left, give it an equal `height` and } ``` -If we view the page, it will look like this. +如果我们访问页面,会看到如下场景。 ![](https://codropspz-tympanus.netdna-ssl.com/codrops/wp-content/uploads/2018/11/cssshapes_circle1.jpg) -As you can see, the text flows around the circle shape, but we don’t actually see any shape. Hovering over the element in Developer Tools will show us the actual shape that is being set. +如你所见,文本围绕圆形环绕,但是我们并没有看到任何形状。使用开发工具审查元素,我们可以看到已经设置好的实际形状。 ![](https://codropspz-tympanus.netdna-ssl.com/codrops/wp-content/uploads/2018/11/cssshapes_circle2.jpg) -At this point, you might assume that you can set a `background` color or image to the element, and you’ll see the shape. Let’s try that out. +此时,你可能会认为,给元素 `background` 设置颜色或者图片就能看到形状了。我们来试一下。 ``` .circle { @@ -82,11 +82,11 @@ At this point, you might assume that you can set a `background` color or image t } ``` -Frustratingly, setting a `background` to the `circle` just gives us a rectangle, the very thing we’ve been trying to avoid. +不幸的是,给 `circle` 设置 `background` 后会显示一个矩形,这是我们一直试图避免的事情。 ![](https://codropspz-tympanus.netdna-ssl.com/codrops/wp-content/uploads/2018/11/cssshapes_circle3.jpg) -We can clearly see the text flowing around it, yet the element itself doesn’t have a shape. If we want to actually display our shape functions, we’ll have to use the [`clip-path`](https://tympanus.net/codrops/css_reference/clip-path/) property. `clip-path` takes many of the same values as `shape-outside`, so we can give it the same `circle()` value. +我们可以清晰地看到文本在它周围环绕,但元素本身没有形状。如果我们想要真实地显示形状函数,需要使用 [`clip-path`](https://tympanus.net/codrops/css_reference/clip-path/) 属性。`clip-path` 采用许多和 `shape-outside` 相同的值,因此我们可以给它同样的 `circle()` 值。 ``` .circle { @@ -101,10 +101,11 @@ We can clearly see the text flowing around it, yet the element itself doesn’t ![](https://codropspz-tympanus.netdna-ssl.com/codrops/wp-content/uploads/2018/11/cssshapes_circle4.jpg) -> For the rest of the article, I’ll use `clip-path` to help us identify the shapes. +> 在本文剩下的部分,我将使用 `clip-path` 帮助我们辨认形状。 -The `circle()` function takes an optional parameter of radius. In our case, the default radius (_r_) is `50%`, or `100px`. Using `circle(50%)` or `circle(100px)` would produce the same result as what we’ve already done. -You might notice the text is right up against the shape. We can use the [`shape-margin`](https://tympanus.net/codrops/css_reference/shape-margin/) property to add a margin to the shape, which can be set in `px`, `em`, `%`, and any other standard CSS unit of measurement. +`circle()` 函数接收可选的 radius 参数。在本例中,默认 radius 是 `50%` 或者 `100px`。使用 `circle(50%)` 或者 `circle(100px)` 都将产生和我们已经完成样例的同样结果。 + +你可能注意到文本刚好和形状贴合。我们可以使用 [`shape-margin`](https://tympanus.net/codrops/css_reference/shape-margin/) 属性给形状添加 margin,单位可以是 `px`、`em`、`%` 和其他标准的 CSS 测量单位。 ``` .circle { @@ -118,27 +119,27 @@ You might notice the text is right up against the shape. We can use the [`shape- } ``` -Here is an example of a `25%` `circle()` radius with a `shape-margin` applied. +这里有个 `circle` radius 设置 `25%` 并且使用 `shape-margin` 的例子。 ![](https://codropspz-tympanus.netdna-ssl.com/codrops/wp-content/uploads/2018/11/cssshapes_circle5.jpg) -In addition to the radius, a shape function can take a position using `at`. 
The default position is the center of the circle, so `circle()` would explicitly be written as `circle(50% at 50% 50%)` or `circle(100px at 100px 100px)`, with the two values being the horizontal and vertical positions, respectively. +除了 radius,形状函数可以使用 `at` 定位。默认位置是圆心,因此 `circle()` 也可以被显式设置为 `circle(50% at 50% 50%)` 或 `circle(100px at 100px 100px)`,两个值分别是水平和垂直位置。 -To make it obvious how the positioning works, we could set the horizontal position value to `0` to make a perfect semi-circle. +为了搞清楚 position 的作用,我们可以设置水平位置值为 `0` 来创造一个完美的半圆。 ``` circle(50% at 0 50%); ``` -This coordinate positioning system is known as the reference box. +该坐标定位系统称为引用框。 ![](https://codropspz-tympanus.netdna-ssl.com/codrops/wp-content/uploads/2018/11/cssshapes_circle6.jpg) -Later on, we’ll learn how to use an image instead of a shape or gradient. For now, we’ll move on the to the next shape function. +稍后,我们将学习如何使用图像代替形状或者渐变。现在,我们将继续进行下一个形状函数。 -### Ellipse +### 椭圆 -Similar to the `circle()` function is the `ellipse()`, which creates an oval. To demonstrate, we can create an `ellipse` element and class. +`ellipse()` 和 `circle()` 函数类似,只是它会创造椭圆。为了演示,我们创建一个 `ellipse` 元素和样式。 ```
@@ -156,21 +157,21 @@ Similar to the `circle()` function is the `ellipse()`, which creates an oval. To } ``` -This time, we set a different `height` and `width` to make a vertically elongated oval. +这次,我们设置不同的 `height` 和 `width` 创建一个垂直拉长的椭圆。 ![](https://codropspz-tympanus.netdna-ssl.com/codrops/wp-content/uploads/2018/11/cssshapes_ellipse1.jpg) -The difference between an `ellipse()` and a `circle()` is that an ellipse has two radii – _r_x and _r_y, or the X-axis radius and Y-axis radius. Therefore, the above example can also be written as: +`ellipse()` 和 `circle()` 的区别在于椭圆有两个半径 —— `_r_x` 和 `_r_y`,或者 X 轴半径和 Y 轴半径。因此,上面的例子也可以写成: ``` ellipse(75px 150px); ``` -The position parameters are the same for circles and ellipses. The radii, in addition to being a unit of measurement, also include the options of `farthest-side` and `closest-side`. +circles 和 ellipses 的位置参数是一样的。除了是测量单位,半径也包括 `farthest-side` 和 `closest-side` 的选项。 -`closest-side` refers to the length from the center to closest side of the reference box, and conversely, `farthest-side` refers to the length from the center to the farthest side of the reference box. This means that these two values have no effect if a position other than default isn’t set. +`closest-side` 代表引用框的中心到最近侧的长度,相反,`farthest-side` 代表引用框中心到最远侧的长度。这意味着如果未设置默认值以外的位置,则这两个值无效。 -Here is a demonstration of the difference of flipping `closest-side` and `farthest-side` on an `ellipse()` with a `25%` offset on the X and Y axes. +这里演示了在 `ellipse()` 上翻转 `closest-side` 和 `farthest-side` 的区别,它的 X 和 Y 轴的偏移量是 `25%`。 ``` ellipse(farthest-side closest-side at 25% 25%) @@ -184,9 +185,9 @@ ellipse(farthest-side closest-side at 25% 25%) ![](https://codropspz-tympanus.netdna-ssl.com/codrops/wp-content/uploads/2018/11/cssshapes_ellipse3.jpg) -### Inset +### 内嵌 -So far we’ve been only been dealing with round shapes, but we can define inset rectangles with the `inset()` function. +目前为止我们只处理了圆形,但是我们可以使用 `inset()` 函数定义内嵌矩形。 ```
@@ -204,15 +205,15 @@ So far we’ve been only been dealing with round shapes, but we can define inset } ``` -In this example, we’ll create a `300px` by `300px` rectangle, and inset it by `75px` on all sides. This will leave us with a `150px` by `150px` with `75px` of space around it. +在本例中,我们创造了一个 `300px` 的正方形,每条边内嵌 `75px`。这将给我们留下 `150px` 周围有 `75px` 空间。 ![](https://codropspz-tympanus.netdna-ssl.com/codrops/wp-content/uploads/2018/11/cssshapes_inset1.jpg) -We can see that the rectangle is inset, and the text ignores the inset area. +我们可以看到矩形是内嵌的,文本忽略了内嵌区域。 ![](https://codropspz-tympanus.netdna-ssl.com/codrops/wp-content/uploads/2018/11/cssshapes_inset2.jpg) -An `inset()` shape can also take a `border-radius` with the `round` parameter, and the text will respect the rounded corners, such as this example with a `25px` on all sides and `75px` rounding. +`inset()` 形状也可以使用 `round` 参数接收 `border-radius`,并且文本会识别圆角,就像本例中所有边都是 `25px` 内嵌和 `75px` 圆角。 ``` inset(25px round 75px) @@ -220,13 +221,13 @@ inset(25px round 75px) ![](https://codropspz-tympanus.netdna-ssl.com/codrops/wp-content/uploads/2018/11/cssshapes_inset3.jpg) -Like `padding` or `margin` shorthand, the inset value will accept `top` `right` `bottom` `left` values in clockwise order (`inset(25px 25px 25px 25px)`), and only using a single value will make all four sides the same (`inset(25px)`). +像 `padding` 或 `margin` 简写,inset 值以顺时针方式(`inset(25px 25px 25px 25px)`)接收 `top` `right` `bottom` `left`,并且只传一个值将使四条边都相同(`inset(25px)`)。 -### Polygon +### 多边形 -The most interesting and flexible of the shape functions is the `polygon()`, which can take an array of `x` and `y` points to make any complex shape. Each item in the array represents _x_i _y_i, and would be written as `polygon(x1 y1, x2 y2, x3 y3...)` and so on. +形状函数最有趣和灵活的是 `polygon()`,它可以采用一系列 `x` 和 `y` 点来制作任何复杂形状。数组里的每个元素代表 _x_i _y_i,将被写成 `polygon(x1 y1, x2 y2, x3 y3...)` 等等。 -The fewest amount of point sets we can apply to a polygon is three, which will create a triangle. +我们可以为多边形设置的点集数量最少为 3,这将创建一个三角形。 ```
@@ -244,11 +245,11 @@ The fewest amount of point sets we can apply to a polygon is three, which will c } ``` -In this shape, the first point is `0 0`, the top left most point in the `div`. The second point is `0 300px`, which is the bottom left most point in the `div`. The third and final point is `200px 300px`, which is 2/3rd across the X axis and still at the bottom. The resulting shape looks like this: +在这个形状中,第一个点是 `0 0`,`div` 中最左上角的点。第二个点是 `0 300px`,它是 `div` 中最左下角的点。第三个也就是最后一个点是 `200px 300px`,它在 X 轴的 2/3 处并且也在底部。最终的形状是这样: ![](https://codropspz-tympanus.netdna-ssl.com/codrops/wp-content/uploads/2018/11/cssshapes_polygon1.jpg) -An interesting usage of the `polygon()` shape function is that text content can flow between two or more shapes. Since the `polygon()` shape is so flexible and dynamic, this is one of the biggest opportunities to make truly unique, magazine-esque layouts. In this example, we’ll put some text between two polygon shapes. +`polygon()` 形状函数的一个有趣用法是文本内容可以在两个或以上形状中环绕。因为 `polygon()` 形状是如此灵活和动态,这给我们制作真正独特的杂志式布局提供了一个最好机会。在本例中,我们将把文本放在两个多边形中。 ```
@@ -278,13 +279,13 @@ An interesting usage of the `polygon()` shape function is that text content can ![](https://codropspz-tympanus.netdna-ssl.com/codrops/wp-content/uploads/2018/11/cssshapes_polygon2.jpg) -Obviously, it would be very difficult to try to create your own complex shapes manually. Fortunately, there are several tools you can use to create polygons. Firefox has a built in editor for shapes, which you can use by clicking on the polygon shape in the Inspector. +显然,想要手动创造你自己的复杂形状是非常困难的。幸运的是,你可以用一些工具来创建多边形。Firefox 有一个内置的形状编辑器,你可以在 Inspector 中通过点击多边形使用。 ![](https://codropspz-tympanus.netdna-ssl.com/codrops/wp-content/uploads/2018/11/cssshapes_polygon3.jpg) -And for now, Chrome has some extensions you can use, such as [CSS Shapes Editor](https://chrome.google.com/webstore/detail/css-shapes-editor/nenndldnbcncjmeacmnondmkkfedmgmp?hl=en-US). +目前,Chrome 有一些你可以使用的扩展程序,比如 [CSS Shapes Editor](https://chrome.google.com/webstore/detail/css-shapes-editor/nenndldnbcncjmeacmnondmkkfedmgmp?hl=en-US)。 -Polygons can be used to cut out shapes around images or other elements. In another example, we can create a drop cap by drawing a polygon around a large letter. +多边形可以用来剪切图像或其他元素周围的形状。在另一个例子中,我们可以通过在大字母周围绘制多边形来创建首字下沉。 ```
R
@@ -308,18 +309,18 @@ Polygons can be used to cut out shapes around images or other elements. In anoth ## URLs -An exciting feature of CSS Shapes is that you don’t always have to explicitly define the shape with a shape function; you can also use a url of a semi-transparent image to define a shape, and the text will automatically flow around it. +CSS Shapes 一个令人激动的特性是你不必每次都通过形状函数明确定义;你也可以使用半透明图像的 url 来定义形状,这样文本就会自动环绕它。 -It’s important to note that the image used must be [CORS](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) compatible, otherwise you’ll get an error like one below. +重要的是要注意图像使用必须要兼容 [CORS](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS),否则你将会遇到如下错误。 ``` Access to image at 'file:///users/tania/star.png' from origin 'null' has been blocked by CORS policy: The response is invalid. ``` -Serving an image on a server from the same server will ensure you don’t get that error. +在同一个服务器上提供图像将会保证你不会遇到上面的错误。 -Unlike in the other examples, we’re going to use an `img` tag instead of a `div`. This time the CSS is simple – just put the `url()` into the `shape-outside` property, like you would with `background-image`. +与其他例子不同,我们将使用 `img` 代替 `div`。这次的 CSS 很简单——只用把 `url()` 放进 `shape-outside` 属性,就像 `background-image` 一样。 ``` @@ -337,15 +338,15 @@ Unlike in the other examples, we’re going to use an `img` tag instead of a `di ![](https://codropspz-tympanus.netdna-ssl.com/codrops/wp-content/uploads/2018/11/cssshapes_image1.jpg) -Since the image I used was a star with a transparent background, the text knew which areas were transparent and which were opaque, and aligned itself accordingly. +因为我使用了透明背景的星星图像,文本知道哪些区域是透明的哪些是不透明的,并进行自适应布局。 -## Gradients +## 渐变 -Finally, a gradient can also be used as a shape. Gradients are the same as images, and just like the image example we used above, the text will know to flow around the transparent part. +最后,渐变也可以用来当成形状。渐变和图像一样,就像我们上面用到的图像例子,文本也将知道在透明部分环绕。 -We’re going to use one new property with gradients – the [`shape-image-threshold`](https://tympanus.net/codrops/css_reference/shape-image-threshold/). The `shape-image-threshold` defines the alpha channel threshold of a shape, or what percent of the image can be transparent vs. opaque. +我们将使用渐变的一个新属性 —— [`shape-image-threshold`](https://tympanus.net/codrops/css_reference/shape-image-threshold/)。`shape-image-threshold` 定义形状的 alpha 通道阈值,或者图像透明的百分比值。 -I’m going to make a gradient example that’s a 50%/50% split of a color and transparent, and set a `shape-image-threshold` of `.5`, meaning all pixels that are over 50% opaque should be considered part of the image. +我们将制作一个渐变例子,它是 50%/50% 的颜色和透明分割,并且设置 `shape-image-threshold` 为 `.5`,意味着超过 50% 不透明的所有像素都应被视为图像的一部分。 ```
@@ -365,13 +366,13 @@ I’m going to make a gradient example that’s a 50%/50% split of a color and t ![](https://codropspz-tympanus.netdna-ssl.com/codrops/wp-content/uploads/2018/11/cssshapes_gradient1.jpg) -We can see the gradient is perfectly split diagonally at the center of opaque and transparent. +我们可以看到渐变在不透明和透明的中心对角线完美分割。 -## Conclusion +## 结论 -In this article, we learned about `shape-outside`, `shape-margin`, and `shape-image-threshold`, three properties of CSS Shapes. We also learned how to use the function values to create circles, ellipses, inset rectangles, and complex polygons that text can flow around, and demonstrated how shapes can detect the transparent parts of images and gradients. +在本文中,我们学习了 CSS Shapes 的三个属性 `shape-outside`、`shape-margin` 和 `shape-image-threshold`。我们也了解到如何使用函数值创建可供文本环绕的圆、椭圆、内嵌矩形以及复杂的多边形,并且演示了形状如何检测图像和渐变的透明部分。 -**You can find all examples of this article in the following [demo](http://tympanus.net/Tutorials/CSSShapes/). You can also [download the source files](http://tympanus.net/Tutorials/CSSShapes/CSSShapes.zip).** +**你可以在如下 [demo](http://tympanus.net/Tutorials/CSSShapes/) 中找到本文中用到的所有例子,也可以[下载源文件](http://tympanus.net/Tutorials/CSSShapes/CSSShapes.zip)。** > 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 From 26c8a7b24ef7fcc31cab3ca51e32a75a505d64f5 Mon Sep 17 00:00:00 2001 From: LeviDing Date: Fri, 11 Jan 2019 16:29:49 +0800 Subject: [PATCH 43/54] Create front-end-performance-checklist-2019-pdf-pages-1.md --- ...-performance-checklist-2019-pdf-pages-1.md | 196 ++++++++++++++++++ 1 file changed, 196 insertions(+) create mode 100644 TODO1/front-end-performance-checklist-2019-pdf-pages-1.md diff --git a/TODO1/front-end-performance-checklist-2019-pdf-pages-1.md b/TODO1/front-end-performance-checklist-2019-pdf-pages-1.md new file mode 100644 index 00000000000..7b5f95f248b --- /dev/null +++ b/TODO1/front-end-performance-checklist-2019-pdf-pages-1.md @@ -0,0 +1,196 @@ +> * 原文地址:[Front-End Performance Checklist 2019 — 1](https://www.smashingmagazine.com/2019/01/front-end-performance-checklist-2019-pdf-pages/) +> * 原文作者:[Vitaly Friedman](https://www.smashingmagazine.com/author/vitaly-friedman) +> * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) +> * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-1.md](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-1.md) +> * 译者: +> * 校对者: + +# Front-End Performance Checklist 2019 — 1 + +Let’s make 2019... fast! An annual front-end performance checklist, with everything you need to know to create fast experiences today. Updated since 2016. + +![](https://d33wubrfki0l68.cloudfront.net/07bab6a876338626943c46d654f45aabe7e0e807/47054/images/drop-caps/w.svg) ![](https://d33wubrfki0l68.cloudfront.net/af7798a3ff2553a4ee42f928f6cb9addbfc6de6f/0f7b2/images/drop-caps/character-15.svg) **Web performance is a tricky beast, isn’t it? How do we actually** know where we stand in terms of performance, and what our performance bottlenecks _exactly_ are? Is it expensive JavaScript, slow web font delivery, heavy images, or sluggish rendering? Is it worth exploring tree-shaking, scope hoisting, code-splitting, and all the fancy loading patterns with intersection observer, server push, clients hints, HTTP/2, service workers and — oh my — edge workers? 
And, most importantly, **where do we even start improving performance** and how do we establish a performance culture long-term? + +Back in the day, performance was often a mere _afterthought_. Often deferred till the very end of the project, it would boil down to minification, concatenation, asset optimization and potentially a few fine adjustments on the server’s `config` file. Looking back now, things seem to have changed quite significantly. + +Performance isn’t just a technical concern: it matters, and when baking it into the workflow, design decisions have to be informed by their performance implications. **Performance has to be measured, monitored and refined continually**, and the growing complexity of the web poses new challenges that make it hard to keep track of metrics, because metrics will vary significantly depending on the device, browser, protocol, network type and latency (CDNs, ISPs, caches, proxies, firewalls, load balancers and servers all play a role in performance). + +So, if we created an overview of all the things we have to keep in mind when improving performance — from the very start of the process until the final release of the website — what would that list look like? Below you’ll find a (hopefully unbiased and objective) **front-end performance checklist for 2019** — an updated overview of the issues you might need to consider to ensure that your response times are fast, user interaction is smooth and your sites don’t drain user’s bandwidth. + +> **[译] [2019 前端性能优化年度总结 — 第一部分](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-1.md)** +> [译] [2019 前端性能优化年度总结 — 第二部分](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-2.md) +> [译] [2019 前端性能优化年度总结 — 第三部分](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-3.md) +> [译] [2019 前端性能优化年度总结 — 第四部分](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-4.md) +> [译] [2019 前端性能优化年度总结 — 第五部分](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-5.md) +> [译] [2019 前端性能优化年度总结 — 第六部分](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-6.md) + +#### Table Of Contents + +- [Getting Ready: Planning And Metrics](#getting-ready-planning-and-metrics) + - [1. Establish a performance culture](#1-establish-a-performance-culture) + - [2. Goal: Be at least 20% faster than your fastest competitor](#2-goal-be-at-least-20-faster-than-your-fastest-competitor) + - [3. Choose the right metrics](#3-choose-the-right-metrics) + - [4. Gather data on a device representative of your audience](#4-gather-data-on-a-device-representative-of-your-audience) + - [5. Set up "clean" and "customer" profiles for testing](#5-set-up-%22clean%22-and-%22customer%22-profiles-for-testing) + - [6. Share the checklist with your colleagues.](#6-share-the-checklist-with-your-colleagues) + +### Getting Ready: Planning And Metrics + +Micro-optimizations are great for keeping performance on track, but it’s critical to have clearly defined targets in mind — _measurable_ goals that would influence any decisions made throughout the process. There are a couple of different models, and the ones discussed below are quite opinionated — just make sure to set your own priorities early on. + +#### 1. 
Establish a performance culture + +In many organizations, front-end developers know exactly what common underlying problems are and what loading patterns should be used to fix them. However, as long as there is no established endorsement of the performance culture, each decision will turn into a battlefield of departments, breaking up the organization into silos. You need a business stakeholder buy-in, and to get it, you need to establish a case study on how speed benefits metrics and Key Performance Indicators (_KPIs_) they care about. + +Without a strong alignment between dev/design and business/marketing teams, performance isn’t going to sustain long-term. Study common complaints coming into customer service and see how improving performance can help relieve some of these common problems. + +Run performance experiments and measure outcomes — both on mobile and on desktop. It will help you build up a company-tailored case study with real data. Furthermore, using data from case studies and experiments published on [WPO Stats](https://wpostats.com/) will help increase sensitivity for business about why performance matters, and what impact it has on user experience and business metrics. Stating that performance matters alone isn’t enough though — you also need to establish some measurable and trackable goals and observe them. + +How to get there? In her talk on [Building Performance for the Long Term](https://vimeo.com/album/4970467/video/254947097), Allison McKnight shares a comprehensive case-study of how she helped establish a performance culture at Etsy ([slides](https://speakerdeck.com/aemcknig/building-performance-for-the-long-term)). + +[![Brad Frost and Jonathan Fielding’s Performance Budget Calculator](https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/7191d628-f0a1-490c-afca-c8abcdfd4823/brad-perf-budget-builder.png)](http://bradfrost.com/blog/post/performance-budget-builder/) + +[Performance budget builder](http://bradfrost.com/blog/post/performance-budget-builder/) by Brad Frost and Jonathan Fielding’s [Performance Budget Calculator](http://www.performancebudget.io/) can help you set up your performance budget and visualize it. ([Large preview](https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/7191d628-f0a1-490c-afca-c8abcdfd4823/brad-perf-budget-builder.png)) + +#### 2. Goal: Be at least 20% faster than your fastest competitor + +According to [psychological research](https://www.smashingmagazine.com/2015/09/why-performance-matters-the-perception-of-time/#the-need-for-performance-optimization-the-20-rule), if you want users to feel that your website is faster than your competitor’s website, you need to be _at least_ 20% faster. Study your main competitors, collect metrics on how they perform on mobile and desktop and set thresholds that would help you outpace them. To get accurate results and goals though, first study your analytics to see what your users are on. You can then mimic the 90th percentile’s experience for testing. 
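+
+As a rough illustration of what “mimicking the 90th percentile” can look like in practice, you can throttle a synthetic Lighthouse run to a slow CPU and test your page and a competitor’s under identical conditions. The flags below follow the Lighthouse CLI documentation and should be checked against your installed version; the URL is a placeholder.
+
+```
+# Same throttling for both runs, so the "20% faster" comparison is apples to apples.
+lighthouse https://www.example.com \
+  --throttling-method=devtools \
+  --throttling.cpuSlowdownMultiplier=4 \
+  --output=json --output-path=./our-site.json
+```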
+
+To get a good first impression of how your competitors perform, you can [use Chrome UX Report](https://web.dev/fast/chrome-ux-report) (_CrUX_, a ready-made RUM data set, [video introduction](https://vimeo.com/254834890) by Ilya Grigorik), [Speed Scorecard](https://www.thinkwithgoogle.com/feature/mobile/) (also provides a revenue impact estimator), [Real User Experience Test Comparison](https://ruxt.dexecure.com/compare) or [SiteSpeed CI](https://www.sitespeed.io/) (based on synthetic testing).
+
+**Note**: If you use [Page Speed Insights](https://developers.google.com/speed/pagespeed/insights/) (no, it isn’t deprecated), you can get CrUX performance data for specific pages instead of just the aggregates. This data can be much more useful for setting performance targets for assets like “landing page” or “product listing”. And if you are using CI to test the budgets, you need to make sure your tested environment matches CrUX if you used CrUX for setting the target (_thanks Patrick Meenan!_).
+
+Collect data, set up a [spreadsheet](http://danielmall.com/articles/how-to-make-a-performance-budget/), shave off 20%, and set up your goals (_performance budgets_) this way. Now you have something measurable to test against. If you’re keeping the budget in mind and trying to ship down just the minimal script to get a quick time-to-interactive, then you’re on a reasonable path.
+
+Need resources to get started?
+
+* Addy Osmani has written a very detailed write-up on [how to start performance budgeting](https://medium.com/@addyosmani/start-performance-budgeting-dabde04cf6a3), how to quantify the impact of new features and where to start when you are over budget.
+
+* Lara Hogan’s [guide on how to approach designs with a performance budget](http://designingforperformance.com/weighing-aesthetics-and-performance/#approach-new-designs-with-a-performance-budget) can provide helpful pointers to designers.
+
+* Jonathan Fielding’s [Performance Budget Calculator](http://www.performancebudget.io/), Brad Frost’s [Performance Budget Builder](https://codepen.io/bradfrost/full/EPQVBp/) and [Browser Calories](https://browserdiet.com/calories/) can aid in creating budgets (thanks to [Karolina Szczur](https://medium.com/@fox/talk-the-state-of-the-web-3e12f8e413b3) for the heads up).
+
+* Also, make both the performance budget and current performance _visible_ by setting up dashboards with graphs reporting build sizes. There are many tools allowing you to achieve that: [SiteSpeed.io dashboard](https://www.peterhedenskog.com/blog/2015/04/open-source-performance-dashboard/) (open source), [SpeedCurve](http://speedcurve.com/) and [Calibre](https://calibreapp.com/) are just a few of them, and you can find more tools on [perf.rocks](http://perf.rocks/tools/).
+
+Once you have a budget in place, incorporate it into your build process [with Webpack Performance Hints and Bundlesize](https://web.dev/fast/incorporate-performance-budgets-into-your-build-tools), [Lighthouse CI](https://web.dev/fast/using-lighthouse-ci-to-set-a-performance-budget), [PWMetrics](https://github.com/paulirish/pwmetrics) or [Sitespeed CI](https://www.sitespeed.io/) to enforce budgets on pull requests and provide a score history in PR comments. If you need something custom, you can use [webpagetest-charts-api](https://github.com/trulia/webpagetest-charts-api), an API of endpoints to build charts from WebPagetest results. 
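+
+As an illustration of what enforcing such a budget can look like, here is a minimal sketch using webpack’s built-in performance hints; the 170 KB figure is only an example threshold, not a recommendation from this checklist:
+
+```
+// webpack.config.js: fail the build when an entry point exceeds the agreed budget.
+module.exports = {
+  // ...your existing entry/output/loader configuration...
+  performance: {
+    hints: 'error',                 // use 'warning' for a softer nudge
+    maxEntrypointSize: 170 * 1024,  // limits are in bytes of emitted assets
+    maxAssetSize: 170 * 1024
+  }
+};
+```
+
+The same kind of threshold can be wired into pull requests with Bundlesize or a Lighthouse CI budget, as the guides linked above describe.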

For instance, just like [Pinterest](https://medium.com/@Pinterest_Engineering/a-one-year-pwa-retrospective-f4a2f4129e05), you could create a custom _eslint_ rule that disallows importing from files and directories that are known to be dependency-heavy and would bloat the bundle. Set up a listing of “safe” packages that can be shared across the entire team.

Beyond performance budgets, think about critical customer tasks that are most beneficial to your business. Set and discuss acceptable **time thresholds for critical actions** and establish "UX ready" user timing marks that the entire organization has agreed on. In many cases, user journeys will touch on the work of many different departments, so alignment in terms of acceptable timings will help support or prevent performance discussions down the road. Make sure that additional costs of added resources and features are visible and understood.

Also, as Patrick Meenan suggested, it’s worth **planning out a loading sequence and trade-offs** during the design process. If you prioritize early on which parts are more critical, and define the order in which they should appear, you will also know what can be delayed. Ideally, that order will also reflect the sequence of your CSS and JavaScript imports, so handling them during the build process will be easier. Also, consider what the visual experience should be in "in-between"-states, while the page is being loaded (e.g. when web fonts aren’t loaded yet).

_Planning, planning, planning._ It might be tempting to get into quick "low-hanging-fruits"-optimizations early on — and eventually it might be a good strategy for quick wins — but it will be very hard to keep performance a priority without planning and setting realistic, company-tailored performance goals.

The difference between First Paint, First Contentful Paint, First Meaningful Paint, Visual Complete and Time To Interactive. [Large view](https://docs.google.com/presentation/d/1D4foHkE0VQdhcA5_hiesl8JhEGeTDRrQR4gipfJ8z7Y/present?slide=id.g21f3ab9dd6_0_33). Credit: [@denar90](https://docs.google.com/presentation/d/1D4foHkE0VQdhcA5_hiesl8JhEGeTDRrQR4gipfJ8z7Y/present?slide=id.g21f3ab9dd6_0_33)

#### 3. Choose the right metrics

[Not all metrics are equally important](https://speedcurve.com/blog/rendering-metrics/). Study what metrics matter most to your application: usually it will be related to how fast you can start rendering the _most important pixels of your product_ and how quickly you can provide input responsiveness for these rendered pixels. This knowledge will give you the best optimization target for ongoing efforts.

One way or another, rather than focusing on full page loading time (via _onLoad_ and _DOMContentLoaded_ timings, for example), prioritize page loading as perceived by your customers. That means focusing on a slightly different set of metrics. In fact, choosing the right metric is a process without obvious winners.

Based on Tim Kadlec’s research and Marcos Iglesias’ notes in [his talk](https://docs.google.com/presentation/d/e/2PACX-1vTk8geAszRTDisSIplT02CacJybNtrr6kIYUCjW3-Y_7U9kYSjn_6TbabEQDnk9Ao8DX9IttL-RD_p7/pub?start=false&loop=false&delayms=10000&slide=id.g3ccc19d32d_0_98), traditional metrics could be grouped into a few sets. Usually, we’ll need all of them to get a complete picture of performance, and in your particular case some of them might be more important than others.

* _Quantity-based metrics_ measure the number of requests, weight and a performance score. 
Good for raising alarms and monitoring changes over time, not so good for understanding user experience.

* _Milestone metrics_ use states in the lifetime of the loading process, e.g. _Time To First Byte_ and _Time To Interactive_. Good for describing the user experience and monitoring, not so good for knowing what happens between the milestones.

* _Rendering metrics_ provide an estimate of how fast content renders (e.g. _Start Render_ time, _Speed Index_). Good for measuring and tweaking rendering performance, but not so good for measuring when _important_ content appears and can be interacted with.

* _Custom metrics_ measure a particular, custom event for the user, e.g. Twitter’s [Time To First Tweet](https://blog.alexmaccaw.com/time-to-first-tweet) and Pinterest’s [PinnerWaitTime](https://medium.com/@Pinterest_Engineering/driving-user-growth-with-performance-improvements-cfc50dafadd7). Good for describing the user experience precisely, not so good for scaling the metrics and comparing them with competitors.

To complete the picture, we’d usually look out for useful metrics among all of these groups. Usually, the most specific and relevant ones are:

* [First Meaningful Paint](https://developers.google.com/web/tools/lighthouse/audits/first-meaningful-paint) _(FMP)_

  Provides the timing when primary content appears on the page, providing an insight into how quickly the server outputs _any_ data. Long FMP usually indicates JavaScript blocking the main thread, but could be related to back-end/server issues as well.

* [Time to Interactive](https://calibreapp.com/blog/time-to-interactive/) _(TTI)_

  The point at which layout has stabilized, key webfonts are visible, and the main thread is available enough to handle user input — basically the time mark when a user can interact with the UI. The key metric for understanding how long a user has to _wait_ before they can use the site without a lag.

* [First Input Delay](https://developers.google.com/web/updates/2018/05/first-input-delay) _(FID)_, or _Input responsiveness_

  The time from when a user first interacts with your site to the time when the browser is actually able to respond to that interaction. Complements TTI very well as it describes the missing part of the picture: what happens when a user actually interacts with the site. Intended as a RUM metric only. There is a [JavaScript library](https://github.com/GoogleChromeLabs/first-input-delay) for measuring FID in the browser.

* [Speed Index](https://dev.to/borisschapira/web-performance-fundamentals-what-is-the-speed-index-2m5i)

  Measures how quickly the page contents are visually populated; the lower the score, the better. The Speed Index score is computed based on the speed of visual progress, but it’s merely a computed value. It’s also sensitive to the viewport size, so you need to define a range of testing configurations that match your target audience (_thanks, [Boris](https://twitter.com/borisschapira)!_).

* CPU time spent

  A metric that indicates how busy the main thread is with processing the payload. It shows how often and how long the main thread is blocked, working on painting, rendering, scripting and loading. High CPU time is a clear indicator of a _janky_ experience, i.e. when the user experiences a noticeable lag between their action and a response. With WebPageTest, you can [select "Capture Dev Tools Timeline" on the "Chrome" tab](https://deanhume.com/ten-things-you-didnt-know-about-webpagetest-org/) to expose the breakdown of the main thread as it runs on any device using WebPageTest.

* [Ad Weight Impact](https://calendar.perfplanet.com/2017/measuring-adweight/)

  If your site depends on the revenue generated by advertising, it’s useful to track the weight of ad-related code. Paddy Ganti’s [script](https://calendar.perfplanet.com/2017/measuring-adweight/) constructs two URLs (one normal and one blocking the ads), prompts the generation of a video comparison via WebPageTest and reports a delta.

* Deviation metrics

  As [noted by Wikipedia engineers](https://phabricator.wikimedia.org/phame/live/7/post/117/performance_testing_in_a_controlled_lab_environment_-_the_metrics/), data of how much variance exists in your results could inform you how reliable your instruments are, and how much attention you should pay to deviations and outliers. Large variance is an indicator of adjustments needed in the setup. It also helps understand if certain pages are more difficult to measure reliably, e.g. due to third-party scripts causing significant variation. It might also be a good idea to track browser version to understand bumps in performance when a new browser version is rolled out.

* [Custom metrics](https://speedcurve.com/blog/user-timing-and-custom-metrics/)

  Custom metrics are defined by your business needs and customer experience. They require you to identify _important_ pixels, _critical_ scripts, _necessary_ CSS and _relevant_ assets and measure how quickly they get delivered to the user. For that one, you can monitor [Hero Rendering Times](https://speedcurve.com/blog/web-performance-monitoring-hero-times/), or use the [Performance API](https://css-tricks.com/breaking-performance-api/), marking particular timestamps for events that are important for your business. Also, you can [collect custom metrics with WebPagetest](https://sites.google.com/a/webpagetest.org/docs/using-webpagetest/custom-metrics) by executing arbitrary JavaScript at the end of a test.

Steve Souders has a [detailed explanation of each metric](https://speedcurve.com/blog/rendering-metrics/). It’s important to note that while Time-To-Interactive is measured by running automated audits in the so-called _lab environment_, First Input Delay represents the _actual_ user experience, with _actual_ users experiencing a noticeable lag. In general, it’s probably a good idea to always measure and track both of them.

Depending on the context of your application, preferred metrics might differ: e.g. for Netflix TV UI, [key input responsiveness, memory usage and TTI](https://medium.com/netflix-techblog/crafting-a-high-performance-tv-user-interface-using-react-3350e5a6ad3b) are more critical, and for Wikipedia, [first/last visual changes and CPU time spent metrics](https://phabricator.wikimedia.org/phame/live/7/post/117/performance_testing_in_a_controlled_lab_environment_-_the_metrics/) are more important.

**Note**: both FID and TTI do not account for scrolling behavior; scrolling can happen independently since it’s off-main-thread, so for many content consumption sites these metrics might be much less important (_thanks, Patrick!_). 
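
As a small illustration of the custom metrics described above, the User Timing sketch below marks a business-critical rendering moment and reports it to a RUM endpoint. The names `hero-visible` and `/rum` are made up for the example; adapt them to your product.

```js
// Mark the moment a business-critical element (the "hero") becomes visible.
// "hero-visible" and "/rum" are hypothetical names used only for illustration.
performance.mark('hero-start');

// ...call this once the hero component has actually rendered...
function onHeroRendered() {
  performance.mark('hero-visible');
  performance.measure('hero-render', 'hero-start', 'hero-visible');
}

// Ship every User Timing measure to your own RUM endpoint.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    navigator.sendBeacon('/rum', JSON.stringify({
      name: entry.name,
      duration: Math.round(entry.duration)
    }));
  }
}).observe({ entryTypes: ['measure'] });
```
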
+ +[![](https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/5d80f91c-9807-4565-b616-a4735fcd4949/network-requests-first-input-delay.png)](https://twitter.com/__treo/status/1068163152783835136) + +User-centric performance metrics provide a better insight into the actual user experience. [First Input Delay](https://developers.google.com/web/updates/2018/05/first-input-delay) (FID) is a new metric that tries to achieve just that. ([Large preview](https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/5d80f91c-9807-4565-b616-a4735fcd4949/network-requests-first-input-delay.png)) + +#### 4. Gather data on a device representative of your audience + +To gather accurate data, we need to thoroughly choose devices to test on. It’s a good option to [choose a Moto G4](https://twitter.com/katiehempenius/statuses/1067969800205422593), a mid-range Samsung device, a good middle-of-the-road device like a Nexus 5X and a slow device like Alcatel 1X, perhaps in an [open device lab](https://www.smashingmagazine.com/2016/11/worlds-best-open-device-labs/). For testing on slower thermal-throttled devices, you could also get a Nexus 2, which costs just around $100. + +If you don’t have a device at hand, emulate mobile experience on desktop by testing on a throttled network (e.g. 150ms RTT, 1.5 Mbps down, 0.7 Mbps up) with a throttled CPU (5× slowdown). Eventually switch over to regular 3G, 4G and Wi-Fi. To make the performance impact more visible, you could even introduce [2G Tuesdays](https://www.theverge.com/2015/10/28/9625062/facebook-2g-tuesdays-slow-internet-developing-world) or set up a [throttled 3G network in your office](https://twitter.com/thommaskelly/status/938127039403610112) for faster testing. + +Keep in mind that on a mobile device, you should be expecting a 4×–5× slowdown compared to desktop machines. Mobile devices have different GPUs, CPU, different memory, different battery characteristics. While download times are critical for low-end networks, parse times are critical for phones with slow CPUs. In fact, parse times on mobile [are 36% higher than on desktop](https://github.com/GoogleChromeLabs/discovery/issues/1). So always [test on an average device](https://www.webpagetest.org/easy) — a device that is most representative of your audience. + +[![Introducing the slowest day of the week](https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/dfe1a4ec-2088-4e39-8a39-9f2010380a53/tuesday-2g-opt.png)](https://www.theverge.com/2015/10/28/9625062/facebook-2g-tuesdays-slow-internet-developing-world) + +Introducing the slowest day of the week. Facebook has introduced [2G Tuesdays](https://www.theverge.com/2015/10/28/9625062/facebook-2g-tuesdays-slow-internet-developing-world) to increase visibility and sensitivity of slow connections. ([Image source](http://www.businessinsider.com/facebook-2g-tuesdays-to-slow-employee-internet-speeds-down-2015-10?IR=T)) + +Luckily, there are many great options that help you automate the collection of data and measure how your website performs over time according to these metrics. 
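
If you script your own collection, for example with Puppeteer, the throttled profile described above (150ms RTT, 1.5 Mbps down, 0.7 Mbps up, 5× CPU slowdown) can be approximated via the Chrome DevTools Protocol. A sketch, with the test URL as a placeholder:

```js
// throttled-run.js (sketch): emulate a slow network and a slow CPU with Puppeteer,
// using roughly the values suggested above. Tune the numbers to your own audience.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  const client = await page.target().createCDPSession();

  await client.send('Network.emulateNetworkConditions', {
    offline: false,
    latency: 150,                                // round-trip time in ms
    downloadThroughput: (1.5 * 1000 * 1000) / 8, // ~1.5 Mbps in bytes per second
    uploadThroughput: (0.7 * 1000 * 1000) / 8    // ~0.7 Mbps in bytes per second
  });
  await client.send('Emulation.setCPUThrottlingRate', { rate: 5 }); // 5× slowdown

  await page.goto('https://www.example.com/', { waitUntil: 'networkidle0' });
  console.log(await page.metrics()); // rough runtime metrics for this throttled run

  await browser.close();
})();
```
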
Keep in mind that a good performance picture covers a set of performance metrics, [lab data and field data](https://developers.google.com/web/fundamentals/performance/speed-tools/): + +* **Synthetic testing tools** collect _lab data_ in a reproducible environment with predefined device and network settings (e.g. _Lighthouse_, _WebPageTest_) and +* **Real User Monitoring** (_RUM_) tools evaluate user interactions continuously and collect _field data_ (e.g. _SpeedCurve_, _New Relic_ — both tools provide synthetic testing, too). + +The former is particularly useful during _development_ as it will help you identify, isolate and fix performance issues while working on the product. The latter is useful for long-term _maintenance_ as it will help you understand your performance bottlenecks as they are happening live — when users actually access the site. + +By tapping into built-in RUM APIs such as [Navigation Timing](https://developer.mozilla.org/en-US/docs/Web/API/Navigation_timing_API), [Resource Timing](https://developer.mozilla.org/en-US/docs/Web/API/Resource_Timing_API), [Paint Timing](https://css-tricks.com/paint-timing-api/), [Long Tasks](https://w3c.github.io/longtasks/), etc., synthetic testing tools and RUM together provide a complete picture of performance in your application. You could use [PWMetrics](https://github.com/paulirish/pwmetrics), [Calibre](https://calibreapp.com), [SpeedCurve](https://speedcurve.com/), [mPulse](https://www.soasta.com/performance-monitoring/) and [Boomerang](https://github.com/yahoo/boomerang), [Sitespeed.io](https://www.sitespeed.io/), which all are great options for performance monitoring. Furthermore, with [Server Timing header](https://www.smashingmagazine.com/2018/10/performance-server-timing/), you could even monitor back-end and front-end performance all in one place. + +**Note**: It’s always a safer bet to choose [network-level throttlers](https://calendar.perfplanet.com/2016/testing-with-realistic-networking-conditions/), external to the browser, as, for example, DevTools has issues interacting with HTTP/2 push, due to the way it’s implemented (thanks, Yoav, Patrick!). For Mac OS, we can use [Network Link Conditioner](https://nshipster.com/network-link-conditioner/), for Windows [Windows Traffic Shaper](https://github.com/WPO-Foundation/win-shaper/releases), for Linux [netem](https://wiki.linuxfoundation.org/networking/netem), and for FreeBSD [dummynet](http://info.iet.unipi.it/~luigi/dummynet/). + +[![Lighthouse](https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/a85a91a7-fb37-4596-8658-a40c1900a0d6/lighthouse-screenshot.png)](https://developers.google.com/web/tools/lighthouse/) + +[Lighthouse](https://developers.google.com/web/tools/lighthouse/), a performance auditing tool integrated into DevTools. ([Large preview](https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/a85a91a7-fb37-4596-8658-a40c1900a0d6/lighthouse-screenshot.png)) + +#### 5. Set up "clean" and "customer" profiles for testing + +While running tests in passive monitoring tools, it’s a common strategy to turn off anti-virus and background CPU tasks, remove background bandwidth transfers and test with a clean user profile without browser extensions to avoid skewed results ([Firefox](https://developer.mozilla.org/en-US/docs/Mozilla/Firefox/Multiple_profiles), [Chrome](https://support.google.com/chrome/answer/2364824?hl=en&co=GENIE.Platform=Desktop)). 
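
Both profiles from the heading above can be scripted, for instance with the `chrome-launcher` package. The sketch below is only an outline: the profile directory and extension paths are placeholders, and the extensions you preload should come from what your analytics tell you about your real users.

```js
// profiles.js (sketch): launch a "clean" and a "customer" Chrome for testing.
// All paths below are placeholders; point them at your own setup.
const chromeLauncher = require('chrome-launcher');

// Fresh temporary profile, no extensions: the optimistic baseline.
const launchClean = () =>
  chromeLauncher.launch({
    chromeFlags: ['--headless', '--disable-extensions']
  });

// A persistent profile preloaded with the extensions your users commonly run.
const launchCustomer = () =>
  chromeLauncher.launch({
    userDataDir: '/tmp/customer-profile',
    chromeFlags: ['--load-extension=/path/to/adblocker,/path/to/password-manager']
  });

(async () => {
  const clean = await launchClean();
  console.log('Clean profile listening on port', clean.port);
  await clean.kill();

  const customer = await launchCustomer();
  console.log('Customer profile listening on port', customer.port);
  await customer.kill();
})();
```

Running the same audits against both ports makes the gap between the optimistic and the realistic numbers visible.
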
+ +However, it’s also a good idea to study which extensions your customers are using frequently, and test with a dedicated _"customer" profile_ as well. In fact, some extensions might have a [profound performance impact](https://twitter.com/denar90_/statuses/1065712688037277696) on your application, and if your users use them a lot, you might want to account for it up front. "Clean" profile results alone are overly optimistic and can be crushed in real-life scenarios. + +#### 6. Share the checklist with your colleagues. + +Make sure that the checklist is familiar to every member of your team to avoid misunderstandings down the line. Every decision has performance implications, and the project would hugely benefit from front-end developers properly communicating performance values to the whole team, so that everybody would feel responsible for it, not just front-end developers. Map design decisions against performance budget and the priorities defined in the checklist. + +> **[译] [2019 前端性能优化年度总结 — 第一部分](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-1.md)** +> [译] [2019 前端性能优化年度总结 — 第二部分](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-2.md) +> [译] [2019 前端性能优化年度总结 — 第三部分](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-3.md) +> [译] [2019 前端性能优化年度总结 — 第四部分](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-4.md) +> [译] [2019 前端性能优化年度总结 — 第五部分](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-5.md) +> [译] [2019 前端性能优化年度总结 — 第六部分](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-6.md) + +> 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 + + +--- + +> [掘金翻译计划](https://github.com/xitu/gold-miner) 是一个翻译优质互联网技术文章的社区,文章来源为 [掘金](https://juejin.im) 上的英文分享文章。内容覆盖 [Android](https://github.com/xitu/gold-miner#android)、[iOS](https://github.com/xitu/gold-miner#ios)、[前端](https://github.com/xitu/gold-miner#前端)、[后端](https://github.com/xitu/gold-miner#后端)、[区块链](https://github.com/xitu/gold-miner#区块链)、[产品](https://github.com/xitu/gold-miner#产品)、[设计](https://github.com/xitu/gold-miner#设计)、[人工智能](https://github.com/xitu/gold-miner#人工智能)等领域,想要查看更多优质译文请持续关注 [掘金翻译计划](https://github.com/xitu/gold-miner)、[官方微博](http://weibo.com/juejinfanyi)、[知乎专栏](https://zhuanlan.zhihu.com/juejinfanyi)。 From 9077a7a8f12fee1f3160f74088ef907ed46926ad Mon Sep 17 00:00:00 2001 From: Sam Date: Sat, 12 Jan 2019 15:02:20 +0800 Subject: [PATCH 44/54] =?UTF-8?q?=E6=88=91=E4=BB=AC=E9=87=87=E7=94=A8=20Gr?= =?UTF-8?q?aphQL=20=E6=8A=80=E6=9C=AF=E7=9A=84=E7=BB=8F=E9=AA=8C=EF=BC=9A?= =?UTF-8?q?=E8=90=A5=E9=94=80=E6=8A=80=E6=9C=AF=E6=B4=BB=E5=8A=A8=20(#4952?= =?UTF-8?q?)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Update our-learnings-from-adopting-graphql.md * 我们采用 GraphQL 技术的经验 * Update our-learnings-from-adopting-graphql.md --- TODO1/our-learnings-from-adopting-graphql.md | 81 ++++++++++---------- 1 file changed, 41 insertions(+), 40 deletions(-) diff --git a/TODO1/our-learnings-from-adopting-graphql.md b/TODO1/our-learnings-from-adopting-graphql.md index 181e578cd84..74ea6adeaab 100644 --- a/TODO1/our-learnings-from-adopting-graphql.md +++ b/TODO1/our-learnings-from-adopting-graphql.md @@ -2,97 +2,98 
@@ > * 原文作者:[Netflix Technology Blog](https://medium.com/@NetflixTechBlog?source=post_header_lockup) > * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) > * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/our-learnings-from-adopting-graphql.md](https://github.com/xitu/gold-miner/blob/master/TODO1/our-learnings-from-adopting-graphql.md) -> * 译者: -> * 校对者: +> * 译者:[Sam](https://github.com/xutaogit) +> * 校对者:[lianghx-319](https://github.com/lianghx-319) -# Our learnings from adopting GraphQL: A Marketing Tech Campaign +# 我们采用 GraphQL 技术的经验:营销技术活动 -In an [earlier blog post](https://github.com/xitu/gold-miner/blob/master/TODO1/https-medium-com-netflixtechblog-engineering-to-improve-marketing-effectiveness-part-2.md), we provided a high-level overview of some of the applications in the Marketing Technology team that we build to enable _scale and intelligence_ in driving our global advertising, which reaches users on sites like The New York Times, Youtube, and thousands of others. In this post, we’ll share our journey in updating our front-end architecture and our learnings in introducing GraphQL into the Marketing Tech system. +在[之前的博客文章](https://github.com/xitu/gold-miner/blob/master/TODO1/https-medium-com-netflixtechblog-engineering-to-improve-marketing-effectiveness-part-2.md)中,我们对营销技术团队的一些应用程序提供了高级概述,我们这么做是为了推动全球广告业务实现 **体量化和智能化**,使得广告可以通过像纽约时报,Youtube 等网站覆盖成千上万的用户。在这篇博文中,我们将分享关于我们更新前端架构的过程和在营销技术团队中引入 GraphQL 的经验。 -Our primary application for managing the creation and assembly of ads that reach the external publishing platforms is internally dubbed **_Monet_**. It’s used to supercharge ad creation and automate management of marketing campaigns on external ad platforms. Monet helps drive incremental conversions, engagement with our product and in general, present a rich story about our content and the Netflix brand to users around the world. To do this, first, it helps scale up and automate ad production and manage millions of creative permutations. Secondly, we utilize various signals and aggregate data such as understanding of content popularity on Netflix to enable highly relevant ads. Our overall aim is to make our ads on all the external publishing channels resonate well with users and we are constantly experimenting to improve our effectiveness in doing that. +我们用于管理创建和装配广告到外部部署平台的核心应用程序在我们内部被称为 Monet。它用于增强广告的创建和自动化管理在外部广告平台的营销活动。Monet 帮助推动增量转化,通常是和我们的产品进行交互,为全球各地的用户展示关于我们内容和 Netflix 品牌的精彩故事。为此,首先,它帮助扩展和自动化广告产品,并且管理数百万广告素材队列。其次,我们借用多种信号和汇总数据(例如了解在 Netflix 上的内容流行度)以实现高度相关的广告。我们总体目标是确保我们所有在外部发布频道的广告能够很好的引起用户的共鸣,并且我们不断尝试提高我们这么做的有效性。 ![](https://cdn-images-1.medium.com/max/800/0*CafLBZiEtz9uwO62) -Monet and the high-level _Marketing Technology_ flow +Monet 和高级**营销技术**流程 -When we started out, the React UI layer for Monet accessed traditional REST APIs powered by an Apache Tomcat server. Over time, as our application evolved, our use cases became more complex. Simple pages would need to draw in data from a wide variety of sources. To more effectively load this data onto the client application, we first attempted to denormalize data on the backend. Managing this denormalization became difficult to maintain since not all pages needed all the data. We quickly ran into network bandwidth bottlenecks. The browser would need to fetch much more denormalized data than it would ever use. 
+在我们开始的时候,Monet 的 React UI 层访问的是由 Apache Tomcat 服务提供的传统 REST API。随着时间的推移,我们应用程序的发展,我们的用例变得更加复杂。简单的页面需要从各种来源中获取数据。为了更加高效的在客户应用程序中加载这些数据,我们首先尝试在后端对数据进行非规范化处理。由于不是所有页面都需要所有这些数据,管理这些非规范化(的数据)变得难以维持。我们很快就遇到了网络带宽瓶颈。浏览器需要获取比以往更多的非规范化数据。 -To winnow down the number of fields sent to the client, one approach is to build custom endpoints for every page; it was a fairly obvious non-starter. Instead of building these custom endpoints, we opted for GraphQL as the middle layer of the app. We also considered [Falcor](https://netflix.github.io/falcor/) as a possible solution since it has delivered great results at Netflix in many core use cases and has a ton of usage, but a robust GraphQL ecosystem and powerful third party tooling made GraphQL the better option for our use case. Also, as our data structures have become increasingly graph-oriented, it ended up being a very natural fit. Not only did adding GraphQL solve the network bandwidth bottleneck, but it also provided numerous other benefits that helped us add features more quickly. +为了减少发送给客户端的字段数量,一种方法是为每个页面创建自定义端点;这是一个明显不切实际的想法。我们选择使用 GraphQL 作为我们应用的中间层,而不是创建这些自定义端点。我们也考虑过把 [Falcor](https://netflix.github.io/falcor/) 作为一个可能的解决方案,毕竟它在 Netflix 的很多用例中展现出很好的成果并且大量的使用,但是 GraphQL 健壮的生态体系和强大的第三方工具库使得 GraphQL 成为我们用例更好的选择。此外,随着我们数据结构越来越面向图形化,使用 GraphQL 最终适配会非常自然。添加 GraphQL 不仅解决了网络带宽瓶颈问题,而且还提供了许多其他优势,帮助我们更快地添加功能。 ![](https://cdn-images-1.medium.com/max/800/1*pmh-VimZJJindIJUyZtyzg.png) -Architecture before and after GraphQL +使用 GraphQL 架构的前后对比。 -### Benefits +### 优势 -We have been running GraphQL on NodeJS for about 6 months, and it has proven to significantly increase our development velocity and overall page load performance. Here are some of the benefits that worked out well for us since we started using it. +我们已经在 NodeJS 上运行 GraphQL 差不多六个月了,并且它已经被证实可以显著提高我们的开发速度和总体提升页面加载性能。这里是自从我们使用 GraphQL 实践给我们带来的一些好处。 -**Redistributing load and payload optimization** +**重新分配负载和有效负载优化** -Often times, some machines are better suited for certain tasks than others. When we added the GraphQL middle layer, the GraphQL server still needed to call the same services and REST APIs as the client would have called directly. The difference now is that the majority of the data is flowing between servers within the same data center. These server to server calls are of very low latency and high bandwidth, which gives us about an 8x performance boost compared to direct network calls from the browser. The last mile of the data transfer from the GraphQL server to the client browser, although still a slow point, is now reduced to a single network call. Since GraphQL allows the client to select only the data it needs we end up fetching a significantly smaller payload. In our application, pages that were fetching 10MB of data before now receive about 200KB. Page loads became much faster, especially over data-constrained mobile networks, and our app uses much less memory. These changes did come at the cost of higher server utilization to perform data fetching and aggregation, but the few extra milliseconds of server time per request were greatly outweighed by the smaller client payloads. 
+通常,某些机器比其他机器更适合做一些任务。当我们添加了 GraphQL 中间层时,GraphQL 服务器仍然需要调用和客户端直接调用的相同的服务和 REST API。现在的区别在于大多数据在同一数据中心的服务器之间流动。这些服务器和服务器之间的调用是非常低延迟和高带宽的,比起直接从浏览器发起网络请求有 8 倍的性能提升。从 GraphQL 服务器传送数据到客户浏览器的最后一段虽然仍是一个慢点,但至少减少成单个网络请求。由于 GraphQL 允许客户端只选择它需要的数据,所以我们最终可以获取明显更小的有效负载。在我们的应用程序中,页面之前要获取 10M 的数据,现在接收大约 200KB 即可。页面加载变得更快,特别是数据受限在移动网络上,并且我们的应用使用的内存更少。这些更改确实以提高服务器利用率为代价来执行数据获取和聚合,但是每个请求所花费的额外一点服务器毫秒时间远比不上更小的客户端有效负载。 -**Reusable abstractions** +**可复用的抽象** -Software developers generally want to work with reusable abstractions instead of single-purpose methods. With GraphQL, we define each piece of data once and define how it relates to other data in our system. When the consumer application fetches data from multiple sources, it no longer needs to worry about the complex business logic associated with data join operations. +软件开发者通常希望使用可复用的抽象而不是单一目的方法。使用 GraphQL,我们定义每段数据一次,并定义它与我们系统中其他数据之间的关系。当消费者应用程序从多个源获取数据时,它不再需要担心与数据连接操作相关联的复杂业务逻辑。 -Consider the following example, we define entities in GraphQL exactly once: _catalogs, creatives, and comments_. We can now build the views for several pages from these definitions. One page on the client app (catalogView) declares that it wants all comments for all creatives in a catalog while another client page (creativeView) wants to know the associated catalog that a creative belongs to, along with all of its comments. +考虑接下来的例子,我们在 GraphQL 中只定义实例一次:catalogs(类别)、creatives(素材)和 comments(评论)。现在我们可以由这些定义创建多个页面视图。客户端上的一个页面(类别视图)定义了它想要所有的评论和素材在一个分类里,而另一个客户端页面(素材视图)想要知道素材相关联的类别,以及所有和它相关的评论。 ![](https://cdn-images-1.medium.com/max/800/1*Tr-cnrbTOPKkWkshYpQeIA.png) -The flexibility of the GraphQL data model to represent different views from the same underlying data +GraphQL 数据模型的灵活性,用于表示来自相同底层数据的不同视图。 -The same graph model can power both of these views without having to make any server side code changes. +同一个 GraphQL 模型就可以满足上述两个视图的需求,而不用做任何服务器端的代码修改。 -**Chaining type systems** +**链式系统** -Many people focus on type systems within a single service, but rarely across services. Once we defined the entities in our GraphQL server, we use auto codegen tools to generate TypeScript types for the client application. The props of our React components receive types to match the query that the component is making. Since these types and queries are also validated against the server schema, any breaking change by the server would be caught by clients consuming the data. Chaining multiple services together with GraphQL and hooking these checks into the build process allows us to catch many more issues before deploying bad code. Ideally, we would like to have type safety from the database layer all the way to the client browser. +很多人专注于单一服务器里的类型系统,但很难有跨服务的。一旦我们在 GraphQL 服务器中定义了实例,我们使用自动代码生成器工具为客户端程序生成 TypeScript 类型。我们的 React 组件属性接收类别以匹配组件正在进行的查询。由于这些类别和查询也是针对服务器模式进行验证的,因此服务器的任何重大更改都将被使用该数据的客户端捕获。将多个服务器和 GraphQL 链接在一起并将这些检查挂载到构建过程中,可以让我们在发布代码前捕获更多错误。理想情况下,我们希望从数据层一直到客户端浏览器层都是具有类型安全性的。 ![](https://cdn-images-1.medium.com/max/800/1*YLL0aFFgcGDXFEa-V9_LPA.png) -Type safety from database to backend to client code +从数据库到后端到客户端代码的类型安全。 -**DI/DX — Simplifying development** +**DI/DX — 简化开发** -A common concern when creating client applications is the UI/UX, but the developer interface and developer experience is just as important for building maintainable apps. Before GraphQL, writing a new React container component required maintaining complex logic to make network requests for the data we need. 
The developer would need to consider how one piece of data relates to another, how the data should be cached, whether to make the calls in parallel or in sequence and where in Redux to store the data. With a GraphQL query wrapper, each React component only needs to describe the data it needs, and the wrapper takes care of all of these concerns. There is much less boilerplate code and a cleaner separation of concerns between the data and UI. This model of declarative data fetching makes the React components much easier to understand, and serves to partially document what the component is doing. +当创建客户端应用程序时普遍需要考虑 UI/UX,但是开发者界面和开发者体验一般只是侧重于构建可维护应用程序。在使用 GraphQL 之前,编写一个新的 React 包装组件需要维护复杂的逻辑,以便为我们所需的数据发起网络请求。开发者需要关心每部分数据之间的依赖,数据该怎么缓存,以及是否做并发或队列请求,还有在 Redux 的什么位置存储数据。使用 GraphQL 的查询封装(wrapper),每个 React 组件只需描述它所需要的数据,然后由封装(wrapper)去关心所有这些问题。这样就会是更少的引用代码和更清晰的数据与 UI 之间的关注点分离。这种定义数据获取的模块可以让 React 模块更容易理解,并且能够为部分描述文档提供服务知道组件具体在做什么。 -**Other benefits** +**其他优势** -There are a few other smaller benefits that we noticed as well. First, if any resolver of the GraphQL query fails, the resolvers that succeeded still return data to the client to render as much of the page as possible. Second, the backend data model is greatly simplified since we are less concerned with modeling for the client and in most cases can simply provide a CRUD interface to raw entities. Finally, testing our components has also become easier since the GraphQL query is automatically translatable into stubs for our tests and we can test resolvers in isolation from the React components. +我们也留意到其他一些小的优势。首先,如果任何 GraphQL 的查询解析器失败了,已经成功的解析器仍然会返回数据到客户端渲染出尽可能多的页面。其次,由于我们更少的关心客户端模型,后端数据模型就简化了很多,在大多数情况下,只需提供一个 CRUD 接口的原始实体。最后,基于 GraphQL 的查询会自动为我们的测试进行存根转变,测试我们的组件也会变得很简单,并且我们可以把解析器从 React 组件中独立出来进行测试。 -### Growing pains +### 使用痛点 -Our migration to GraphQL was a straightforward experience. Most of the infrastructure we built to make network requests and transform data was easily transferable from our React application to our NodeJS server without any code changes. We even ended up deleting more code than we added. But as with any migration to a new technology, there were a few obstacles we needed to overcome. +我们迁移到 GraphQL 是一个直截了当的过程。我们构建的大多数用于做网络请求和传输数据的基础架构在不做任何代码修改的情况下可以很容易在 React 应用和我们 NodeJS 服务之间做到可传递。我们甚至最终删除的代码比我们加的多。但是在迁移到任何新的技术这条路上,总会有一些需要我们越过的障碍。 -**Selfish resolvers** +**自私的解析器** -Since resolvers in GraphQL are meant to run as isolated units that are not concerned with what other resolvers do, we found that they were making many duplicate network requests for the same or similar data. We got around this duplication by wrapping the data providers in a simple caching layer that stored network responses in memory until all resolvers finished. The caching layer also allowed us to aggregate multiple requests to a single service into a bulk request for all the data at once. Resolvers can now request any data they need without worrying about how to optimize the process of fetching it. +由于 GraphQL 里的解析器定义为独立运行的单元,而不用关心其他解析器在做什么,我们发现他们会对相同或类似的数据发起很多重复的网络请求。我们通过将数据提供者包装在一个简单的缓存层中来避免这种重复,该缓存层将网络响应存储在内存中,直到所有解析器都完成。缓存层还允许我们将多个对单个服务的请求聚合为一次对所有数据的批量请求。解析器现在可以请求他们需要的任何数据,而不必担心如何优化获取数据的过程。 ![](https://cdn-images-1.medium.com/max/800/1*FZCtNPL4bXS6jpgVZx0RYg.png) -Adding a cache to simplify data access from resolvers +添加缓存以简化来自解析器的数据访问 -**What a tangled web we weave** +**我们编写的繁杂网络** -Abstractions are a great way to make developers more efficient… until something goes wrong. 
There will undoubtedly be bugs in our code and we didn’t want to obfuscate the root cause with a middle layer. GraphQL would orchestrate network calls to other services automatically, hiding the complexities from the user. Server logs provide a way to debug, but they are still one step removed from the natural approach of debugging via the browser’s network tab. To make debugging easier, we added logs directly to the GraphQL response payload that expose all of the network requests that the server is making. When the debug flag is enabled, you get the same data in the client browser as you would if the browser made the network call directly. +抽象是提高开发人员效率的好方法 —— 直到出现问题为止。毫无疑问,我们的代码中会有bug,我们不想用中间层混淆(bug 产生的)根本原因。GraphQL 将自动编排对其他服务的网络调用,对用户隐藏复杂性。虽然服务器日志提供了一种调试方法,但是它们仍然比通过浏览器的 network 选项卡进行调试的自然方法少了一步。为了让调试更简单,我们直接将日志添加到 GraphQL 响应有效负载中,它公开了服务器发出的所有网络请求。当启用调试标志时,你将在客户端浏览器中获得与浏览器直接进行网络调用时相同的数据。 -**Breaking down typing** +**拆分类型** -Passing around objects is what OOP is all about, but unfortunately, GraphQL throws a wrench into this paradigm. When we fetch partial objects, this data cannot be used in methods and components that require the full object. Of course, you can cast the object manually and hope for the best, but you lose many of the benefits of type systems. Luckily, TypeScript uses duck typing, so adjusting the methods to only require the object properties that they really need was the quickest fix. Defining these more precise types takes a bit more work, but gives greater type safety overall. +传递对象是面向对象编程(OOP)的全部,但不幸的是,GraphQL 将对这个范式造成冲击。当我们获取部分对象时,这些数据不能用于需要完整对象的方法和组件中。当然,你可以手动强制转换对象并抱着最好的希望,但是你将失去类型系统的许多好处。幸运的是,TypeScript 使用了 duck typing(译者注:鸭子类型,关注点在于对象的行为,能作什么;而不是关注对象所属的类型。[duck typing](https://en.wikipedia.org/wiki/Duck_typing)),所以只需要对它们真正需要的对象属性方法进行调整是最快的修复方式。虽然定义更精确的类型需要做更多的工作,但是总体上确保了更大的类型安全性。 -### What comes next +### 接下来是什么 -We are still in the early stages in our exploration of GraphQL, but it’s been a positive experience so far and we’re happy to have embraced it. One of the key goals of this endeavor was to help us get increased development velocity as our systems become increasingly sophisticated. Instead of being bogged down with complex data structures, we hope for the investment in the graph data model to make our team more productive over time as more edges and nodes are added. Even over the last few months, we have found that our existing graph model has become sufficiently robust that we don’t need any graph changes to be able to build some features. It has certainly made us more productive. +我们仍然处于探索 GraphQL 的早期阶段,但到目前为止都是一种积极的体验,我们很高兴能够接受它。这项工作的一个关键目标是,随着我们的系统变得越来越复杂,(它)帮助我们提高开发速度。我们不希望被复杂的数据结构所困,而是希望在图形数据模型上进行投资,随着时间的推移,随着更多的边缘和节点的添加,我们的团队会更加高效。甚至在过去的几个月里,我们已经发现我们现有的图形模型已经足够健壮,我们不需要任何图形更改就可以构建一些特性。它确实让我们变得更有效率。 ![](https://cdn-images-1.medium.com/max/800/1*T3KO2GOY6EhoWUdQw8zuLQ.png) -Visualization of our GraphQL Schema +我们的可视化 GraphQL 模型 -As GraphQL continues to thrive and mature, we look forward to learning from all the amazing things that the community can build and solve with it. On an implementation level, we are looking forward to using some cool concepts like schema stitching, which can make integrations with other services much more straightforward and save a great deal of developer time. Most crucially, it’s very exciting to see a lot more teams [across our company](https://medium.com/netflix-techblog/the-new-netflix-stethoscope-native-app-f4e1d38aafcd) see GraphQL’s potential and start to adopt it. 
+随着GraphQL的不断发展和成熟,我们期待可以从社区中使用它构建和解决的所有令人惊叹的东西中学习。在实现级别上,我们期待使用一些很酷的概念,比如模式缝合,它可以使与其他服务的集成更加直接,并节省大量开发人员的时间。最重要的是,我们很开心地看到在公司[很多团队](https://medium.com/netflix-techblog/the-new-netflix-stethoscope-native-app-f4e1d38aafcd)发现 GraphQL 的潜力并开始采用它。 -If you’ve made this thus far and you’re also interested in joining the Netflix Marketing Technology team to help conquer our unique challenges, check out the [open positions](https://sites.google.com/netflix.com/adtechjobs/ad-tech-engineering) listed on our page. **_We’re hiring!_** +如果你已经做到了这一点,并且你也有兴趣加入 Netflix 营销技术团队来帮助克服我们独特的挑战,看看我们页面上列出的[空缺职位](https://sites.google.com/netflix.com/adtechjobs/ad-tech-engineering)。 -> 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 +**_我们正在招聘!_** +> 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 --- From 4cac4dc96a3476fc94dd47270f9a27012164f6d6 Mon Sep 17 00:00:00 2001 From: LeviDing Date: Sat, 12 Jan 2019 15:03:53 +0800 Subject: [PATCH 45/54] Create front-end-performance-checklist-2019-pdf-pages-2.md --- ...-performance-checklist-2019-pdf-pages-2.md | 183 ++++++++++++++++++ 1 file changed, 183 insertions(+) create mode 100644 TODO1/front-end-performance-checklist-2019-pdf-pages-2.md diff --git a/TODO1/front-end-performance-checklist-2019-pdf-pages-2.md b/TODO1/front-end-performance-checklist-2019-pdf-pages-2.md new file mode 100644 index 00000000000..12539285482 --- /dev/null +++ b/TODO1/front-end-performance-checklist-2019-pdf-pages-2.md @@ -0,0 +1,183 @@ +> * 原文地址:[Front-End Performance Checklist 2019 — 2](https://www.smashingmagazine.com/2019/01/front-end-performance-checklist-2019-pdf-pages/) +> * 原文作者:[Vitaly Friedman](https://www.smashingmagazine.com/author/vitaly-friedman) +> * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) +> * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-2.md](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-2.md) +> * 译者: +> * 校对者: + +# Front-End Performance Checklist 2019 — 2 + +Let’s make 2019... fast! An annual front-end performance checklist, with everything you need to know to create fast experiences today. Updated since 2016. + +> [译] [2019 前端性能优化年度总结 — 第一部分](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-1.md) +> **[译] [2019 前端性能优化年度总结 — 第二部分](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-2.md)** +> [译] [2019 前端性能优化年度总结 — 第三部分](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-3.md) +> [译] [2019 前端性能优化年度总结 — 第四部分](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-4.md) +> [译] [2019 前端性能优化年度总结 — 第五部分](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-5.md) +> [译] [2019 前端性能优化年度总结 — 第六部分](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-6.md) + +#### Table Of Contents + +- [Setting Realistic Goals](#setting-realistic-goals) + - [7. 100-millisecond response time, 60 fps.](#7-100-millisecond-response-time-60-fps) + - [8. 
Speed Index < 1250, TTI < 5s on 3G, Critical file size budget < 170KB (gzipped).](#8-speed-index--1250-tti--5s-on-3g-critical-file-size-budget--170kb-gzipped) +- [Defining The Environment](#defining-the-environment) + - [9. Choose and set up your build tools](#9-choose-and-set-up-your-build-tools) + - [10. Use progressive enhancement as a default.](#10-use-progressive-enhancement-as-a-default) + - [11. Choose a strong performance baseline](#11-choose-a-strong-performance-baseline) + - [12. Evaluate each framework and each dependency.](#12-evaluate-each-framework-and-each-dependency) + - [13. Consider using PRPL pattern and app shell architecture](#13-consider-using-prpl-pattern-and-app-shell-architecture) + - [14. Have you optimized the performance of your APIs?](#14-have-you-optimized-the-performance-of-your-apis) + - [15. Will you be using AMP or Instant Articles?](#15-will-you-be-using-amp-or-instant-articles) + - [16. Choose your CDN wisely](#16-choose-your-cdn-wisely) + +### Setting Realistic Goals + +#### 7. 100-millisecond response time, 60 fps. + +For an interaction to feel smooth, the interface has 100ms to respond to user’s input. Any longer than that, and the user perceives the app as laggy. The [RAIL, a user-centered performance model](https://www.smashingmagazine.com/2015/10/rail-user-centric-model-performance/) gives you healthy targets: To allow for <100 milliseconds response, the page must yield control back to main thread at latest after every <50 milliseconds. [Estimated Input Latency](https://developers.google.com/web/tools/lighthouse/audits/estimated-input-latency) tells us if we are hitting that threshold, and ideally, it should be below 50ms. For high-pressure points like animation, it’s best to do nothing else where you can and the absolute minimum where you can't. + +[![RAIL](https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/c91c910d-e934-4610-9dc5-369ec9071b57/rail-perf-model-opt.png)](https://developers.google.com/web/fundamentals/performance/rail) + +[RAIL](https://developers.google.com/web/fundamentals/performance/rail), a user-centric performance model. + +Also, each frame of animation should be completed in less than 16 milliseconds, thereby achieving 60 frames per second (1 second ÷ 60 = 16.6 milliseconds) — preferably under 10 milliseconds. Because the browser needs time to paint the new frame to the screen, your code should finish executing before hitting the 16.6 milliseconds mark. We’re starting having conversations about 120fps (e.g. iPad’s new screens run at 120Hz) and Surma has covered some [rendering performance solutions for 120fps](https://dassur.ma/things/120fps/), but that’s probably not a target we’re looking at _just yet_. + +Be pessimistic in performance expectations, but [be optimistic in interface design](https://www.smashingmagazine.com/2016/11/true-lies-of-optimistic-user-interfaces/) and [use idle time wisely](https://philipwalton.com/articles/idle-until-urgent/). Obviously, these targets apply to runtime performance, rather than loading performance. + +#### 8. Speed Index < 1250, TTI < 5s on 3G, Critical file size budget < 170KB (gzipped). + +Although it might be very difficult to achieve, a good ultimate goal would be First Meaningful Paint under 1 second and a [Speed Index](https://sites.google.com/a/webpagetest.org/docs/using-webpagetest/metrics/speed-index) value under 1250. Considering the baseline being a $200 Android phone (e.g. 
Moto G4) on a slow 3G network, emulated at 400ms RTT and 400kbps transfer speed, aim for [Time to Interactive under 5s](https://www.youtube.com/watch?v=_srJ7eHS3IM&feature=youtu.be&t=6m21s), and for repeat visits, aim for under 2s (achievable only with a service worker).

Notice that, when speaking about interactivity metrics, it’s a good idea to [distinguish between First CPU Idle and Time To Interactive](https://calendar.perfplanet.com/2017/time-to-interactive-measuring-more-of-the-user-experience/) to avoid misunderstandings down the line. The former is the earliest point after the main content has rendered (where there is at least a 5-second window where the page is responsive). The latter is the point where the page can be expected to always be responsive to input (_thanks, Philip Walton!_).

We have two major constraints that effectively shape a _reasonable_ target for speedy delivery of the content on the web. On the one hand, we have **network delivery constraints** due to [TCP Slow Start](https://hpbn.co/building-blocks-of-tcp/#slow-start). The first 14KB of the HTML is the most critical payload chunk — and the only part of the budget that can be delivered in the first roundtrip (which is all you get in 1 sec at 400ms RTT due to mobile wake-up times).

On the other hand, we have **hardware constraints** on memory and CPU due to JavaScript parsing times (we’ll talk about them in detail later). To achieve the goals stated in the first paragraph, we have to consider the critical file size budget for JavaScript. Opinions vary on what that budget should be (and it heavily depends on the nature of your project), but a budget of 170KB JavaScript gzipped already would take up to 1s to parse and compile on an average phone. Assuming that 170KB expands to 3× that size when decompressed (0.7MB), that already could be the death knell of a "decent" user experience on a Moto G4 or Nexus 2.

Of course, your data might show that your customers are not on these devices, but perhaps they simply don’t show up in your analytics because your service is inaccessible to them due to slow performance. In fact, Google’s Alex Russell recommends that you [aim for 130–170KB gzipped](https://infrequently.org/2017/10/can-you-afford-it-real-world-web-performance-budgets/) as a reasonable upper boundary, and exceeding this budget should be an informed and deliberate decision. In the real world, most products aren’t even close: an average bundle size today is around [400KB](https://beta.httparchive.org/reports/state-of-javascript#bytesJs), which is up 35% compared to late 2015. On a middle-class mobile device, that accounts for 30-35 seconds for _Time-To-Interactive_.

We could also go beyond the bundle size budget though. For example, we could set performance budgets based on the activities of the browser’s main thread, i.e. paint time before start render, or [track down front-end CPU hogs](https://calendar.perfplanet.com/2017/tracking-cpu-with-long-tasks-api/). Tools such as [Calibre](https://calibreapp.com/), [SpeedCurve](https://speedcurve.com/) and [Bundlesize](https://github.com/siddharthkp/bundlesize) can help you keep your budgets in check, and can be integrated into your build process.

Also, a performance budget probably shouldn’t be a fixed value. Depending on the network connection, [performance budgets should adapt](https://twitter.com/katiehempenius/status/1075478356311924737), but payloads on slower connections are much more "expensive", regardless of how they’re used. 
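
If you want a dependency-free variant of what Bundlesize does, a few lines of Node in CI can guard the gzipped budget discussed above. This is only a sketch; the bundle path and the 170KB threshold are placeholders for your own values.

```js
// check-budget.js (sketch): fail the CI build when the gzipped bundle crosses
// the budget. "dist/main.js" and the 170KB threshold are placeholders.
const fs = require('fs');
const zlib = require('zlib');

const BUDGET_BYTES = 170 * 1024;
const bundlePath = 'dist/main.js';

const gzippedSize = zlib.gzipSync(fs.readFileSync(bundlePath)).length;
console.log(`Gzipped ${bundlePath}: ${(gzippedSize / 1024).toFixed(1)} KB (budget: ${BUDGET_BYTES / 1024} KB)`);

if (gzippedSize > BUDGET_BYTES) {
  console.error('Performance budget exceeded, failing the build.');
  process.exit(1);
}
```
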

[![From 'Fast By Default: Modern Loading Best Practices' by Addy Osmani](https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/3bb4ab9e-978a-4db0-83c3-57a93d70516d/file-size-budget-fast-default-addy-osmani-opt.png)](https://speakerdeck.com/addyosmani/fast-by-default-modern-loading-best-practices)

[From Fast By Default: Modern loading best practices](https://speakerdeck.com/addyosmani/fast-by-default-modern-loading-best-practices) by Addy Osmani (Slide 19)

[![](https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/949e5601-04e7-48ee-91a5-10bd7af19a0f/perf-budgets-network-connection.jpg)](https://twitter.com/katiehempenius/status/1075478356311924737)

Performance budgets should adapt depending on the network conditions for an average mobile device. (Image source: [Katie Hempenius](https://twitter.com/katiehempenius/status/1075478356311924737)) ([Large preview](https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/949e5601-04e7-48ee-91a5-10bd7af19a0f/perf-budgets-network-connection.jpg))

### Defining The Environment

#### 9. Choose and set up your build tools

[Don’t pay too much attention to what’s supposedly cool](https://24ways.org/2017/all-that-glisters/) [these days](https://2018.stateofjs.com/). Stick to your environment for building, be it Grunt, Gulp, Webpack, Parcel, or a combination of tools. As long as you are getting the results you need and you have no issues maintaining your build process, you’re doing just fine.

Among the build tools, Webpack seems to be the most established one, with literally hundreds of plugins available to optimize the size of your builds. Getting started with Webpack can be tough though. So if you want to get started, there are some great resources out there:

* [Webpack documentation](https://webpack.js.org/concepts/) — obviously — is a good starting point, and so are [Webpack — The Confusing Bits](https://medium.com/@rajaraodv/webpack-the-confusing-parts-58712f8fcad9) by Raja Rao and [An Annotated Webpack Config](https://nystudio107.com/blog/an-annotated-webpack-4-config-for-frontend-web-development) by Andrew Welch.

* Sean Larkin has a free course on [Webpack: The Core Concepts](https://webpack.academy/p/the-core-concepts) and Jeffrey Way has released a fantastic free course on [Webpack for everyone](https://laracasts.com/series/webpack-for-everyone). Both of them are great introductions for diving into Webpack.

* [Webpack Fundamentals](https://frontendmasters.com/courses/webpack-fundamentals/) is a very comprehensive 4h course with Sean Larkin, released by FrontendMasters.

* If you are slightly more advanced, Rowan Oulton has published a [Field Guide for Better Build Performance with Webpack](https://slack.engineering/keep-webpack-fast-a-field-guide-for-better-build-performance-f56a5995e8f1) and Benedikt Rötsch has done tremendous research on [putting your Webpack bundle on a diet](https://www.contentful.com/blog/2017/10/27/put-your-webpack-bundle-on-a-diet-part-3/).

* [Webpack examples](https://github.com/webpack/webpack/tree/master/examples) has hundreds of ready-to-use Webpack configurations, categorized by topic and purpose. Bonus: there is also a [Webpack config configurator](https://webpack.jakoblind.no/) that generates a basic configuration file. 
+ +* [awesome-webpack](https://github.com/webpack-contrib/awesome-webpack) is a curated list of useful Webpack resources, libraries and tools, including articles, videos, courses, books and examples for Angular, React and framework-agnostic projects. +#### 10. Use progressive enhancement as a default. + +Keeping [progressive enhancement](https://www.aaron-gustafson.com/notebook/insert-clickbait-headline-about-progressive-enhancement-here/) as the guiding principle of your front-end architecture and deployment is a safe bet. Design and build the core experience first, and then enhance the experience with advanced features for capable browsers, creating [resilient](https://resilientwebdesign.com/) experiences. If your website runs fast on a slow machine with a poor screen in a poor browser on a sub-optimal network, then it will only run faster on a fast machine with a good browser on a decent network. + +#### 11. Choose a strong performance baseline + +With so many unknowns impacting loading — the network, thermal throttling, cache eviction, third-party scripts, parser blocking patterns, disk I/O, IPC latency, installed extensions, antivirus software and firewalls, background CPU tasks, hardware and memory constraints, differences in L2/L3 caching, RTTS — [JavaScript has the heaviest cost of the experience](https://medium.com/@addyosmani/the-cost-of-javascript-in-2018-7d8950fbb5d4), next to web fonts blocking rendering by default and images often consuming too much memory. With the performance bottlenecks [moving away from the server to the client](https://calendar.perfplanet.com/2017/tracking-cpu-with-long-tasks-api/), as developers, we have to consider all of these unknowns in much more detail. + +With a 170KB budget that already contains the critical-path HTML/CSS/JavaScript, router, state management, utilities, framework and the application logic, we have to thoroughly [examine network transfer cost, the parse/compile time and the runtime cost](https://www.twitter.com/kristoferbaxter/status/908144931125858304) of the framework of our choice. + +As [noted](https://twitter.com/sebmarkbage/status/829733454119989248) by Seb Markbåge, a good way to measure start-up costs for frameworks is to first render a view, then delete it and then render again as it can tell you how the framework scales. The first render tends to warm up a bunch of lazily compiled code, which a larger tree can benefit from when it scales. The second render is basically an emulation of how code reuse on a page affects the performance characteristics as the page grows in complexity. + +[!['Fast By Default: Modern Loading Best Practices' by Addy Osmani](https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/39c247a9-223f-4a6c-ae3d-db54a696ffcb/tti-budget-opt.png)](https://speakerdeck.com/addyosmani/fast-by-default-modern-loading-best-practices) + +From [Fast By Default: Modern Loading Best Practices](https://speakerdeck.com/addyosmani/fast-by-default-modern-loading-best-practices) by Addy Osmani (Slides 18, 19). + +#### 12. Evaluate each framework and each dependency. + +Now, [not every project needs a framework](https://twitter.com/jaffathecake/status/923805333268639744) and [not every page of a single-page-application needs to load a framework](https://medium.com/dev-channel/a-netflix-web-performance-case-study-c0bcde26a9d9). 
In Netflix’s case, "removing React, several libraries and the corresponding app code from the client-side reduced the total amount of JavaScript by over 200KB, causing [an over-50% reduction in Netflix’s Time-to-Interactivity](https://news.ycombinator.com/item?id=15567657) for the logged-out homepage." The team then utilized the time spent by users on the landing page to prefetch React for subsequent pages that users were likely to land on ([read on for details](https://jakearchibald.com/2017/netflix-and-react/)).

It might sound obvious but worth stating: some projects can also [benefit from removing an existing framework](https://twitter.com/jaffathecake/status/925320026411950080) altogether. Once a framework is chosen, you’ll be staying with it for at least a few years, so if you need to use one, make sure your choice [is informed](https://www.youtube.com/watch?v=6I_GwgoGm1w) and [well considered](https://medium.com/@ZombieCodeKill/choosing-a-javascript-framework-535745d0ab90#.2op7rjakk).

Inian Parameshwaran [has measured the performance footprint of the top 50 frameworks](https://youtu.be/wVY3-acLIoI?t=699) (against [_First Contentful Paint_](https://developers.google.com/web/tools/lighthouse/audits/first-contentful-paint) — the time from navigation to the time when the browser renders the first bit of content from the DOM). Inian discovered that, out there in the wild, Vue and Preact are the fastest across the board — both on desktop and mobile, followed by React ([slides](https://drive.google.com/file/d/1CoCQP7qyvkSQ4VG9L_PTWD5AF9wF28XT/view)). You could examine your framework candidates and the proposed architecture, and study how most solutions out there perform, e.g. with server-side rendering or client-side rendering, on average.

Baseline performance cost matters. According to a [study by Ankur Sethi](https://blog.uncommon.is/the-baseline-costs-of-javascript-frameworks-f768e2865d4a), "your React application will never load faster than about 1.1 seconds on an average phone in India, no matter how much you optimize it. Your Angular app will always take at least 2.7 seconds to boot up. The users of your Vue app will need to wait at least 1 second before they can start using it." You might not be targeting India as your primary market anyway, but users accessing your site with suboptimal network conditions will have a comparable experience. In exchange, your team gains maintainability and developer efficiency, of course. But this consideration needs to be deliberate.

You could go as far as evaluating a framework (or any JavaScript library) on Sacha Greif’s [12-point scale scoring system](https://medium.freecodecamp.org/the-12-things-you-need-to-consider-when-evaluating-any-new-javascript-library-3908c4ed3f49) by exploring features, accessibility, stability, performance, package ecosystem, community, learning curve, documentation, tooling, track record, team, compatibility and security, for example. But on a tough schedule, it’s a good idea to consider _at least_ the total cost on size + initial parse times before choosing an option; lightweight options such as [Preact](https://github.com/developit/preact), [Inferno](https://github.com/infernojs/inferno), [Vue](https://vuejs.org/), [Svelte](https://svelte.technology/) or [Polymer](https://github.com/Polymer/polymer) can get the job done just fine. The size of your baseline will define the constraints for your application’s code.

A good starting point is to choose a good default stack for your application. 
[Gatsby.js](http://gatsbyjs.org/) (React), [Preact CLI](https://github.com/developit/preact-cli), and [PWA Starter Kit](https://github.com/Polymer/pwa-starter-kit) provide reasonable defaults for fast loading out of the box on average mobile hardware. + +[![JavaScript processing times in 2018 by Addy Osmani](https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/53363a80-48ae-4f91-aed0-69d292e6d7a2/2018-js-processing-times.png)](https://medium.com/@addyosmani/the-cost-of-javascript-in-2018-7d8950fbb5d4) + +(Image credit: [Addy Osmani](https://medium.com/@addyosmani/the-cost-of-javascript-in-2018-7d8950fbb5d4)) ([Large preview](https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/53363a80-48ae-4f91-aed0-69d292e6d7a2/2018-js-processing-times.png)) + +#### 13. Consider using PRPL pattern and app shell architecture + +Different frameworks will have different effects on performance and will require different strategies of optimization, so you have to clearly understand all of the nuts and bolts of the framework you’ll be relying on. When building a web app, look into the [PRPL pattern](https://developers.google.com/web/fundamentals/performance/prpl-pattern/) and [application shell architecture](https://developers.google.com/web/updates/2015/11/app-shell). The idea is quite straightforward: Push the minimal code needed to get interactive for the initial route to render quickly, then use service worker for caching and pre-caching resources and then lazy-load routes that you need, asynchronously. + +[![PRPL Pattern in the application shell architecture](https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/bb4716e5-d25b-4b80-b468-f28d07bae685/app-build-components-dibweb-c-scalew-879-opt.png)](https://developers.google.com/web/fundamentals/performance/prpl-pattern/) + +[PRPL](https://developers.google.com/web/fundamentals/performance/prpl-pattern/) stands for Pushing critical resource, Rendering initial route, Pre-caching remaining routes and Lazy-loading remaining routes on demand. + +[![Application shell architecture](https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/6423db84-4717-4aeb-9174-7ae96bf4f3aa/appshell-1-o0t8qd-c-scalew-799-opt.jpg)](https://developers.google.com/web/updates/2015/11/app-shell) + +An [application shell](https://developers.google.com/web/updates/2015/11/app-shell) is the minimal HTML, CSS, and JavaScript powering a user interface. + +#### 14. Have you optimized the performance of your APIs? + +APIs are communication channels for an application to expose data to internal and third-party applications via so-called _endpoints_. When [designing and building an API](https://www.smashingmagazine.com/2012/10/designing-javascript-apis-usability/), we need a reasonable protocol to enable the communication between the server and third-party requests. [Representational State Transfer](https://www.smashingmagazine.com/2018/01/understanding-using-rest-api/) ([_REST_](http://web.archive.org/web/20130116005443/http://tomayko.com/writings/rest-to-my-wife)) is a well-established, logical choice: it defines a set of constraints that developers follow to make content accessible in a performant, reliable and scalable fashion. 
Web services that conform to the REST constraints, are called _RESTful web services_. + +As with good ol' HTTP requests, when data is retrieved from an API, any delay in server response will propagate to the end user, hence delaying rendering. When a resource wants to retrieve some data from an API, it will need to request the data from the corresponding endpoint. A component that renders data from several resources, such as an article with comments and author photos in each comment, may need several roundtrips to the server to fetch all the data before it can be rendered. Furthermore, the amount of data returned through REST is often more than what is needed to render that component. + +If many resources require data from an API, the API might become a performance bottleneck. [GraphQL](https://graphql.org/) provides a performant solution to these issues. Per se, GraphQL is a query language for your API, and a server-side runtime for executing queries by using a type system you define for your data. Unlike REST, GraphQL can retrieve all data in a single request, and the response will be exactly what is required, without _over_ or _under_-fetching data as it typically happens with REST. + +In addition, because GraphQL is using schema (metadata that tells how the data is structured), it can already organize data into the preferred structure, so, for example, [with GraphQL, we could remove JavaScript code used for dealing with state management](https://hackernoon.com/how-graphql-replaces-redux-3fff8289221d), producing a cleaner application code that runs faster on the client. + +If you want to get started with GraphQL, Eric Baer published two fantastic articles on yours truly Smashing Magazine: [A GraphQL Primer: Why We Need A New Kind Of API](https://www.smashingmagazine.com/2018/01/graphql-primer-new-api-part-1/) and [A GraphQL Primer: The Evolution Of API Design](https://www.smashingmagazine.com/2018/01/graphql-primer-new-api-part-2/) (_thanks for the hint, Leonardo!_). + +[![Hacker Noon](https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_auto/w_400/https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/5fda8d85-1151-4d0b-b2f6-da354ebae345/redux-rest-apollo-graphql.png)](https://hackernoon.com/how-graphql-replaces-redux-3fff8289221d) + +A difference between REST and GraphQL, illustrated via a conversation between Redux + REST on the left, an Apollo + GraphQL on the right. (Image source: [Hacker Noon](https://hackernoon.com/how-graphql-replaces-redux-3fff8289221d)) ([Large preview](https://cloud.netlifyusercontent.com/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/5fda8d85-1151-4d0b-b2f6-da354ebae345/redux-rest-apollo-graphql.png)) + +#### 15. Will you be using AMP or Instant Articles? + +Depending on the priorities and strategy of your organization, you might want to consider using Google’s [AMP](https://www.ampproject.org/) or Facebook’s [Instant Articles](https://instantarticles.fb.com/) or Apple’s [Apple News](https://www.apple.com/news/). You can achieve good performance without them, but AMP _does_ provide a solid performance framework with a free content delivery network (CDN), while Instant Articles will boost your visibility and performance on Facebook. + +The seemingly obvious benefit of these technologies for users is _guaranteed performance_, so at times they might even prefer AMP-/Apple News/Instant Pages-links over "regular" and potentially bloated pages. 
For content-heavy websites that are dealing with a lot of third-party content, these options could potentially help speed up render times dramatically. + +[Unless they don't.](https://timkadlec.com/remembers/2018-03-19-how-fast-is-amp-really/) According to Tim Kadlec, for example, "AMP documents tend to be faster than their counterparts, but they don’t necessarily mean a page is performant. AMP is not what makes the biggest difference from a performance perspective." + +A benefit for the website owner is obvious: discoverability of these formats on their respective platforms and [increased visibility in search engines](https://ethanmarcotte.com/wrote/ampersand/). You could build [progressive web AMPs](https://www.smashingmagazine.com/2016/12/progressive-web-amps/), too, by reusing AMPs as a data source for your PWA. Downside? Obviously, a presence in a walled garden places developers in a position to produce and maintain a separate version of their content, and in case of Instant Articles and Apple News [without actual URLs](https://www.w3.org/blog/TAG/2017/07/27/distributed-and-syndicated-content-whats-wrong-with-this-picture/) _(thanks Addy, Jeremy!)_. + +#### 16. Choose your CDN wisely + +Depending on how much dynamic data you have, you might be able to "outsource" some part of the content to a [static site generator](https://www.smashingmagazine.com/2015/11/static-website-generators-jekyll-middleman-roots-hugo-review/), pushing it to a CDN and serving a static version from it, thus avoiding database requests. You could even choose a [static-hosting platform](https://www.smashingmagazine.com/2015/11/modern-static-website-generators-next-big-thing/) based on a CDN, enriching your pages with interactive components as enhancements ([JAMStack](https://jamstack.org/)). In fact, some of those generators (like [Gatsby](https://www.gatsbyjs.org/blog/2017-09-13-why-is-gatsby-so-fast/) on top of React) are actually [website compilers](https://tomdale.net/2017/09/compilers-are-the-new-frameworks/) with many automated optimizations provided out of the box. As compilers add optimizations over time, the compiled output gets smaller and faster over time. + +Notice that CDNs can serve (and offload) dynamic content as well. So, restricting your CDN to static assets is not necessary. Double-check whether your CDN performs compression and conversion (e.g. image optimization in terms of formats, compression and resizing at the edge), [support for servers workers](https://www.filamentgroup.com/lab/servers-workers.html), edge-side includes, which assemble static and dynamic parts of pages at the CDN’s edge (i.e. the server closest to the user), and other tasks. + +Note: based on research by Patrick Meenan and Andy Davies, HTTP/2 is [effectively broken on many CDNs](https://github.com/andydavies/http2-prioritization-issues#cdns--cloud-hosting-services), so we shouldn’t be too optimistic about the performance boost there. 
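To make the "double-check your CDN" advice above concrete, a quick spot-check can reveal whether compression and caching are actually applied at the edge. This is a minimal sketch rather than a full audit, and `cdn.example.com/static/app.js` is a placeholder — point it at a real asset you serve:

```Python
# Minimal sketch: spot-check what a CDN actually returns for one asset.
# The URL is a placeholder — swap in a real asset from your own CDN.
import requests

asset_url = "https://cdn.example.com/static/app.js"
response = requests.get(asset_url)

# Is compression applied at the edge, and how is the asset being cached?
for header in ("content-encoding", "content-type", "cache-control", "age"):
    print(f"{header}: {response.headers.get(header)}")

# Rough body size after requests' automatic gzip/deflate decoding.
print(f"decoded body bytes: {len(response.content)}")
```

Run against a handful of representative assets (ideally in CI), a check like this catches a CDN configuration change that silently drops compression or caching headers.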
+ +> [译] [2019 前端性能优化年度总结 — 第一部分](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-1.md) +> **[译] [2019 前端性能优化年度总结 — 第二部分](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-2.md)** +> [译] [2019 前端性能优化年度总结 — 第三部分](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-3.md) +> [译] [2019 前端性能优化年度总结 — 第四部分](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-4.md) +> [译] [2019 前端性能优化年度总结 — 第五部分](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-5.md) +> [译] [2019 前端性能优化年度总结 — 第六部分](https://github.com/xitu/gold-miner/blob/master/TODO1/front-end-performance-checklist-2019-pdf-pages-6.md) + +> 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 + + +--- + +> [掘金翻译计划](https://github.com/xitu/gold-miner) 是一个翻译优质互联网技术文章的社区,文章来源为 [掘金](https://juejin.im) 上的英文分享文章。内容覆盖 [Android](https://github.com/xitu/gold-miner#android)、[iOS](https://github.com/xitu/gold-miner#ios)、[前端](https://github.com/xitu/gold-miner#前端)、[后端](https://github.com/xitu/gold-miner#后端)、[区块链](https://github.com/xitu/gold-miner#区块链)、[产品](https://github.com/xitu/gold-miner#产品)、[设计](https://github.com/xitu/gold-miner#设计)、[人工智能](https://github.com/xitu/gold-miner#人工智能)等领域,想要查看更多优质译文请持续关注 [掘金翻译计划](https://github.com/xitu/gold-miner)、[官方微博](http://weibo.com/juejinfanyi)、[知乎专栏](https://zhuanlan.zhihu.com/juejinfanyi)。 From d31f2493fd9e04aa819a6d22eec0a10ff1aa9dec Mon Sep 17 00:00:00 2001 From: Starrier <1342878298@qq.com> Date: Sun, 13 Jan 2019 13:07:28 +0800 Subject: [PATCH 46/54] =?UTF-8?q?=E5=88=A9=E7=94=A8=20Python=E4=B8=AD?= =?UTF-8?q?=E7=9A=84=20Bokeh=20=E5=AE=9E=E7=8E=B0=E6=95=B0=E6=8D=AE?= =?UTF-8?q?=E5=8F=AF=E8=A7=86=E5=8C=96=EF=BC=8C=E7=AC=AC=E4=B8=80=E9=83=A8?= =?UTF-8?q?=E5=88=86=EF=BC=9A=E5=85=A5=E9=97=A8=20(#4945)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Starrier:data visulization-1 * 译文格式修正 --- ...okeh-in-python-part-one-getting-started.md | 151 +++++++++--------- 1 file changed, 75 insertions(+), 76 deletions(-) diff --git a/TODO1/data-visualization-with-bokeh-in-python-part-one-getting-started.md b/TODO1/data-visualization-with-bokeh-in-python-part-one-getting-started.md index 1ae36d5f232..a2d70cc6b1f 100644 --- a/TODO1/data-visualization-with-bokeh-in-python-part-one-getting-started.md +++ b/TODO1/data-visualization-with-bokeh-in-python-part-one-getting-started.md @@ -2,67 +2,66 @@ > * 原文作者:[Will Koehrsen](https://towardsdatascience.com/@williamkoehrsen?source=post_header_lockup) > * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) > * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/data-visualization-with-bokeh-in-python-part-one-getting-started.md](https://github.com/xitu/gold-miner/blob/master/TODO1/data-visualization-with-bokeh-in-python-part-one-getting-started.md) -> * 译者: -> * 校对者: +> * 译者:[Starriers](https://github.com/Starriers) -# Data Visualization with Bokeh in Python, Part I: Getting Started +# 用 Python 中的 Bokeh 实现可视化数据,第一部分:入门 -**Elevate your visualization game** +**提升你的可视化游数据** -The most sophisticated statistical analysis can be meaningless without an effective means for communicating the results. 
This point was driven home by a recent experience I had on my research project, where we use [data science to improve building energy efficiency](https://arpa-e.energy.gov/?q=slick-sheet-project/virtual-building-energy-audits). For the past several months, one of my team members has been working on a technique called [wavelet transforms](http://disp.ee.ntu.edu.tw/tutorial/WaveletTutorial.pdf) which is used to analyze the frequency components of a time-series. The method achieves positive results, but she was having trouble explaining it without getting lost in the technical details.
+如果没有有效的方法来传达结果,那么再复杂的统计分析也毫无意义。这一点我在最近的研究项目中深有体会,我们使用[数据科学来提高建筑能效](https://arpa-e.energy.gov/?q=slick-sheet-project/virtual-building-energy-audits)。在过去的几个月里,我们团队的一位成员一直致力于研究一种叫做 [wavelet transforms](http://disp.ee.ntu.edu.tw/tutorial/WaveletTutorial.pdf) 的技术,用于分析时间序列的频率成分。该方法取得了积极的效果,但她很难在不陷入技术细节的前提下把它解释清楚。

-Exasperated, she asked me if I could make a visual showing the transformation. In a couple minutes using an R package called `gganimate`, I made a simple animation showing how the method transforms a time-series. Now, instead of struggling to explain wavelets, my team member can show the clip to provide an intuitive idea of how the technique works. My conclusion was we can do the most rigorous analysis, but at the end of the day, all people want to see is a gif! While this statement is meant to be humorous, it has an element of truth: results will have little impact if they cannot be clearly communicated, and often the best way for presenting the results of an analysis is with visualizations.
+她有些无奈,问我能不能做一个可视化来展示这种变换。我使用了叫做 `gganimate` 的 R 包,在几分钟之内制作了一个简单的动画,展示了该方法是如何变换时间序列的。现在,我的团队成员不必再费力地解释小波变换,只需播放这段动画,就能让人直观地了解这项技术是如何工作的。我的结论是,我们可以做最严格的分析,但归根结底,所有人都想看到的只是一张 gif!虽然这话带着玩笑的成分,但它蕴含着一个道理:如果不能清楚地传达结果,这些结果就很难产生影响,而展示分析结果的最佳方式往往就是可视化。

-The resources available for data science are advancing rapidly which is especially pronounced in the [realm of visualization](https://codeburst.io/overview-of-python-data-visualization-tools-e32e1f716d10) where it seems there is another option to try every week. With all these advances there is one common trend: increased interactivity. People like to see data in static graphs but what they enjoy even more is playing with the data to see how changing parameters affects the results. With regards to my research, a report telling a building owner how much electricity they can save by changing their AC schedule is nice, but it’s more effective to give them an interactive graph where they can choose different schedules and see how their choice affects electricity consumption. Recently, inspired by the trend towards interactive plots and a desire to keep learning new tools, I have been working with [Bokeh](https://bokeh.pydata.org/en/latest/), a Python library. 
An example of the interactive capabilities of Bokeh are shown in this dashboard I built for my research project: +可用于数据科学的资源正在迅速增加,在[可视化领域](https://codeburst.io/overview-of-python-data-visualization-tools-e32e1f716d10)中尤为明显,似乎每周都有一种新的尝试。随着这些技术的进步,它们逐渐出现了一个共同的趋势:增加交互性。人们喜欢在静态图中查看数据,但他们更喜欢的是使用数据,并利用这些数据来查看参数的变化对结果的影响。在我的研究中,有一份报告是用来告诉业主通过改变他们的空调使用时间可以节省下多少度电,但如果给他们一个可以交互的表,他们就可以自己选择不同的时间表,来观察不用时间是如何影响用电的,这种方式更加有效。最近,受交互式绘图趋势的启发,以及对不断学习新工具的渴望,我一直在学习使用一个叫做 [Bokeh](https://bokeh.pydata.org/en/latest/) 的 Python 库。我为我的研究项目构建的仪表盘中显示了 Bokeh 交互功能的一个示例: ![](https://cdn-images-1.medium.com/max/800/1*nN5-hITqzDlhelSJ2W9x5g.gif) -While I can’t share the code behind this project, I can walk through an example of building a fully-interactive Bokeh application using publicly available data. This series of articles will cover the entire process of creating an application using Bokeh. For this first post, we’ll cover the basic elements of Bokeh, which we’ll build upon in subsequent posts. Throughout this series, we’ll be working with the [nycflights13 dataset](https://cran.r-project.org/web/packages/nycflights13/nycflights13.pdf), which has records of over 300,000 flights from 2013. We will first concentrate on visualizing a single variable, in this case the arrival delay of flights in minutes and we’ll start by constructing a basic histogram, a classic method for display the spread and location of one continuous variable. The [full code is accessible on GitHub](https://github.com/WillKoehrsen/Bokeh-Python-Visualization) and the first Jupyter notebook can be found [here](https://github.com/WillKoehrsen/Bokeh-Python-Visualization/blob/master/intro/exploration/first_histogram.ipynb). This post focuses on the visuals so I encourage anyone to check out the code if they want to see the unglamorous, but necessary steps of data cleaning and formatting! +尽管我无法共享这个项目的整个代码,但我可以通过使用公开可用数据构建完全交互的 Bokeh 应用程序的示例。本系列文章将介绍使用 Bokeh 创建应用程序的整个过程。对于第一篇文章,我们将介绍 Bokeh 的基本元素,我们将在以后的文章中对其进行构建,在本系列文章中,我们将使用 [nycflights13 数据集](https://cran.r-project.org/web/packages/nycflights13/nycflights13.pdf),该数据集有 2013 年以来超过 30 万次航班的记录。我们首先将重点放在可视化单个变量上,在这种情况下,航班的延迟到达以分钟为单位,我们将从构造一个基本的柱状图开始,这是显示一个连续变量的扩展和位置的经典方法。[完整的代码可以在 GitHub 查看](https://github.com/WillKoehrsen/Bokeh-Python-Visualization),第一个 Jupyter notebook 可以在[这里](https://github.com/WillKoehrsen/Bokeh-Python-Visualization/blob/master/intro/exploration/first_histogram.ipynb)看到。这篇文章关注的是视觉效果,所以我鼓励任何人查看代码,如果他们想看到无聊但又必不可少数据清洗和格式化的步骤! -### Basics of Bokeh +### Bokeh 基础 -The major concept of Bokeh is that graphs are built up one layer at a time. We start out by creating a figure, and then we add elements, called [glyphs](https://bokeh.pydata.org/en/latest/docs/user_guide/plotting.html), to the figure. (For those who have used ggplot, the idea of glyphs is essentially the same as that of geoms which are added to a graph one ‘layer’ at a time.) Glyphs can take on many shapes depending on the desired use: circles, lines, patches, bars, arcs, and so on. Let’s illustrate the idea of glyphs by making a basic chart with squares and circles. First, we make a plot using the `figure` method and then we append our glyphs to the plot by calling the appropriate method and passing in data. Finally, we show our plot (I’m using a Jupyter Notebook which lets you see the plots right below the code if you use the `output_notebook` call). 
+Bokeh 的主要概念是一次建立一个图层。我们首先创建一个图,然后向图中添加名为 [glyphs](https://bokeh.pydata.org/en/latest/docs/user_guide/plotting.html) 的元素。(对于那些使用 ggplot 的人来说,glyphs 的概念与地理符号的想法本质上是一样的,他们一次添加到一个“图层”中。)根据所需的用途,glyphs 可以呈现多种形状:圆形、线条、补丁、条形、弧形等。让我们用正方形和圆形制作一个基本的图来说明 glyphs 的概念。首先,我们使用 `figure` 方法绘制一个图,然后通过调用适当的方法传入数据,将我们的 glyphs 添加到绘图中。最后,我们展示绘图(我使用的是 Jupyter Notebook,如果你使用时调用的是 `output_notebook`,就会看到对应的绘图)。 -``` -# bokeh basics +```Python +# bokeh 基础 from bokeh.plotting import figure from bokeh.io import show, output_notebook -# Create a blank figure with labels +# 创建带标签的空白图 p = figure(plot_width = 600, plot_height = 600, title = 'Example Glyphs', x_axis_label = 'X', y_axis_label = 'Y') -# Example data +# 示例数据 squares_x = [1, 3, 4, 5, 8] squares_y = [8, 7, 3, 1, 10] circles_x = [9, 12, 4, 3, 15] circles_y = [8, 4, 11, 6, 10] -# Add squares glyph +# 添加方形 glyph p.square(squares_x, squares_y, size = 12, color = 'navy', alpha = 0.6) -# Add circle glyph +# 添加圆形 glyph p.circle(circles_x, circles_y, size = 12, color = 'red') -# Set to output the plot in the notebook +# 设置为在笔记本中输出情节 output_notebook() -# Show the plot +# 显示绘图 show(p) ``` -This generates the slightly uninspiring plot below: +这就形成了下面略显平淡的绘图: ![](https://cdn-images-1.medium.com/max/800/1*fGSBddMUbg_N--xbBOdUOg.png) -While we could have easily made this chart in any plotting library, we get a few tools for free with any Bokeh plot which are on the right side and include panning, zooming, selection, and plot saving abilities. These tools are configurable and will come in handy when we want to investigate our data. +尽管在任何绘制图库中,我们都可以很容易地制作这个图表,但我们可以免费获取一些工具,其中包含位于右侧的 Bokeh 绘图,包括 panning,缩放和绘图保存功能。这些工具是可配置的,当我们想研究我们的数据时,这些工具会派上用场。 -Let’s now get to work on showing our flight delay data. Before we can jump right into the graph, we should load in the data and give it a brief inspection (**bold** is code output): +我们现在开始展示我们的航班延迟数据。在跳转到图形之前,我们应该加载数据并对其进行简短的检查(**粗体** 为输出代码): -``` -# Read the data from a csv into a dataframe +```Python +# 将 CSV 中的数据读入 flights = pd.read_csv('../data/flights.csv', index_col=0) -# Summary stats for the column of interest +# 兴趣栏的统计数据汇总 flights['arr_delay'].describe() count 327346.000000 @@ -75,13 +74,13 @@ min -86.000000 max 1272.000000 ``` -The summary stats give us information to inform our plotting decisions: we have 327,346 flights, with a minimum delay of -86 minutes (meaning the flight was early by 86 minutes) and a maximum delay of 1272 minutes, an astounding 21 hours! The 75% quantile is only at 14 minutes, so we can assume that numbers over 1000 minutes are likely outliers (which does not mean they are illegitimate, just extreme). I will focus on delays between -60 minutes and +120 minutes for our histogram. +摘要统计数据为我们作出决策提供了信息:我们有 327、346 次航班,最小延迟事件为 -86 分钟,最大延迟事件为 1272 分钟,令人震惊的 21 小时!75% 的分位数只有 14 分钟,所以我们可以假设 1000 分钟以上的数字可能是异常值(这并不意味着它们是非法的,只是极端的)。我会集中讨论 -60 到 120 分钟的延迟柱状图。 -A [histogram](https://www.moresteam.com/toolbox/histogram.cfm) is a common choice for an initial visualization of a single variable because it shows the distribution of the data. The x-position is the value of the variable grouped into intervals called bins, and the height of each bar represents the count (number) of data points in each interval. In our case, the x-position will represent the arrival delay in minutes and the height is the number of flights in the corresponding bin. Bokeh does not have a built-in histogram glyph, but we can make our own using the `quad` glyph which allows us to specify the bottom, top, left, and right edges of each bar. 
+[柱状图](https://www.moresteam.com/toolbox/histogram.cfm)是单个变量初始可视化的常见选择,因为它显示了分布式数据。x 位置是将变量分组成成为 bin 的间隔的值,每个条形的高度表示每个间隔数据点的计数(数目)。在我们的例子中,x 位置将代表以分钟为单位的延迟到达,高度是对应的 bin 中的航班数。Bokeh 没有内置的柱状图,但我们可以使用 `quad` glyph 来指定每个条形的底部、上、下、和右边距。 -To create the data for the bars, we will use the [numpy](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.histogram.html) `[histogram](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.histogram.html)` [function](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.histogram.html) which calculates the number of data points in each specified bin. We will use bins of 5 minute length which means the function will count the number of flights in each five minute delay interval. After generating the data, we put it in a [pandas dataframe to keep all the data in one object.](https://pandas.pydata.org/pandas-docs/stable/dsintro.html) The code here is not crucial for understanding Bokeh, but it’s useful nonetheless because of the prevalence of numpy and pandas in data science! +要创建条形图的数据,我们要使用 [numpy](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.histogram.html)、`[histogram](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.histogram.html)`、[function](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.histogram.html),它计算每个指定 bin 数据点的数值。我们使用 5 分钟的长度作为函数将计算航班数在每五分钟所花费的时间延误。在生成数据之后,我们将其放入一个 [pandas dataframe 来将所有的数据保存在一个对象中](https://pandas.pydata.org/pandas-docs/stable/dsintro.html)。这里的代码对于理解 Bokeh 并不是很重要,但鉴于 Numpy 和 pandas 在数据科学中的流行度,所以它还是有些用处的。 -``` +```Python """Bins will be five minutes in width, so the number of bins is (length of interval / 5). Limit delays to [-60, +120] minutes using the range.""" @@ -89,150 +88,150 @@ arr_hist, edges = np.histogram(flights['arr_delay'], bins = int(180/5), range = [-60, 120]) -# Put the information in a dataframe +# 将信息放入 dataframe delays = pd.DataFrame({'arr_delay': arr_hist, 'left': edges[:-1], 'right': edges[1:]}) ``` -Our data looks like this: +我们的数据看起来像这样: ![](https://cdn-images-1.medium.com/max/800/1*JSiAY3RSGOhur9agdzgEYQ.png) -The `flights`column is the count of the number of flights within each delay interval from `left`to `right`. From here, we can make a new Bokeh figure and add a quad glpyh specifying the appropriate parameters: +`flights` 列是从 `left` 到 `right` 的每个延迟间隔内飞行次数的计数。在这里,我们可以生成一个新的 Bokeh 图,并添加一个指定适当参数的 quad glpyh: -``` -# Create the blank plot +```Python +# 创建空白绘图 p = figure(plot_height = 600, plot_width = 600, title = 'Histogram of Arrival Delays', x_axis_label = 'Delay (min)]', y_axis_label = 'Number of Flights') -# Add a quad glyph +# 添加一个 quad glphy p.quad(bottom=0, top=delays['flights'], left=delays['left'], right=delays['right'], fill_color='red', line_color='black') -# Show the plot +# 显示绘图 show(p) ``` ![](https://cdn-images-1.medium.com/max/800/1*afCD1sc8mNPYrZ2kh2jfxg.png) -Most of the work in producing this graph comes in the data formatting which is not an unusual occurrence in data science! 
From our plot, we see that arrival delays are nearly normally distributed with a [slight positive skew or heavy tail on the right side.](http://www.statisticshowto.com/probability-and-statistics/skewed-distribution/)
+生成此图的大部分工作都是在数据格式化过程中进行的,这在数据科学中并不少见!从我们的绘图中可以看出,到达延误的分布近似正态,只是[略呈正偏态,右侧带有厚尾](http://www.statisticshowto.com/probability-and-statistics/skewed-distribution/)。

-There are easier ways to create a basic histogram in Python, and the same result could be done using a few lines of `[matplotlib](https://en.wikipedia.org/wiki/Matplotlib)`. However, the payoff in the development required for a Bokeh plot comes in the tools and ways to interact with the data that we can now easily add to the graph.
+有更简单的方法可以在 Python 中创建基本的柱状图,用几行 `[matplotlib](https://en.wikipedia.org/wiki/Matplotlib)` 就能得到相同的结果。但是,为 Bokeh 绘图投入额外开发的回报在于:我们现在可以很轻松地为图形加上与数据交互的各种工具和方式。

-### Adding Interactivity
+### 添加交互性

-The first type of interactivity we will cover in this series is passive interactions. These are actions the viewer can take which do not alter the data displayed. These are referred to as [inspectors](https://bokeh.pydata.org/en/latest/docs/reference/models/tools.html) because they allow viewers to “investigate” the data in more detail . A useful inspector is the tooltip which appears when a user mouses over data points and is called the [HoverTool in Bokeh](https://bokeh.pydata.org/en/latest/docs/user_guide/tools.html).
+我们将在本系列中讨论的第一类交互是被动交互。这些是查看者可以执行、但不会改变所显示数据的操作。它们被称为 [inspectors](https://bokeh.pydata.org/en/latest/docs/reference/models/tools.html),因为它们允许查看者更详细地“调查”数据。一个很有用的 inspector 是工具提示,它会在用户把鼠标悬停在数据点上时出现,也就是 [Bokeh 中的 HoverTool](https://bokeh.pydata.org/en/latest/docs/user_guide/tools.html)。

![](https://cdn-images-1.medium.com/max/800/1*3A33DOx2NL0h53SfsgPrzg.png)

-A basic Hover tooltip
+基础的悬停工具提示

-In order to add tooltips, we need to change our data source from a dataframe to a [ColumnDataSource, a key concept in Bokeh.](https://bokeh.pydata.org/en/latest/docs/reference/models/sources.html) This is an object specifically used for plotting that includes data along with several methods and attributes. The ColumnDataSource allows us to add annotations and interactivity to our graphs, and can be constructed from a pandas dataframe. The actual data itself is held in a dictionary accessible through the data attribute of the ColumnDataSource. Here, we create the source from our dataframe and look at the keys of the data dictionary which correspond to the columns of our dataframe. 
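As a side note, a ColumnDataSource does not have to come from a dataframe — it can wrap a plain dictionary, and replacing its `.data` attribute is the hook that active interactions later rely on to update a plot in place. A tiny sketch with made-up numbers (not the flights data):

```Python
# Hypothetical sketch with made-up numbers — not the flights data.
from bokeh.io import output_file, show
from bokeh.models import ColumnDataSource
from bokeh.plotting import figure

# A source built directly from a dict; a dataframe works the same way.
source = ColumnDataSource(data={'x': [1, 2, 3, 4], 'y': [4, 7, 5, 6]})

p = figure(plot_width=400, plot_height=400, title='ColumnDataSource sketch')
p.circle(x='x', y='y', size=12, source=source)

# Assigning a new dict to .data is what every glyph using this source reacts to —
# the same mechanism widgets and callbacks use to update a plot in place.
source.data = {'x': [1, 2, 3, 4], 'y': [2, 9, 3, 8]}

output_file('cds_sketch.html')
show(p)
```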
+为了添加工具提示,我们需要将数据源从 dataframe 中更改为来自 [ColumnDataSource,Bokeh 中的一个关键概念。](https://bokeh.pydata.org/en/latest/docs/reference/models/sources.html)这是一个专门用于绘图的对象,它包含数据以及方法和属性。ColumnDataSource 允许我们在图中添加注解和交互,也可以从 pandas dataframe 中进行构建。真实数据被保存在字典中,可以通过 ColumnDataSource 的 data 属性访问。这里,我们从数据源进行创建源,并查看数据字典中与 dataframe 列对应的键。 -``` -# Import the ColumnDataSource class +```Python +# 导入 ColumnDataSource 类 from bokeh.models import ColumnDataSource -# Convert dataframe to column data source +# 将 dataframe 转换为 列数据源 src = ColumnDataSource(delays) src.data.keys() dict_keys(['flights', 'left', 'right', 'index']) ``` -When we add glyphs using a ColumnDataSource, we pass in the ColumnDataSource as the `source` parameter and refer to the column names using strings: +我们使用 CloumDataSource 添加 glyphs 时,我们将 CloumnDataSource 作为 `source` 参数传入,并使用字符串引用列名: -``` -# Add a quad glyph with source this time +```Python +# 这次添加一个带有源的 quad glyph p.quad(source = src, bottom=0, top='flights', left='left', right='right', fill_color='red', line_color='black') ``` -Notice how the code refers to the specific data columns, such as ‘flights’, ‘left’, and ‘right’ by a single string instead of the `df['column']` format as before. +请注意,代码如何引用特定的数据列,比如 ‘flights’、‘left’ 和 ‘right’,而不是像以前那样使用 `df['column']` 格式。 -#### HoverTool in Bokeh +#### Bokeh 中的 HoverTool -The syntax of a HoverTool may seem a little convoluted at first, but with practice they are quite easy to create. We pass our `HoverTool`instance a list of `tooltips` as [Python tuples](https://www.tutorialspoint.com/python/python_tuples.htm) where the first element is the label for the data and the second references the specific data we want to highlight. We can reference either attributes of the graph, such as x or y position using ‘$’ or specific fields in our source using ‘@’. That probably sounds a little confusing so here’s an example of a HoverTool where we do both: +一开始,HoverTool 的语法看上去会有些复杂,但经过实践后,就会发现它们很容易创建。我们将 `HoverTool` 实例作为 `tooltips` 作为 [Python 元组](https://www.tutorialspoint.com/python/python_tuples.htm)传递给它,其中第一个元素是数据的标签,第二个元素引出我们要高亮显示的特定数据。我们可以使用 ‘$’ 引用图中任何属性,例如 x 或 y 的位置,也可以使用 ‘@’ 引用源中特定字段。这听起来可能有点令人困惑,所以这里有一个 HoverTool 的例子,我们在这两方面都可以这么做: -``` -# Hover tool referring to our own data field using @ and -# a position on the graph using $ +```Python +# 使用 @ 引用我们自己的数据字段 +# 使用 $ 在图上的位置悬停工具 h = HoverTool(tooltips = [('Delay Interval Left ', '@left'), ('(x,y)', '($x, $y)')]) ``` -Here, we reference the `left` data field in the ColumnDataSource (which corresponds to the ‘left’ column of the original dataframe) using ‘@’ and we reference the (x,y) position of the cursor using ‘$’. The result is below: +这里,我们使用 ‘@’ 引用 ColumnDataSource(它对应于原始 dataframe 的 ‘left’ 列)中的 `left` 数据字段,并使用 ‘$’ 引用光标的 (x,y) 位置。结果如下: ![](https://cdn-images-1.medium.com/max/800/1*fLiHCLkN15ZhCH9fk7GMXg.png) -Hover tooltip showing different data references +显示不同数据引用的悬停工具提示 -The (x,y) position is that of the mouse on the graph and is not very helpful for our histogram, because we to find the find the number of flights in a given bar which corresponds to the top of the bar. To fix that we will alter our tooltip instance to refer to the correct column. Formatting the data shown in a tooltip can be frustrating, so I usually create another column in my dataframe with the correct formatting. 
For example, if I want my tooltip to show the entire interval for a given bar, I create a formatted column in my dataframe: +(x,y) 位置上是鼠标的位置,对我们的柱状图没有太大的帮助,因为我们要找到给定条形中对应于条形顶部的飞行术。为了修复这个问题,我们将要修改我们的工具提示实例来引用正确的列。格式化工具提示中的数据显示可能会让人沮丧,因此我通常在 dataframe 中使用正确的格式创建另一列。例如,如果我希望我的工具提示显示给定条的整个隔间,我会在数据框中创建一个格式化列: -``` -# Add a column showing the extent of each interval +```Python +# 添加一个列,显示每个间隔的范围 delays['f_interval'] = ['%d to %d minutes' % (left, right) for left, right in zip(delays['left'], delays['right'])] ``` -Then I convert this dataframe into a ColumnDataSource and access this column in my HoverTool call. The following code creates the plot with a hover tool referring to two formatted columns and adds the tool to the plot: +然后,我将 dataframe 转换为 CloumnDataSource,并在 HoverTool 调用中访问该列。下面的代码使用引用两个格式化列的悬停工具创建绘图,把那个将该工具添加到绘图中。 -``` -# Create the blank plot +```Python +# 创建一个空白绘图 p = figure(plot_height = 600, plot_width = 600, title = 'Histogram of Arrival Delays', x_axis_label = 'Delay (min)]', y_axis_label = 'Number of Flights') -# Add a quad glyph with source this time +# 这次,添加带有源的 quad glyph p.quad(bottom=0, top='flights', left='left', right='right', source=src, fill_color='red', line_color='black', fill_alpha = 0.75, hover_fill_alpha = 1.0, hover_fill_color = 'navy') -# Add a hover tool referring to the formatted columns +# 添加引用格式化列的悬停工具 hover = HoverTool(tooltips = [('Delay', '@f_interval'), ('Num of Flights', '@f_flights')]) -# Style the plot +# 绘图样式 p = style(p) -# Add the hover tool to the graph +# 将悬停工具添加到图中 p.add_tools(hover) -# Show the plot +# 显示绘图 show(p) ``` -In the Bokeh style, we include elements in our chart by adding them to the original figure. Notice in the `p.quad` glyph call, there are a few additional parameters, `hover_fill_alpha` and `hover_fill_color`, that change the look of the glyph when we mouse over the bar. I also added in styling using a `style` function (see the notebook for the code). Aesthetics are tedious to type, so I usually write a function that I can apply to any plot. When I use styling, I keep things simple and focus on readability of labels. The main point of a plot is to show the data, and adding unnecessary elements only [detracts from the usefulness of a figure](https://en.wikipedia.org/wiki/Chartjunk)! The final plot is presented below: +在 Bokeh 样式中,我们以添加元素至原始的图中来将元素添加到表中。请注意,在 `p.quad` glyph 调用中,有几个额外的参数 `hover_fill_alpha` 和 `hover_fill_color`,当我们的鼠标移动到条图形时,这些参数会改变 glyph 的样式。我还添加了 `style` 函数(可在笔记中查看相关代码)。审美过程很无聊,所以通常我会写一个应用于任何绘图的函数。当我使用样式时,我会保持简单并专注于标签的可读性。绘图的主要目的是显示数据,添加不必要的元素只会[降低绘图的可用性](https://en.wikipedia.org/wiki/Chartjunk)!最后的绘图如下所示: ![](https://cdn-images-1.medium.com/max/800/1*3r9Ti_GFbByXTwamtq6jwA.png) -As we mouse over different bars, we get the precise statistics for that bar showing the interval and the number of flights within that interval. If we are proud of our plot, we can save it to an html file to share: +当我们的鼠标滑过不同的词条时,会得到该词条精确的统计数据,它表示间隔以及在该间隔内飞行的次数。如果对绘图比较满意,可以将其保存到 html 文件中进行共享: -``` -# Import savings function +```Python +# 导入保存函数 from bokeh.io import output_file -# Specify the output file and save +# 指定输出文件并保存 output_file('hist.html') show(p) ``` -### Further Steps and Conclusions +### 展望与总结 -It took me more than one plot to get the basic workflow of Bokeh so don’t worry if it seems there is a lot to learn. We’ll get plenty more practice over the course of this series! While it might seem like Bokeh is a lot of work, the benefits come when we want to extend our visuals beyond simple static figures. 
Once we have a basic chart, we can increase the effectiveness of the visual by adding more elements. For example, if we want to look at the arrival delay by airline, we can make an interactive chart allowing users to select and compare airlines. We will leave active interactions, those that change the data displayed, to the next post, but here’s a look at what we can do: +为了获取 Bokeh 的工作流程,我制作了很多次绘图,所以如果这看起来有很多东西要学的时候,不要担心。在本系列教程中,我们将得到更多的练习!虽然Bokeh 看起来似乎有很多工作要做,但是当我们想要将我们的视觉效果扩展到简单的静态图像之外的时候,它的好处就不言而喻了。一旦我们有了基本的图,我们就可以通过增加更多的元素来提高视觉效果。例如,如果我们想查看航空公司的延迟到达,我们可以制作一个交互式图,让用户选择和比较航空公司。我们将把主动交互(那些更改显示数据的交互)留到下一篇文章中,但下面是我们目前可以做的事情: ![](https://cdn-images-1.medium.com/max/800/1*avjUF5lUF-eYGs-N7OBPOg.gif) -Active interactions require a bit more involved scripting, but that gives us a chance to work on our Python! (If anyone wants to have a look at the code for this plot before the next article, [here it is](https://github.com/WillKoehrsen/Bokeh-Python-Visualization/blob/master/interactive/histogram.py).) +主动交互需要编写更多的脚本,但这给了我们可以使用 Python 的机会!(如果有人想在下一篇文章之前看一下绘图的代码[可以在这里进行查看](https://github.com/WillKoehrsen/Bokeh-Python-Visualization/blob/master/interactive/histogram.py)。) -Throughout this series, I want to emphasize that Bokeh or any one library tool will never be a one stop tool for all your plotting needs. Bokeh is great for allowing users to explore graphs, but for other uses, like simple [exploratory data analysis,](https://en.wikipedia.org/wiki/Exploratory_data_analysis) a lightweight library such as`matplotlib`likely will be more efficient. This series is meant to show the capabilities of Bokeh to give you another plotting tool you can rely on as needed. The more libraries you know, the better equipped you will be to use the right visualization tool for the task. +在本系列文章中,我想强调的是,Boken 或者任何一个库工具永远都不会是满足所有绘图需求的一站式解决工具。Bokeh 允许用户研究绘图,但对于其他应用,像简单的[探索性数据分析](https://en.wikipedia.org/wiki/Exploratory_data_analysis),`matplotlib` 这样的轻量级库可能会更高效。本系列旨在为你提供绘图工具的另一种选择,这需要更加需求来进行抉择。你知道的库越多,就越能高效地使用可视化工具完成任务。 -As always, I welcome constructive criticism and feedback. I can be reached on Twitter [@koehrsen_will](http://twitter.com/@koehrsen_will). 
+我一直以来都非常欢迎那些具有建设性的批评和反馈。你们可以在 Twitter [@koehrsen_will](http://twitter.com/@koehrsen_will) 上联系到我。 > 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 From 6f15adeae79ff7df72bd136224ac348a5fab5951 Mon Sep 17 00:00:00 2001 From: YueYong Date: Sat, 12 Jan 2019 23:48:25 -0600 Subject: [PATCH 47/54] =?UTF-8?q?=E5=88=A9=E7=94=A8=20Python=E4=B8=AD?= =?UTF-8?q?=E7=9A=84=20Bokeh=20=E5=AE=9E=E7=8E=B0=E6=95=B0=E6=8D=AE?= =?UTF-8?q?=E5=8F=AF=E8=A7=86=E5=8C=96=EF=BC=8C=E7=AC=AC=E4=B8=89=E9=83=A8?= =?UTF-8?q?=E5=88=86=EF=BC=9A=E5=88=B6=E4=BD=9C=E4=B8=80=E4=B8=AA=E5=AE=8C?= =?UTF-8?q?=E6=95=B4=E7=9A=84=E4=BB=AA=E8=A1=A8=E7=9B=98=20(#4940)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * 利用 Python中的 Bokeh 实现数据可视化,第三部分:制作一个完整的仪表盘 利用 Python中的 Bokeh 实现数据可视化,第三部分:制作一个完整的仪表盘 * Update data-visualization-with-bokeh-in-python-part-iii-a-complete-dashboard.md --- ...in-python-part-iii-a-complete-dashboard.md | 88 +++++++++---------- 1 file changed, 43 insertions(+), 45 deletions(-) diff --git a/TODO1/data-visualization-with-bokeh-in-python-part-iii-a-complete-dashboard.md b/TODO1/data-visualization-with-bokeh-in-python-part-iii-a-complete-dashboard.md index 76df9b7e0d8..218a87f42c7 100644 --- a/TODO1/data-visualization-with-bokeh-in-python-part-iii-a-complete-dashboard.md +++ b/TODO1/data-visualization-with-bokeh-in-python-part-iii-a-complete-dashboard.md @@ -2,48 +2,47 @@ > * 原文作者:[Will Koehrsen](https://towardsdatascience.com/@williamkoehrsen?source=post_header_lockup) > * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) > * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/data-visualization-with-bokeh-in-python-part-iii-a-complete-dashboard.md](https://github.com/xitu/gold-miner/blob/master/TODO1/data-visualization-with-bokeh-in-python-part-iii-a-complete-dashboard.md) -> * 译者: -> * 校对者: +> * 译者:[YueYong](https://github.com/YueYongDev) -# Data Visualization with Bokeh in Python, Part III: Making a Complete Dashboard +# 利用 Python中的 Bokeh 实现数据可视化,第三部分:制作一个完整的仪表盘 -**Creating an interactive visualization application in Bokeh** +**在 Bokeh 中创建交互式可视化应用程序** ![](https://cdn-images-1.medium.com/max/1000/1*wWPUyFSC0LlX960L3FTeDQ.jpeg) -Sometimes I learn a data science technique to solve a specific problem. Other times, as with Bokeh, I try out a new tool because I see some cool projects on Twitter and think: “That looks pretty neat. I’m not sure when I’ll use it, but it could come in handy.” Nearly every time I say this, I end up finding a use for the tool. Data science requires knowledge of many different skills and you never know where that next idea you will use will come from! +有时我会学习数据科学技术来解决特定问题。其他时候,我会尝试一种新工具,比如说 Bokeh,因为我在 Twitter 上看到一些很酷的项目,就会想:“那看起来很棒。虽然我不确定什么时候用,但迟早会有用的。”虽然几乎每次我都这么说,但是我最终都找到了这个工具的用途。数据科学需要许多不同能力方面的知识,你永远不知道下一个你将使用的想法将来自哪里! -In the case of Bokeh, several weeks after trying it out, I found a perfect use case in my work as a data science researcher. My [research project](https://arpa-e.energy.gov/?q=slick-sheet-project/virtual-building-energy-audits) involves increasing the energy efficiency of commercial buildings using data science, and, for a [recent conference](http://www.arpae-summit.com/about/about-the-summit), we needed a way to show off the results of the many techniques we apply. The usual suggestion of a powerpoint gets the job done, but doesn’t really stand out. 
By the time most people at a conference see their third slide deck, they have already stopped paying attention. Although I didn’t yet know Bokeh very well, I volunteered to try and make an interactive application with the library, thinking it would allow me to expand my skill-set and create an engaging way to show off our project. Skeptical, our team prepared a back-up presentation, but after I showed them some prototypes, they gave it their full support. The final interactive dashboard was a stand-out at the conference and will be adopted by our team for future use: +作为一个数据科学研究人员,在试用了几个星期之后,我终于在 Bokeh 的例子中找到了一个完美的用例。我的[研究项目](https://arpa-e.energy.gov/?q=slick-sheet-project/virtual-building-energy-audits)涉及利用数据科学提高商业建筑的能源效率。[在最近的一次会议](http://www.arpae-summit.com/about/about-the-summit)上,我们需要用一种方法来展示我们使用的众多技术的成果。通常情况下都建议使用 powerpoint 来完成这项任务,但是效果并不明显。大多数在会议中的人在看到第三张幻灯片时,就已经失去耐心了。尽管我对 Bokeh 还不是很熟悉,但我仍然自愿尝试利用这个库做一个交互式应用程序,我认为这会扩展我的技能,创造一个吸引人的方式来展示我们的项目。安全起见,我们团队准备了一个演示的备份,但在我向他们展示了一些初稿之后,他们给予了全力支持。最终的交互式仪表板在会议上脱颖而出,未来我们的团队也将会使用: ![](https://cdn-images-1.medium.com/max/800/1*nN5-hITqzDlhelSJ2W9x5g.gif) -Example of Bokeh Dashboard built [for my research](https://arpa-e.energy.gov/?q=slick-sheet-project/virtual-building-energy-audits) +为[我的研究](https://arpa-e.energy.gov/?q=slick-sheet-project/virtual-building-energy-audits)构建的 Bokeh 仪表盘的例子 -While not every idea you see on Twitter is probably going to be helpful to your career, I think it’s safe to say that knowing more data science techniques can’t possibly hurt. Along these lines, I started this series to share the capabilities of [Bokeh](https://bokeh.pydata.org/en/latest/), a powerful plotting library in Python that allows you to make interactive plots and dashboards. Although I can’t share the dashboard for my research, I can show the basics of building visualizations in Bokeh using a publicly available dataset. This third post is a continuation of my Bokeh series, with [Part I focused on building a simple graph,](https://towardsdatascience.com/data-visualization-with-bokeh-in-python-part-one-getting-started-a11655a467d4) and [Part II showing how to add interactions to a Bokeh plot](https://towardsdatascience.com/data-visualization-with-bokeh-in-python-part-ii-interactions-a4cf994e2512). In this post, we will see how to set up a full Bokeh application and run a local Bokeh server accessible in your browser! +虽然说并不是每一个你在 Twitter上看到的想法都可能对你的职业生涯产生帮助,但我可以负责的说,了解更多的数据科学技术不会有什么坏处。沿着这些思路,我开始了本系列文章,以展示 Bokeh 的功能,[Bokeh](https://bokeh.pydata.org/en/latest/) 是 Python 中一个强大的绘图库,他可以允许你制作交互式绘图和仪表盘。尽管我不能向你展示我的研究的仪表盘,但是我可以使用公开可用的数据集展示在 Bokeh 中构建可视化的基础知识。第三篇文章是我的 Bokeh 系列文章的延续,[第一部分着重于构建一个简单的图](https://towardsdatascience.com/data-visualization-with-bokeh-in-python-part-one-getting-started-a11655a467d4),[第二部分展示如何向 Bokeh 图中添加交互](https://towardsdatascience.com/data-visualization-with-bokeh-in-python-part-ii-interactions-a4cf994e2512)。在这篇文章中,我们将看到如何设置一个完整的 Bokeh 应用程序,并在您的浏览器中运行可访问的本地 Bokeh 服务器! -This article will focus on the structure of a Bokeh application rather than the plot details, but the full code for everything can be found on [GitHub.](https://github.com/WillKoehrsen/Bokeh-Python-Visualization) We will continue to use the [NYCFlights13 dataset](https://cran.r-project.org/web/packages/nycflights13/nycflights13.pdf), a real collection of flight information from flights departing 3 NYC airports in 2013. There are over 300,000 flights in the dataset, and for our dashboard, we will focus primarily on exploring the arrival delay information. 
+本文将重点介绍 Bokeh 应用程序的结构,而不是具体的细节,但是你可以在 [GitHub](https://github.com/WillKoehrsen/Bokeh-Python-Visualization) 上找到所有内容的完整代码。我们将会使用 [NYCFlights13 数据集](https://cran.r-project.org/web/packages/nycflights13/nycflights13.pdf),这是一个 2013 年从纽约 3 个机场起飞的航班的真实信息数据集。这个数据集中有超过 300,000 个航班信息,对于我们的仪表盘,我们将主要关注于到达延迟信息的统计。 -To run the full application for yourself, make sure you have Bokeh installed ( using `pip install bokeh`), [download the](https://github.com/WillKoehrsen/Bokeh-Python-Visualization/blob/master/bokeh_app.zip) `[bokeh_app.zip](https://github.com/WillKoehrsen/Bokeh-Python-Visualization/blob/master/bokeh_app.zip)` [folder](https://github.com/WillKoehrsen/Bokeh-Python-Visualization/blob/master/bokeh_app.zip) from GitHub, unzip it, open a command window in the directory, and type `bokeh serve --show bokeh_app`. This will set-up a [local Bokeh server](https://bokeh.pydata.org/en/latest/docs/user_guide/server.html) and open the application in your browser (you can also make Bokeh plots available publicly online, but for now we will stick to local hosting). +为了能完整运行整个应用程序,你需要先确保你已经安装了 Bokeh(使用 `pip install bokeh`),从 GitHub上 [下载](https://github.com/WillKoehrsen/Bokeh-Python-Visualization/blob/master/bokeh_app.zip) `[bokeh_app.zip](https://github.com/WillKoehrsen/Bokeh-Python-Visualization/blob/master/bokeh_app.zip)` [文件夹](https://github.com/WillKoehrsen/Bokeh-Python-Visualization/blob/master/bokeh_app.zip),解压,并在当前目录打开一个命令窗口,并输入 `bokeh serve --show bokeh_app`。这会设置一个 [Bokeh 的本地服务](https://bokeh.pydata.org/en/latest/docs/user_guide/server.html) 同时还会在你的浏览器中打开一个应用(当然你也可以使用 Bokeh 的在线服务,但是目前对我们来说本地主机足矣)。 -### Final Product +### 最终产品 -Before we get into the details, let’s take a look at the end product we’re aiming for so we can see how the pieces fit together. Following is a short clip showing how we can interact with the complete dashboard: +在我们深入讨论细节之前,让我们先来看看我们的最终产品,这样我们就可以看到各个部分是如何组合在一起的。下面是一个短片,展示了我们如何与完整的仪表盘互动: - YouTube 视频链接:https://youtu.be/VWi3HAlKOUQ -Final Bokeh Flights Application +Bokeh 航班应用最终版 -Here I am using the Bokeh application in a browser (in Chrome’s fullscreen mode) that is running on a local server. At the top we see a number of tabs, each of which contains a different section of the application. The idea of a dashboard is that while each tab can stand on its own, we can join many of them together to enable a complete exploration of the data. The video shows the range of charts we can make with Bokeh, from histograms and density plots, to data tables that we can sort by column, to fully interactive maps. Besides the range of figures we can create in Bokeh, another benefit of using this library is interactions. Each tab has an interactive element which lets users engage with the data and make their own discoveries. From experience, when exploring a dataset, people like to come to insights on their own, which we can allow by letting them select and filter data through various controls. +我在本地服务器上运行的浏览器(在 Chrome 的全屏模式下)中使用 Bokeh 应用程序。在顶部我们看到许多选项卡,每个选项卡包含不同部分的应用程序。仪表盘的想法是,虽然每个选项卡可以独立存在,但是我们可以将其中许多选项卡连接在一起,以支持对数据的完整探索。这段视频展示了我们可以用 Bokeh 制作的图表的范围,从直方图和密度图,到可以按列排序的数据表,再到完全交互式的地图。使用 Bokeh 这个库除了可以创建丰富的图形外,另一个好处是交互。每个标签都有一个交互元素可以让用户参与到数据中,并自己探索。从经验来看,当探索一个数据集时,人们喜欢自己去洞察,我们可以让他们通过各种控件来选择和过滤数据。 -Now that we have an idea of the dashboard we are aiming for, let’s take a look at how to create a Bokeh application. I highly recommend [downloading the code](https://github.com/WillKoehrsen/Bokeh-Python-Visualization/tree/master/bokeh_app) for yourself to follow along! 
+现在我们对目标仪表盘已经有一个概念了,接下来让我们看看如何创建 Bokeh 应用程序。我强烈建议你[下载这些代码](https://github.com/WillKoehrsen/Bokeh-Python-Visualization/tree/master/bokeh_app),以供参考! * * * -### Structure of a Bokeh Application +### Bokeh 应用的结构 -Before writing any code, it’s important to establish a framework for our application. In any project, it’s easy to get carried away coding and soon become lost in a mess of half-finished scripts and out-of-place data files, so we want to create a structure beforehand for all our codes and data to slot into. This organization will help us keep track of all the elements in our application and assist in debugging when things inevitably go wrong. Also, we can re-use this framework for future projects so our initial investment in the planning stage will pay off down the road. +在编写任何代码之前,为我们的应用程序建立一个框架是很重要的。在任何项目中,很容易被编码冲昏头脑,很快就会迷失在一堆尚未完成的脚本和错位的数据文件中,因此我们想要在编写代码和插入数据前先创建一个框架。这个组织将帮助我们跟踪应用程序中的所有元素,并在不可避免地出错时帮助我们进行调试。此外,我们可以在未来的项目中复用这个框架,这样我们在规划阶段的初始投资将在未来得到回报。 -To set up a Bokeh application, I create one parent directory to hold everything called `bokeh_app` . Within this directory, we will have a sub-directory for our data (called `data`), a sub-directory for our scripts (`scripts`), and a `main.py` script to pull everything together. Generally, to manage all the code, I have found it best to keep the code for each tab in a separate Python script and call them all from a single main script. Following is the file structure I use for a Bokeh application, adapted from the [official documentation](https://bokeh.pydata.org/en/latest/docs/user_guide/server.html). +为了设置一个 Boken 应用,我创建了一个名为 `bokeh_app` 的根目录来保存所有内容。在这个目录中,我们创建了一个子目录用来存档数据(命名为 `data`),另一个子目录用来存放脚本文件(命名为 `script`)并通过一个 `main.py` 文件将所有的东西组合在一起。通常,为了管理所有代码,我发现最好将每个选项卡的代码保存在单独的 Python 脚本中,并从单个主脚本调用它们。下面是我为 Bokeh 应用程序所创建的文件结构,它改编自[官方文档](https://bokeh.pydata.org/en/latest/docs/user_guide/server.html)。 ``` bokeh_app @@ -59,17 +58,17 @@ bokeh_app +--- main.py ``` -For the flights application, the structure follows the general outline: +对于 flight 应用程序,其结构大致如下: ![](https://cdn-images-1.medium.com/max/800/1*MvlTa19t4B5MLhY6329B7Q.png) -Folder structure of flights dashboard +航班仪表盘的文件夹结构 -There are three main parts: `data`, `scripts`, and `main.py,` under one parent`bokeh_app` directory. When it comes time to run the server, we tell Bokeh to serve the `bokeh_app` directory and it will automatically search for and run the `main.py` script. With the general structure in place, let’s take a look at `main.py` which is what I like to call the executive of the Bokeh application (not a technical term)! + 在 `bokeh_app` 目录下有三个主要部分:`data`、`scripts` 和 `main.py`。当需要运行服务器时,我们在 `bokeh_app` 目录运行 Bokeh,它会自动搜索并运行 `main.py` 脚本。有了总体结构之后,让我们来看看 `main.py` 文件,我把它称为 Bokeh 应用程序的启动程序(并不是专业术语)! ### `main.py` -The `main.py` script is like the executive of a Bokeh application. It loads in the data, passes it out to the other scripts, gets back the resulting plots, and organizes them into one single display. This will be the only script I show in its entirety because of how critical it is to the application: + `main.py` 脚本是 Bokeh 应用程序的启动脚本。它加载数据,并把传递给其他脚本,获取结果图,并将它们组织好后单个显示出来。这将是我展示的唯一一个完整的脚本,因为它对应用程序非常重要: ``` # Pandas for data management @@ -115,17 +114,17 @@ tabs = Tabs(tabs = [tab1, tab2, tab3, tab4, tab5]) curdoc().add_root(tabs) ``` -We start out with the necessary imports including the functions to make the tabs, each of which is stored in a separate script within the `scripts` directory. 
If you look at the file structure, notice that there is an `__init__.py` file in the `scripts` directory. This is a completely blank file that needs to be placed in the directory to allow us to import the appropriate functions using relative statements (e.g. `from scripts.histogram import histogram_tab` ). I’m not quite sure why this is needed, but it works (here’s the [Stack Overflow answer](https://stackoverflow.com/a/48468292/5755357) I used to figure this out). +我们从必要的导包开始,包括创建选项卡的函数,每个选项卡都存储在 `scripts` 目录中的单独脚本中。如果你看下文件结构,注意这里有一个 `__init__.py` 文件在 `scripts` 目录中。这是一个完全空白的文件,需要放在目录中,以允许我们使用相对语句导入适当的函数(例如 `from scripts.histogram import histogram_tab`)。我不太清楚为什么需要这样做,但它确实有效(我曾经解决过这个问题,这里是 [Stack Overflow 的答案](https://stackoverflow.com/a/48468292/5755357))。 -After the library and script imports, we read in the necessary data with help from the [Python](https://stackoverflow.com/questions/9271464/what-does-the-file-variable-mean-do/9271617) `[__file__](https://stackoverflow.com/questions/9271464/what-does-the-file-variable-mean-do/9271617)` [attribute](https://stackoverflow.com/questions/9271464/what-does-the-file-variable-mean-do/9271617). In this case, we are using two pandas dataframes ( `flights` and `map_data` ) as well as US states data that is included in Bokeh. Once the data has been read in, the script proceeds to delegation: it passes the appropriate data to each function, the functions each draw and return a tab, and the main script organizes all these tabs in a single layout called `tabs`. As an example of what each of these separate tab functions does, let’s look at the function that draws the `map_tab`. +在导入库和脚本后,我们利用 [Python](https://stackoverflow.com/questions/9271464/what-does-the-file-variable-mean-do/9271617) `[__file__](https://stackoverflow.com/questions/9271464/what-does-the-file-variable-mean-do/9271617)` [属性](https://stackoverflow.com/questions/9271464/what-does-the-file-variable-mean-do/9271617)读取必要的数据。在本例中,我们使用了两个 pandas 数据框(`flights` 和 `map_data`)以及包含在 Bokeh 中的美国各州的数据。读取数据之后,脚本继续进行执行:它将适当的数据传递给每个函数,每个函数绘制并返回一个选项卡,主脚本将所有这些选项卡组织在一个称为 `tabs` 的布局中。作为这些独立选项卡函数的示例,让我们来看看绘制 `map_tab` 的函数。 -This function takes in `map_data` (a formatted version of the flights data) and the US state data and produces a map of flight routes for selected airlines: +该函数接收 `map_data`(航班数据的格式化版本)和美国各州数据,并为选定的航空公司生成航线图: ![](https://cdn-images-1.medium.com/max/1000/1*fnxAzaoSwqrhX2K7RZJdeg.png) -Map Tab +地图选项卡 -We covered interactive plots in Part II of this series, and this plot is just an implementation of that idea. The overall structure of the function is: +我们在本系列的第 2 部分中介绍了交互式情节,而这个情节只是该思想的一个实现。功能整体结构为: ``` def map_tab(map_data, states): @@ -149,9 +148,9 @@ def map_tab(map_data, states): return tab ``` -We see the familiar `make_dataset`, `make_plot`, and `update` functions used to [draw the plot with interactive controls](https://towardsdatascience.com/data-visualization-with-bokeh-in-python-part-ii-interactions-a4cf994e2512). Once we have the plot set up, the final line returns the entire plot to the main script. Each individual script (there are 5 for the 5 tabs) follows the same pattern. +我们看到了熟悉的 `make_dataset`、`make_plot` 和 `update` 函数,这些函数用于[使用交互式控件绘制绘图](https://towardsdatascience.com/data- visualiz-with - bokehin - pythonpart -ii-interactions-a4cf994e2512)。一旦我们设置好了图,最后一行将整个图返回给主脚本。每个单独的脚本(5 个选项卡对应 5 个选项卡)都遵循相同的模式。 -Returning to the main script, the final touch is to gather the tabs and add them to a single document. 
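The tab scripts above only outline `make_dataset`, `make_plot`, and `update`; as a rough illustration of how that trio is usually wired to a widget, a compressed version might look like the sketch below. The `delay_tab` name, the `name`/`arr_delay` column names, and the bin choices are assumptions for illustration, not the code shipped in the `bokeh_app` repository:

```Python
# Hypothetical, compressed sketch of the make_dataset / update pattern in one tab.
import numpy as np
from bokeh.layouts import row
from bokeh.models import ColumnDataSource
from bokeh.models.widgets import Panel, Select
from bokeh.plotting import figure

def delay_tab(flights):
    """One tab: a carrier dropdown driving an arrival-delay histogram."""

    def make_dataset(carrier):
        subset = flights[flights['name'] == carrier]
        counts, edges = np.histogram(subset['arr_delay'], bins=36, range=[-60, 120])
        return dict(counts=counts, left=edges[:-1], right=edges[1:])

    def update(attr, old, new):
        # Runs on the Bokeh server whenever the dropdown value changes.
        src.data = make_dataset(carrier_select.value)

    carriers = sorted(flights['name'].unique())
    carrier_select = Select(title='Carrier', value=carriers[0], options=carriers)
    carrier_select.on_change('value', update)

    src = ColumnDataSource(data=make_dataset(carriers[0]))
    p = figure(plot_width=600, plot_height=400, title='Arrival Delays by Carrier')
    p.quad(source=src, bottom=0, top='counts', left='left', right='right')

    return Panel(child=row(carrier_select, p), title='Delays by Carrier')
```

Callbacks registered with `on_change` only fire when the document runs under `bokeh serve`, which is why the full dashboard lives in a served application rather than a static HTML file.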
+回到主脚本,最后一步是收集选项卡并将它们添加到一个单独的文档中。 ``` # Put all the tabs into one application @@ -161,45 +160,44 @@ tabs = Tabs(tabs = [tab1, tab2, tab3, tab4, tab5]) curdoc().add_root(tabs) ``` -The tabs appear at the top of the application, and much like tabs in any browser, we can easily switch between them to explore the data. +选项卡显示在应用程序的顶部,就像任何浏览器中的选项卡一样,我们可以轻松地在它们之间切换以查看数据。 ![](https://cdn-images-1.medium.com/max/1000/1*CUyrsJpP5lkvVdheseAYXQ.png) -### Running the Bokeh Server +### 运行 Bokeh 服务 -After all the set-up and coding required to make the plots, running the Bokeh server locally is quite simple. We open up a command line interface (I prefer Git Bash but any one will work), change to the directory containing `bokeh_app` and run `bokeh serve --show bokeh_app`. Assuming everything is coded correctly, the application will automatically open in our browser at the address `http://localhost:5006/bokeh_app`. We can then access the application and explore our dashboard! +在完成所有的设置和编码之后,在本地运行 Bokeh 服务器非常简单。我们打开一个命令行界面(我更喜欢 Git Bash,但任何一个都可以),切换到包含 `bokeh_app` 的目录,并运行 `bokeh serve --show bokeh_app`。假设所有代码都正确,应用程序将自动在浏览器中打开地址 `http://localhost:5006/bokeh_app`。然后,我们就可以访问应用程序并查看我们的仪表盘了! ![](https://cdn-images-1.medium.com/max/800/1*6orEuCOf0HsnCp_wzKPs3A.gif) -Final Bokeh Flights Application +Bokeh 航班应用最终版 -#### Debugging in a Jupyter Notebook +#### 在 Jupyter Notebook 中调试 -If something goes wrong (as it undoubtedly will the first few times we write a dashboard) it can be frustrating to have to stop the server, make changes to the files, and restart the server to see if our changes had the desired effect. To quickly iterate and resolve problems, I generally develop plots in a Jupyter Notebook. The Jupyter Notebook is a great environment for Bokeh development because you can create and test fully interactive plots from within the notebook. The syntax is a little different, but once you have a completed plot, the code just needs to be slightly modified and can then be copied and pasted into a standalone `.py` script. To see this in action, take a look at the [Jupyter Notebook](https://github.com/WillKoehrsen/Bokeh-Python-Visualization/blob/master/application/app_development.ipynb) I used to develop the application. +如果出了什么问题(在我们刚开始编写仪表盘的时候,肯定会出现这种情况),令人沮丧的是,我们必须停止服务器、对文件进行更改并重新启动服务器,以查看我们的更改是否达到了预期的效果。为了快速迭代和解决问题,我通常在 Jupyter Notebook 中开发图。Jupyter Notebook 对 Bokeh 来说是一个很好的开发环境,因为你可以在笔记本中创建和测试完全交互式的绘图。语法略有不同,但一旦你有了一个完整的图,代码只需稍加修改,就可以复制粘贴到一个独立的 `.py` 脚本。要了解这一点的实际应用,请查看 [Jupyter Notebook](https://github.com/WillKoehrsen/Bokeh-Python-Visualization/blob/master/application/app_development.ipynb)。 * * * -### Conclusions +### 总结 -A fully interactive Bokeh dashboard makes any data science project stand out. Oftentimes, I see my colleagues do a lot of great statistical work but then fail to clearly communicate the results, which means all that work doesn’t get the recognition it deserves. From personal experience, I have also seen how effective Bokeh applications can be in communicating results. While making a full dashboard is a lot of work (this one is over 600 lines of code!) the results are worthwhile. Moreover, once we have an application, we can quickly share it using GitHub and if we are smart about our structure, we can re-use the framework for additional projects. 
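To make the "slightly different syntax" point about notebook debugging concrete, here is a rough sketch (not taken from the linked notebook) of the only part that changes between the two environments:

```Python
# Rough sketch: the layout is built the same way in both environments.
from bokeh.layouts import column
from bokeh.plotting import figure

p = figure(plot_width=400, plot_height=300, title='Scratch plot')
p.circle([1, 2, 3], [4, 6, 5], size=10)
layout = column(p)

# While iterating in a Jupyter notebook, render it inline:
from bokeh.io import output_notebook, show
output_notebook()
show(layout)

# In the standalone main.py that `bokeh serve` runs, those two calls become:
# from bokeh.io import curdoc
# curdoc().add_root(layout)
```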
+一个完全可交互式的 Bokeh 仪表盘能使任何数据科学项目脱颖而出。我经常看到我的同事们做了很多非常棒的统计工作,但却不能清楚地传达结果,这意味着所有这些工作都没有得到应有的认可。从个人经验来看,我也看到了 Bokeh 应用程序在交流结果方面是多么有效。虽然制作一个完整的仪表盘需要做很多工作(这个项目就有 600 多行代码!),但结果是值得的。此外,一旦我们有了一个应用程序,就可以使用 GitHub 快速地共享它;而且只要我们对代码结构作了合理的设计,就可以在其他项目中重用这个框架。

-The key points to take away from this project are applicable to many data science projects in general:
+从这个项目中得出的关键点适用于许多常规数据科学项目:

-1. Having the proper framework/structure in place before you start on a data science task — Bokeh or anything else — is crucial. This way, you won’t find yourself lost in a forest of code trying to find errors. Also, once we develop a framework that works, it can be re-used with minimal effort, leading to dividends far down the road.
+1. 在开始一项数据科学任务(无论是 Bokeh 还是其他任务)之前,先搭建好适当的框架/结构是至关重要的。这样,你就不会发现自己迷失在查找错误的代码森林中。而且,一旦我们开发出一个有效的框架,就可以用最小的工作量复用它,从而在未来持续带来收益。

-2. Finding a debugging cycle that allows you to quickly iterate through ideas is crucial. The write code —see results — fix errors loop allowed by the Jupyter Notebook makes for a productive development cycle (at least for small scale projects).
+2. 找到一个能让你快速迭代想法的调试周期是至关重要的。Jupyter Notebook 支持编写代码—查看结果—修复错误的循环,这有助于提高开发周期的效率(至少对于小型项目来说是这样)。

-3. Interactive applications in Bokeh will elevate your project and encourage user engagement. A dashboard can be a stand alone exploratory project, or highlight all the tough analysis work you’ve already done!
+3. Bokeh 中的交互式应用程序将提升你的项目并鼓励用户参与。仪表盘可以是一个独立的探索性项目,也可以展示你已经完成的所有艰难的分析工作!

-4. You never know where you will find the next tool you will use in your work or side projects. Keep your eyes open, and don’t be afraid to experiment with new software and techniques!
+4. 你永远不知道会在哪里发现下一个可以用在工作或业余项目中的工具。所以睁大你的眼睛,不要害怕尝试新的软件和技术!

-That’s all for this post and for this series, although I plan on releasing additional stand-alone tutorials on Bokeh in the future. With libraries like Bokeh and plot.ly it’s becoming easier to make interactive figures and having a way to present your data science results in a compelling manner is crucial. Check out this [Bokeh GitHub repo](https://github.com/WillKoehrsen/Bokeh-Python-Visualization) for all my work and feel free to fork and get started with your own projects. For now, I’m eager to see what everyone else can create!
+这就是本文和本系列的全部内容,不过我计划在未来额外发布一些有关 Bokeh 的独立教程。以一种令人信服的方式展示数据科学成果是至关重要的,而有了像 Bokeh 和 plot.ly 这样的库,制作交互式图形正变得越来越容易。你可以在这个 [Bokeh GitHub repo](https://github.com/WillKoehrsen/Bokeh-Python-Visualization) 中查看我的所有相关工作,欢迎随意 fork 并开始你自己的项目。现在,我很期待看到其他人能创造出什么!

-As always, I welcome feedback and constructive criticism. I can be reached on Twitter [@koehrsen_will](https://twitter.com/koehrsen_will).
+一如既往地,我欢迎反馈和建设性的批评。你可以通过 Twitter [@koehrsen_will](https://twitter.com/koehrsen_will) 联系到我。 > 如果发现译文存在错误或其他需要改进的地方,欢迎到 [掘金翻译计划](https://github.com/xitu/gold-miner) 对译文进行修改并 PR,也可获得相应奖励积分。文章开头的 **本文永久链接** 即为本文在 GitHub 上的 MarkDown 链接。 - --- > [掘金翻译计划](https://github.com/xitu/gold-miner) 是一个翻译优质互联网技术文章的社区,文章来源为 [掘金](https://juejin.im) 上的英文分享文章。内容覆盖 [Android](https://github.com/xitu/gold-miner#android)、[iOS](https://github.com/xitu/gold-miner#ios)、[前端](https://github.com/xitu/gold-miner#前端)、[后端](https://github.com/xitu/gold-miner#后端)、[区块链](https://github.com/xitu/gold-miner#区块链)、[产品](https://github.com/xitu/gold-miner#产品)、[设计](https://github.com/xitu/gold-miner#设计)、[人工智能](https://github.com/xitu/gold-miner#人工智能)等领域,想要查看更多优质译文请持续关注 [掘金翻译计划](https://github.com/xitu/gold-miner)、[官方微博](http://weibo.com/juejinfanyi)、[知乎专栏](https://zhuanlan.zhihu.com/juejinfanyi)。 From e29215734af5375acb90452080a9e8abb9c08e6c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E7=94=9F=E7=B3=B8?= Date: Sun, 13 Jan 2019 16:05:16 +0800 Subject: [PATCH 48/54] =?UTF-8?q?=E4=BD=BF=E7=94=A8=20Stripe,=20Vue.js=20?= =?UTF-8?q?=E5=92=8C=20Flask=20=E6=8E=A5=E5=8F=97=E4=BB=98=E6=AC=BE=20(#49?= =?UTF-8?q?76)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * 使用 Stripe, Vue.js 和 Flask 接受付款 使用 Stripe, Vue.js 和 Flask 接受付款 * 按照 kasheemlew 的建议进行修改 * Update accepting-payments-with-stripe-vuejs-and-flask.md --- ...ng-payments-with-stripe-vuejs-and-flask.md | 276 +++++++++--------- 1 file changed, 138 insertions(+), 138 deletions(-) diff --git a/TODO1/accepting-payments-with-stripe-vuejs-and-flask.md b/TODO1/accepting-payments-with-stripe-vuejs-and-flask.md index 57d924122f2..2dee926ea79 100644 --- a/TODO1/accepting-payments-with-stripe-vuejs-and-flask.md +++ b/TODO1/accepting-payments-with-stripe-vuejs-and-flask.md @@ -2,26 +2,26 @@ > * 原文作者:[Michael Herman](https://testdriven.io/authors/herman/) > * 译文出自:[掘金翻译计划](https://github.com/xitu/gold-miner) > * 本文永久链接:[https://github.com/xitu/gold-miner/blob/master/TODO1/accepting-payments-with-stripe-vuejs-and-flask.md](https://github.com/xitu/gold-miner/blob/master/TODO1/accepting-payments-with-stripe-vuejs-and-flask.md) -> * 译者: -> * 校对者: +> * 译者:[Mcskiller](https://github.com/Mcskiller) +> * 校对者:[kasheemlew](https://github.com/kasheemlew) -# Accepting Payments with Stripe, Vue.js, and Flask +# 使用 Stripe, Vue.js 和 Flask 接受付款 ![](https://testdriven.io/static/images/blog/flask-vue-stripe/payments_vue_flask.png) -In this tutorial, we'll develop a web app for selling books using [Stripe](https://stripe.com/) (for payment processing), [Vue.js](https://vuejs.org/) (the client-side app), and [Flask](http://flask.pocoo.org/) (the server-side API). +在本教程中,我们将会开发一个使用 [Stripe](https://stripe.com/)(处理付款订单),[Vue.js](https://vuejs.org/)(客户端应用)以及 [Flask](http://flask.pocoo.org/)(服务端 API)的 web 应用来售卖书籍。 -> This is an intermediate-level tutorial. It assumes that you a have basic working knowledge of Vue and Flask. Review the following resources for more info: +> 这是一个进阶教程。我们默认您已经基本掌握了 Vue.js 和 Flask。如果你还没有了解过它们,请查看下面的链接以了解更多: > > 1. [Introduction to Vue](https://vuejs.org/v2/guide/index.html) > 2. [Flaskr: Intro to Flask, Test-Driven Development (TDD), and JavaScript](https://github.com/mjhea0/flaskr-tdd) -> 3. [Developing a Single Page App with Flask and Vue.js](https://testdriven.io/developing-a-single-page-app-with-flask-and-vuejs) +> 3. 
[用 Flask 和 Vue.js 开发一个单页面应用](https://juejin.im/post/5c1f7289f265da612e28a214) -_Final app_: +**最终效果**: ![final app](https://testdriven.io/static/images/blog/flask-vue-stripe/final.gif) -_Main dependencies:_ +**主要依赖**: * Vue v2.5.2 * Vue CLI v2.9.3 @@ -30,31 +30,31 @@ _Main dependencies:_ * Flask v1.0.2 * Python v3.6.5 -## Contents +## 目录 -* [Objectives](#objectives) -* [Project Setup](#project-setup) -* [What are we building?](#what-are-we-building) -* [Books CRUD](#books-crud) -* [Order Page](#order-page) -* [Form Validation](#form-validation) +* [目的](#目的) +* [项目安装](#项目安装) +* [我们要做什么?](#我们要做什么?) +* [CRUD 书籍](#CRUD-书籍) +* [订单页面](#订单页面) +* [表单验证](#表单验证) * [Stripe](#stripe) -* [Order Complete Page](#order-complete-page) -* [Conclusion](#conclusion) +* [订单完成页面](#订单完成页面) +* [总结](#总结) -## Objectives +## 目的 -By the end of this tutorial, you should be able to... +在本教程结束的时候,你能够... -1. Work with an existing CRUD app, powered by Vue and Flask -2. Create an order checkout component -3. Validate a form with vanilla JavaScript -4. Use Stripe to validate credit card information -5. Process payments using the Stripe API +1. 获得一个现有的 CRUD 应用,由 Vue 和 Flask 驱动 +2. 创建一个订单结算组件 +3. 使用原生 JavaScript 验证一个表单 +4. 使用 Stripe 验证信用卡信息 +5. 通过 Stripe API 处理付款 -## Project Setup +## 项目安装 -Clone the [flask-vue-crud](https://github.com/testdrivenio/flask-vue-crud) repo, and then check out the [v1](https://github.com/testdrivenio/flask-vue-crud/releases/tag/v1) tag to the master branch: +Clone [flask-vue-crud](https://github.com/testdrivenio/flask-vue-crud) 仓库,然后在 master 分支找到 [v1](https://github.com/testdrivenio/flask-vue-crud/releases/tag/v1) 标签: ``` $ git clone https://github.com/testdrivenio/flask-vue-crud --branch v1 --single-branch @@ -62,7 +62,7 @@ $ cd flask-vue-crud $ git checkout tags/v1 -b master ``` -Create and activate a virtual environment, and then spin up the Flask app: +搭建并激活一个虚拟环境,然后运行 Flask 应用: ``` $ cd server @@ -72,15 +72,15 @@ $ source env/bin/activate (env)$ python app.py ``` -> The above commands, for creating and activating a virtual environment, may differ depending on your environment and operating system. +> 上述搭建环境的命令可能因操作系统和运行环境而异。 -Point your browser of choice at [http://localhost:5000/ping](http://localhost:5000/ping). You should see: +用浏览器访问 [http://localhost:5000/ping](http://localhost:5000/ping)。你会看到: ``` "pong!" ``` -Then, install the dependencies and run the Vue app in a different terminal tab: +然后,安装依赖并在另一个终端中运行 Vue 应用: ``` $ cd client @@ -88,33 +88,33 @@ $ npm install $ npm run dev ``` -Navigate to [http://localhost:8080](http://localhost:8080). Make sure the basic CRUD functionality works as expected: +转到 [http://localhost:8080](http://localhost:8080)。确保 CRUD 基本功能正常工作: ![v1 app](https://testdriven.io/static/images/blog/flask-vue-stripe/v1.gif) -> Want to learn how to build this project? Check out the [Developing a Single Page App with Flask and Vue.js](https://testdriven.io/developing-a-single-page-app-with-flask-and-vuejs) blog post. +> 想学习如何构建这个项目?查看 [用 Flask 和 Vue.js 开发一个单页面应用](https://juejin.im/post/5c1f7289f265da612e28a214) 文章。 -## What are we building? +## 我们要做什么? -Our goal is to build a web app that allows end users to purchase books. +我们的目标是构建一个允许终端用户购买书籍的 web 应用。 -The client-side Vue app will display the books available for purchase, collect payment information, obtain a token from Stripe, and send that token along with the payment info to the server-side. 
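为了让上面描述的付款流程更直观,这里先补充一个服务端处理的最小示意(非原文内容,也不是本教程后文的实际代码):假设前端把 Stripe token 和图书信息 POST 到一个 `/charge` 端点,Flask 再调用 stripe-python 库创建扣款。端点名称和 `stripeToken`、`book` 等字段名都只是假设:

```
# 补充示意(非教程实际代码):用 stripe-python 根据前端传来的 token 创建一笔扣款。
# 端点名 /charge、字段 stripeToken / book 均为假设,密钥请换成你自己的测试密钥。
import stripe
from flask import Flask, jsonify, request

app = Flask(__name__)
stripe.api_key = 'sk_test_your_secret_key'  # 切勿把生产密钥硬编码在代码里


@app.route('/charge', methods=['POST'])
def create_charge():
    data = request.get_json()
    charge = stripe.Charge.create(
        amount=int(float(data['book']['price']) * 100),  # Stripe 以最小货币单位(美分)计价
        currency='usd',
        source=data['stripeToken'],
        description='Charge for {}'.format(data['book']['title']),
    )
    return jsonify({'status': 'success', 'charge_id': charge.id})
```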
+客户端 Vue 应用将会显示出可供购买的书籍并记录付款信息,然后从 Stripe 获得 token,最后发送 token 和付款信息到服务端。 -The Flask app then takes that info, packages it together, and sends it to Stripe to process charges. +然后 Flask 应用获取到这些信息,并把它们都打包发送到 Stripe 去处理。 -Finally, we'll use a client-side Stripe library, [Stripe.js](https://stripe.com/docs/stripe-js/v2), to generate a unique token for creating a charge and a server-side Python [library](https://github.com/stripe/stripe-python) for interacting with the Stripe API. +最后,我们会用到一个客户端 Stripe 库 [Stripe.js](https://stripe.com/docs/stripe-js/v2),它会生成一个专有 token 来创建账单,然后使用服务端 Python [Stripe 库](https://github.com/stripe/stripe-python)和 Stripe API 交互。 ![final app](https://testdriven.io/static/images/blog/flask-vue-stripe/final.gif) -> Like the previous [tutorial](https://testdriven.io/developing-a-single-page-app-with-flask-and-vuejs), we'll only be dealing with the happy path through the app. Check your understanding by incorporating proper error-handling on your own. +> 和之前的 [教程](https://testdriven.io/developing-a-single-page-app-with-flask-and-vuejs) 一样,我们会简化步骤,你应该自己处理产生的其他问题,这样也会加强你的理解。 -## Books CRUD +## CRUD 书籍 -First, let's add a purchase price to the existing list of books on the server-side and update the appropriate CRUD functions on the client - GET, POST, and PUT. +首先,让我们将购买价格添加到服务器端的现有书籍列表中,然后在客户端上更新相应的 CRUD 函数 GET,POST 和 PUT。 ### GET -Start by adding the `price` to each dict in the `BOOKS` list in _server/app.py_: +首先在 **server/app.py** 中添加 `price` 到 `BOOKS` 列表的每一个字典元素中: ``` BOOKS = [ @@ -142,7 +142,7 @@ BOOKS = [ ] ``` -Then, update the table in the `Books` component, _client/src/components/Books.vue_, to display the purchase price: +然后,在 `Books` 组件 **client/src/components/Books.vue** 中更新表格以显示购买价格。 ``` @@ -182,13 +182,13 @@ Then, update the table in the `Books` component, _client/src/components/Books.vu
``` -You should now see: +你现在应该会看到: ![default vue app](https://testdriven.io/static/images/blog/flask-vue-stripe/price.png) ### POST -Add a new `b-form-group` to the `addBookModal`, between the author and read `b-form-group`s: +添加一个新 `b-form-group` 到 `addBookModal` 中,在 Author 和 read 的 `b-form-group` 类之间: ``` ``` -The modal should now look like: +这个模态现在看起来应该是这样: ``` @@ -253,7 +253,7 @@ The modal should now look like: ``` -Then, add `price` to the state: +然后,添加 `price` 到 `addBookForm` 属性中: ``` addBookForm: { @@ -264,11 +264,11 @@ addBookForm: { }, ``` -The state is now bound to the form's input value. Think about what this means. When the state is updated, the form input will be updated as well - and vice versa. Here's an example of this in action with the [vue-devtools](https://github.com/vuejs/vue-devtools) browser extension: +`addBookForm` 现在和表单的输入值进行了绑定。想想这意味着什么。当 `addBookForm` 被更新时,表单的输入值也会被更新,反之亦然。以下是 [vue-devtools](https://github.com/vuejs/vue-devtools) 浏览器扩展的示例。 ![state model bind](https://testdriven.io/static/images/blog/flask-vue-stripe/state-model-bind.gif) -Add the `price` to the `payload` in the `onSubmit` method like so: +将 `price` 添加到 `onSubmit` 方法的 `payload` 中,像这样: ``` onSubmit(evt) { @@ -287,7 +287,7 @@ onSubmit(evt) { }, ``` -Update `initForm` to clear out the value after the end user submits the form or clicks the "reset" button: +更新 `initForm` 函数,在用户提交表单点击 "重置" 按钮后清除已有的值: ``` initForm() { @@ -302,7 +302,7 @@ initForm() { }, ``` -Finally, update the route in _server/app.py_: +最后,更新 **server/app.py** 中的路由: ``` @app.route('/books', methods=['GET', 'POST']) @@ -323,35 +323,35 @@ def all_books(): return jsonify(response_object) ``` -Test it out! +赶紧测试一下吧! ![add book](https://testdriven.io/static/images/blog/flask-vue-stripe/add-book.gif) -> Don't forget to handle errors on both the client and server! +> 不要忘了处理客户端和服务端的错误! ### PUT -Do the same, on your own, for editing a book: +同样的操作,不过这次是编辑书籍,该你自己动手了: -1. Add a new form input to the modal -2. Update `editForm` in the state -3. Add the `price` to the `payload` in the `onSubmitUpdate` method -4. Update `initForm` -5. Update the server-side route +1. 添加一个新输入表单到模态中 +2. 更新属性中的 `editForm` 部分 +3. 添加 `price` 到 `onSubmitUpdate` 方法的 `payload` 中 +4. 更新 `initForm` +5. 更新服务端路由 -> Need help? Review the previous section again. You can also grab the final code from the [flask-vue-crud](https://github.com/testdrivenio/flask-vue-crud) repo. +> 需要帮助吗?重新看看前面的章节。或者你可以从 [flask-vue-crud](https://github.com/testdrivenio/flask-vue-crud) 仓库获得源码。 ![edit book](https://testdriven.io/static/images/blog/flask-vue-stripe/edit-book.gif) -## Order Page +## 订单页面 -Next, let's add an order page where users will be able to enter their credit card information to purchase a book. +接下来,让我们添加一个订单页面,用户可以在其中输入信用卡信息来购买图书。 -TODO: add image +TODO:添加图片 -### Add a purchase button +### 添加一个购买按钮 -Start by adding a "purchase" button to the `Books` component, just below the "delete" button: +首先给 `Books` 组件添加一个“购买”按钮,就在“删除”按钮的下方: ``` @@ -373,13 +373,13 @@ Start by adding a "purchase" button to the `Books` component, just below the "de ``` -Here, we used the [router-link](https://router.vuejs.org/api/#router-link) component to generate an anchor tag that links back to a route in _client/src/router/index.js_, which we'll set up shortly. 
+这里,我们使用了 [router-link](https://router.vuejs.org/api/#router-link) 组件来生成一个连接到 **client/src/router/index.js** 中的路由的锚点,我们马上就会用到它。 ![default vue app](https://testdriven.io/static/images/blog/flask-vue-stripe/purchase-button.png) -### Create the template +### 创建模板 -Add a new component file called _Order.vue_ to "client/src/components": +添加一个叫做 **Order.vue** 的新组件文件到 **client/src/components**: ``` ``` -> You'll probably want to collect the buyer's contact details, like first and last name, email address, shipping address, and so on. Do this on your own. +> 你可能会想收集买家的联系信息,比如姓名,邮件地址,送货地址等等。这就得靠你自己了。 -### Add the route +### 添加路由 -_client/src/router/index.js_: +**client/src/router/index.js**: ``` import Vue from 'vue'; @@ -481,17 +481,17 @@ export default new Router({ }); ``` -Test it out. +测试一下。 ![order page](https://testdriven.io/static/images/blog/flask-vue-stripe/order-page.gif) -### Get the product info +### 获取产品信息 -Next, let's update the placeholders for the book title and amount on the order page: +接下来,让我们在订单页面 上更新书名和金额的占位符: ![order page](https://testdriven.io/static/images/blog/flask-vue-stripe/order-page-placeholders.png) -Hop back over to the server-side and update the following route handler: +回到服务端并更新以下路由接口: ``` @app.route('/books/', methods=['GET', 'PUT', 'DELETE']) @@ -521,7 +521,7 @@ def single_book(book_id): return jsonify(response_object) ``` -Now, we can hit this route to add the book information to the order page within the `script` section of the component: +我们可以在 `script` 中使用这个路由向订单页面添加书籍信息: ``` ``` -> Shipping to production? You will want to use an environment variable to dynamically set the base server-side URL (which is currently `http://localhost:5000`). Review the [docs](https://vuejs-templates.github.io/webpack/env.html) for more info. +> 转到生产环境?你将需要使用环境变量来动态设置基本服务器端 URL(现在 URL 为 `http://localhost:5000`)。查看 [文档](https://vuejs-templates.github.io/webpack/env.html) 获取更多信息。 -Then, update the first `ul` in the template: +然后,更新 template 中的第一个 `ul`: ```
    @@ -569,15 +569,15 @@ Then, update the first `ul` in the template:
```

-You should now see:
+你现在会看到:

![order page](https://testdriven.io/static/images/blog/flask-vue-stripe/order-page-sans-placeholders.png)

-## Form Validation
+## 表单验证

-Let's set up some basic form validation.
+让我们设置一些基本的表单验证。

-Use the `v-model` directive to [bind](https://vuejs.org/v2/guide/forms.html) form input values back to the state:
+使用 `v-model` 指令将表单输入值[绑定](https://vuejs.org/v2/guide/forms.html)到组件属性上:

```
@@ -608,7 +608,7 @@ Use the `v-model` directive to [bind](https://vuejs.org/v2/guide/forms.html) for
``` -Add the card to the state like so: +添加 card 属性,就像这样: ``` card: { @@ -618,13 +618,13 @@ card: { }, ``` -Next, update the "submit" button so that when the button is clicked, the normal browser behavior is [ignored](https://vuejs.org/v2/guide/events.html#Event-Modifiers) and a `validate` method is called instead: +接下来,更新“提交”按钮,以便在单击按钮时忽略正常的浏览器行为,并调用 `validate` 方法: ``` ``` -Add an array to the state to hold any validation errors: +将数组添加到属性中以保存验证错误信息: ``` data() { @@ -645,7 +645,7 @@ data() { }, ``` -Just below the form, we can iterate and display the errors: +就添加在表单的下方,我们能够依次显示所有错误: ```
@@ -658,7 +658,7 @@ Just below the form, we can iterate and display the errors:
``` -Add the `validate` method: +添加 `validate` 方法: ``` validate() { @@ -682,9 +682,9 @@ validate() { }, ``` -Since all fields are required, we are simply validating that each field has a value. Keep in mind that Stripe will validate the actual credit card info, which you'll see in the next section, so you don't need to go overboard with form validation. That said, be sure to validate any additional fields that you may have added on your own. +由于所有字段都是必须填入的,而我们只是验证了每一个字段是否都有一个值。Stripe 将会验证下一节你看到的信用卡信息,所以你不必过度验证表单信息。也就是说,只需要保证你自己添加的其他字段通过验证。 -Finally, add a `createToken` method: +最后,添加 `createToken` 方法: ``` createToken() { @@ -693,19 +693,19 @@ createToken() { }, ``` -Test this out. +测试一下。 ![form validation](https://testdriven.io/static/images/blog/flask-vue-stripe/form-validation.gif) ## Stripe -Sign up for a [Stripe](https://stripe.com) account, if you don't already have one, and grab the _test mode_ [API Publishable key](https://stripe.com/docs/keys). +如果你没有 [Stripe](https://stripe.com) 账号的话需要先注册一个,然后再去获取你的 测试模式 [API Publishable key](https://stripe.com/docs/keys)。 ![stripe dashboard](https://testdriven.io/static/images/blog/flask-vue-stripe/stripe-dashboard-keys-publishable.png) -### Client-side +### 客户端 -Add the key to the state along with `stripeCheck` (which will be used to disable the submit button): +添加 stripePublishableKey 和 `stripeCheck`(用来禁用提交按钮)到 data 中: ``` data() { @@ -728,9 +728,9 @@ data() { }, ``` -> Make sure to add your own Stripe key to the above code. +> 确保添加你自己的 Stripe key 到上述代码中。 -Again, if the form is valid, the `createToken` method is triggered, which validates the credit card info (via [Stripe.js](https://stripe.com/docs/stripe-js/v2)) and then either returns an error (if invalid) or a unique token (if valid): +同样,如果表单有效,触发 `createToken` 方法(通过 [Stripe.js](https://stripe.com/docs/stripe-js/v2))验证信用卡信息然后返回一个错误信息(如果无效)或者返回一个 token(如果有效): ``` createToken() { @@ -749,7 +749,7 @@ createToken() { }, ``` -If there are no errors, we send the token to the server, where we'll charge the card, and then send the user back to the main page: +如果没有错误的话,我们就发送 token 到服务器,在那里我们会完成扣费并把用户转回主页: ``` createToken() { @@ -780,7 +780,7 @@ createToken() { }, ``` -Update `createToken()` with the above code, and then add [Stripe.js](https://stripe.com/docs/stripe-js/v2) to _client/index.html_: +按照上述代码更新 `createToken()`,然后添加 [Stripe.js](https://stripe.com/docs/stripe-js/v2) 到 **client/index.html** 中: ``` @@ -798,9 +798,9 @@ Update `createToken()` with the above code, and then add [Stripe.js](https://str ``` -> Stripe supports v2 and v3 ([Stripe Elements](https://stripe.com/elements)) of Stripe.js. If you're curious about Stripe Elements and how you can integrate it into Vue, refer to the following resources: 1. [Stripe Elements Migration Guide](https://stripe.com/docs/stripe-js/elements/migrating) 1\. [Integrating Stripe Elements and Vue.js to Set Up a Custom Payment Form](https://alligator.io/vuejs/stripe-elements-vue-integration/) +> Stripe 支持 v2 和 v3([Stripe Elements](https://stripe.com/elements))版本的 Stripe.js。如果你对 Stripe Elements 和如何把它集成到 Vue 中感兴趣,参阅以下资源:1. [Stripe Elements 迁移指南](https://stripe.com/docs/stripe-js/elements/migrating) 2. [集成 Stripe Elements 和 Vue.js 来创建一个自定义付款表单](https://alligator.io/vuejs/stripe-elements-vue-integration/) -Now, when `createToken` is triggered, `stripeCheck` is set to `true`. 
To prevent duplicate charges, let's disable the "submit" button when `stripeCheck` is `true`:
+现在,当 `createToken` 被触发时,`stripeCheck` 会被设置为 `true`。为了防止重复收费,我们在 `stripeCheck` 为 `true` 时禁用“提交”按钮:

```
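<!-- 补充注释(非原文内容):假设下方按钮通过 :disabled="stripeCheck" 绑定禁用状态,
     在等待 Stripe 返回结果期间按钮不可点击,从而避免重复提交造成重复扣款。 -->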