diff --git a/locale/uk/404.md b/locale/uk/404.md new file mode 100755 index 0000000000000..4481f3bdb99ff --- /dev/null +++ b/locale/uk/404.md @@ -0,0 +1,7 @@ +--- +layout: page.hbs +permalink: false +title: 404 +--- +## 404: Сторінку не знайдено +### ENOENT: немає такого файла або каталога diff --git a/locale/uk/about/advisory-board/index.md b/locale/uk/about/advisory-board/index.md new file mode 100755 index 0000000000000..b41215a93deb3 --- /dev/null +++ b/locale/uk/about/advisory-board/index.md @@ -0,0 +1,220 @@ +--- +layout: about.hbs +title: Консультативна рада +--- +# Хартія консультативної ради Node.js + +## 1. Background + +The Node.js open source project is continuing its rapid growth of adoption in +the market. Given the large numbers of contributors, users, and companies with +a stake in the future of the project, the project leadership is looking to +supplement the current governance and contribution mechanisms with an advisory +board, as part of its long-term commitment to create a more open governance +model. + +## 2. Purpose + +### 2.1 + +The primary purpose of the Node.js Advisory Board is to advise Joyent and the +Node.js project core committers team leadership on matters related to +supporting the long-term governance, structure, and roadmap of the Node.js open +source project. The following main areas are included in this charter: + + * Provide a forum for individuals, users, and companies to discuss the issues + under the scope listed below. + * Provide guidance and input to leadership, and where possible, present a + consistent and consolidated opinion from the broader Node.js community. + +### 2.2 + +The Node.js Advisory Board is not: + + * Intended to serve as an authoritative governance board. The Node.js + Advisory Board advises, but does not manage the Node.js project core + committers team leadership. + * Intended to replace existing mechanisms for community input, governance, + or contribution. + * Intended to assume a formal, fiduciary role with respect to the project. + The Node.js Advisory Board members will not be asked to provide funds to + the project, assume liabilities with respect to the project or their + activities, or assume responsibility for enforcing either trademarks or + Node.js Advisory Board recommendations. + +## 3. Scope + +### 3.1 + +The Node.js Advisory Board is expected to provide input and formal +recommendations regarding the following areas: + + * Node.js project long term roadmap + * Node.js project policies and procedures around maintenance, contributions, + core team membership and governance. + * Node.js project policies and procedures around intellectual property, + trademark, and licensing + * Node.js project release schedules + +## 4. Meetings and Memberships + +### 4.1 General + + * The Node.js Advisory Board will have 13 members + * The Node.js core committers project lead: TJ Fontaine + * 2 seats for the top core technical contributors + * Up to 8 additional seats: 4 corporate seats, 4 “user” seats + * One curator seat + * One Open Source Software guidance seat + * No fee or sponsorship is required for membership + * The membership term will last 12 months. With the exception of the Project + Lead, all members can serve a maximum of two consecutive terms + +### 4.2 + +The selection process is intended to be open, transparent, and guided by +objective criteria for membership. + +### 4.3 + +The Curator shall prepare an agenda for and preside over regular meetings of +the Node.js Advisory Board. 
These meetings shall occur as frequently as the +Node.js Advisory Board determines is in the project’s best interest, but no +less than quarterly. + +### 4.4 + +A member of the Node.js Advisory Board may be removed by a resolution of the +Node.js Advisory Board supported by more than two thirds of its membership. + +### 4.5 + +The Node.js Advisory Board may fill any vacancy arising by removal or +resignation by a simple majority vote to fill the remainder of the term of the +vacating member. + +### 4.6 + +The rules of election and membership outlined in this section may be varied by +a resolution of the Node.js Advisory Board supported by more than two thirds of +its voting membership. + +### 4.7 + +All project contributors are welcome to observe at Node.js Advisory Board +meetings. + +## 5. Selection Process + +### 5.1 Contributors + +Two seats will be granted to the top technical contributors, as measured by +non-trivial pull requests as determined by the core contributor team that have +been merged into the master in the previous 6 months. These seats will be +reserved for active individual contributors who are neither employees of +Joyent, Inc. nor employees of companies that hold a corporate seat. + +### 5.2 Corporate seats + +Nomination is restricted to companies for whom all three of the following are +true: + + * Are in the top 5 companies in terms of non-trivial pull requests merged + into the master in the past six months as measured by contributions by the + entire organization. + * Have one or more employees for whom a key component of their job + description is to contribute to Node.js and/or make significant + contributions to the Node.js source code base. + * Have committed to integrate Node.js into widely used corporate products in + a manner consistent with Core Criteria listed in Section 8 below. + +### 5.3 + +Once nominations have been closed, selection of corporate seats will be made +by a vote by eligible contributors. Eligible contributors are those who remain +active as a contributor and have had at least one non-trivial pull request +merged to master in the previous six months. + +### 5.4 User seats + +These seats are for organizations that are using Node.js. To be nominated, an +end-user company must currently be using Node.js in production and have +published a use case on the Node.js website. Once nominations have been +closed, selection will be made by a vote by eligible contributors. Eligible +contributors are those who are currently active and have had at least one +non-trivial pull request merged to master in the past six months. + +## 6. Operation + +### 6.1 + +The Node.js Advisory Board is authorized to seek advice and counsel from other +interested parties and invited experts as appropriate. + +### 6.2 + +Any outside party wishing to bring an issue before the Node.js Advisory Board +may do so by emailing the Node.js Advisory Board at +[advisoryboard@nodejs.org](mailto:advisoryboard@nodejs.org). + +### 6.3 + +The Node.js Advisory Board shall provide transparent and timely reporting +(through any mechanism it deems appropriate) to the Node.js community at large +on all of its activities, subject to the right of any individual to designate +their comments and the ensuing discussion as "in confidence," in which case the +public report shall contain only a note of the request and an agreed summary +(if any) of the substance. + +### 6.4 + +The Node.js Advisory Board is being formed at the discretion of Joyent.
Joyent +alone may decide to terminate the Node.js Advisory Board in its sole +discretion; provided however, that Joyent shall first consult the Node.js +Advisory Board and Curator. + +### 6.5 + +The Node.js Advisory Board and its members shall abide by appropriate antitrust +guidelines. + +## 7. Open Governance Principles + +The Node.js Advisory Board will formulate recommendations in conjunction with +the following open governance principles: + + * Open Contributions: anyone should be able to participate and contribute. + All bugs and tasks will be tracked in a public tracker and all of the + source code and all of the tools needed to build it will be available under + an open license permitting unrestricted use. + * Open technical meritocracy: technical merit over pride of authorship. Code + is contributed for the express purpose of advancing technologies relevant + to the project, effectively separating technology advancement from + individual or commercial intent. + * Open design: Roadmaps are discussed in the open, and designs receive input + from all committers and contributors. + * Influence through contribution: organizations and individuals gain + influence over the project through contribution. + * Open Licensing: code is licensed under the MIT license. + +## 8. Core Criteria + +Core Criteria will generally cover such areas as: use of standard APIs, testing +harness, quality assurance, upstream contribution models, and alternative +distributions. + +As Core Criteria will not be fully defined when the initial Node.js Advisory +Board membership is formulated, it is understood that there is a possibility +that certain members of the initial Node.js Advisory Board may not agree with +the Core Criteria when they are fully defined or may have products/offerings +that are not in compliance with the Core Criteria at the time they are +finalized. In this case, the corporate members will either agree to become +compliant within a specified timeframe or else resign their Node.js Advisory +Board position. Read more about the announcement +[https://www.joyent.com/blog/node-js-advisory-board](https://www.joyent.com/blog/node-js-advisory-board). + +Please help us improve this draft by sending your comments and feedback to +[governance@nodejs.org](mailto:governance@nodejs.org). + +The source for this document can be found [in this +repository](https://github.com/nodejs/nodejs.org/blob/master/locale/en/about/advisory-board/index.md). diff --git a/locale/uk/about/advisory-board/members.md b/locale/uk/about/advisory-board/members.md new file mode 100755 index 0000000000000..49619eb63cef1 --- /dev/null +++ b/locale/uk/about/advisory-board/members.md @@ -0,0 +1,65 @@ +--- +layout: about.hbs +title: Члени консультативної ради +--- +# Члени консультативної ради + +## Bert Belder + + * StrongLoop, Inc. + +## Danese Cooper + + * Expert in Open Source Communities + +## Kevin Decker + + * Walmart + +## TJ Fontaine + + * Joyent + +## Dav Glass + + * Yahoo + +## Scott Hammond + + * Joyent + +## Cian Ó Maidín + + * nearForm + +## Todd M. Moore + + * IBM + +## Gianugo Rabellino + + * Microsoft Open Technologies, Inc. + +## Issac Roth + + * StrongLoop, Inc. 
+ +## Chris Saint-Amant + + * Netflix + +## Isaac Schlueter + + * npm + +## Dan Shaw + + * NodeSource + +## Erik Toth + + * PayPal + +## Chris Williams + + * Emerging Technology Advisors diff --git a/locale/uk/about/governance.md b/locale/uk/about/governance.md new file mode 100755 index 0000000000000..e3bc10c8a2aae --- /dev/null +++ b/locale/uk/about/governance.md @@ -0,0 +1,123 @@ +--- +title: Управління проектом +layout: about.hbs +--- +# Управління проектом + +## Технічний керівний комітет + +Цей проект спільно керується Технічним керівним комітетом +(Technical Steering Committee (TSC)), що є відповідальним +за найвищий рівень координування проекту. + +TSC має остаточні повноваження щодо цього проекту, включаючи: + +* технічне спрямування; +* управління проектом та процесом (включаючи цю політику); +* політику співпраці; +* хостинг GitHub–репозиторіїв; +* керівництва з поведінки; +* підтримку списку додаткових співавторів. + +В першу чергу запрошення на участь у TSC були дані тим особам, які +були активними учасниками та мають значний досвід в управлінні проектом. +Членство передбачає повну зайнятість, відповідно до потреб проекту. + +Поточний список учасників TSC можна знайти в +[README.md](https://github.com/nodejs/node/blob/master/README.md#tsc-technical-steering-committee) проекту. + +## Співавтори + +GitHub–репозиторій [nodejs/node](https://github.com/nodejs/node) +підтримується TSC та додатковими співавторами, що були додані +TSC на постійній основі. + +Особи, що роблять значні та важливі внески, стають співавторами +та отримують доступ на запис (commit-access) у проект. Вони +ідентифікуються через TSC і їх залучення як співавторів +обговорюється протягом щотижневих зустрічей TSC. + +_Зауважте:_ Якщо ви зробили значні внески і надання вам доступу на запис +не було розглянуто, відкрийте issue або зв’яжіться безпосередньо з членом TSC, +щоб вашу кандидатуру розглянули на наступній зустрічі TSC. + +Модифікації контенту в репозиторії nodejs/node відбуваються на +співавторській основі. Будь–хто з GitHub–акаунтом може запропонувати +зміни через пул-реквест, який розглянуть співавтори проекту. +Всі пул–реквести повинні пройти перевірку та бути прийняті співавторами, які мають достатній досвід і можуть взяти відповідальність за ці зміни. +У випадку, якщо пул–реквест пропонується існуючим співавтором, вимагається +перевірка іншого співавтора. Слід шукати консенсусу, якщо інший співавтор +брав участь і виникли розбіжності стосовно конкретної зміни. Дивіться +_Процес пошуку консенсусу_ нижче для додаткових деталей стосовно +консенсусної моделі, що використовується в управлінні. + +Співавтори можуть винести на обговорення на TSC значні або суперечливі зміни, +або модифікації, що не знайшли консенсусу, шляхом присвоєння пул–реквесту +або issue тегу ***tsc-agenda***. TSC ухвалює остаточне рішення, +за потреби. + +Щоб побачити поточний список співавторів, перегляньте +[README.md](https://github.com/nodejs/node/blob/master/README.md#current-project-team-members) проекту. + +Керівництво для співавторів знаходиться у +[COLLABORATOR_GUIDE.md](https://github.com/nodejs/node/blob/master/COLLABORATOR_GUIDE.md). + +## Членство в TSC + +Очікується, що TSC матиме від 6 до 12 учасників: достатньо для адекватного +покриття важливих областей, але зі збереженням здатності ефективно +ухвалювати рішення. + +У цих правилах немає особливих вимог чи бажаного рівня кваліфікації +для членства у TSC. + +TSC може додавати додаткових членів у TSC за стандартною схемою TSC.
+ +Член TSC може бути виключеним з TSC через добровільну відставку +або за стандартною схемою TSC. + +Зміни у членстві в TSC слід публікувати у порядку денному. +Вони можуть висуватись як і будь–який інший пункт порядку денного +(дивіться "Зустрічі TSC" нижче). + +Не більш як 1/3 від всіх членів TSC можуть бути пов’язані з одним роботодавцем. +Якщо виключення чи реєстрація нового члена TSC, або зміна місця роботи +поточного члена TSC, створює ситуацію, за якої третина всіх учасників TSC +мають спільного роботодавця, ця ситуація має негайно вирішитись шляхом +реєстрації або видалення одного чи більше учасників TSC, +які пов’язані з різними роботодавцями. + +## Зустрічі TSC + +TSC щотижня зустрічається наживо через Google Hangout. +Зустріч відбувається під керівництвом модератора, призначеного TSC. +Кожну зустріч слід публікувати на YouTube. + +До порядку денного TSC додаються питання, які вважаються суперечливими, +а також зміни щодо управління, політики внесків, членства в TSC або процесу релізів. + +Порядок денний не має на меті ухвалення або розгляд усіх питань; +це має постійно відбуватись на GitHub за участі ширшого кола співавторів. + +Будь–який член спільноти або учасник може попросити додати щось до порядку денного наступної зустрічі через GitHub Issue. Будь–який співавтор, +член TSC або модератор може додати це питання до порядку денного, додавши до відповідної issue тег ***tsc-agenda***. + +Перед кожною зустріччю TSC модератор поширює порядок денний між членами TSC. +Члени TSC можуть додавати до порядку денного будь–які питання на початку +кожної зустрічі. Ані модератор, ані TSC не можуть накладати вето або вилучати питання. + +TSC може запрошувати до участі без права голосу осіб, що представляють певні проекти. Ці запрошення наразі: + +* Представник [збірки](https://github.com/node-forward/build), + обраний цим проектом. + +Модератор відповідальний за підсумки дискусії стосовно кожного з пунктів порядку денного та надсилання їх у вигляді пул–реквесту після зустрічі. + +## Процес пошуку консенсусу + +TSC дотримується моделі ухвалення рішень на основі +[пошуку консенсусу](http://en.wikipedia.org/wiki/Consensus-seeking_decision-making). + +Коли пункт порядку денного досягає консенсусу, модератор запитує: "Хто–небудь має заперечення?" — це останній заклик висловити незгоду з консенсусом. + +Якщо пункт порядку денного не досягає консенсусу, член TSC може закликати до заключного голосування або голосування щодо перенесення питання до наступної зустрічі. Заклик до голосування має бути затверджений більшістю у TSC, інакше дискусія має продовжуватись. Перемагає проста більшість. diff --git a/locale/uk/about/index.md b/locale/uk/about/index.md new file mode 100755 index 0000000000000..d866fbc82158f --- /dev/null +++ b/locale/uk/about/index.md @@ -0,0 +1,70 @@ +--- +layout: about.hbs +title: Про проект +trademark: Торгова марка +--- +# Про Node.js® + +Як асинхронне подієве JavaScript–оточення, Node спроектований для побудови +масштабованих мережевих додатків. Нижче наведено приклад "hello world", який +може одночасно обробляти багато з’єднань. Для кожного з’єднання викликається +функція зворотного виклику, проте коли з’єднань немає, Node засинає.
+ +```javascript +const http = require('http'); + +const hostname = '127.0.0.1'; +const port = 3000; + +const server = http.createServer((req, res) => { + res.statusCode = 200; + res.setHeader('Content-Type', 'text/plain'); + res.end('Hello World\n'); +}); + +server.listen(port, hostname, () => { + console.log(`Server running at http://${hostname}:${port}/`); +}); +``` + +Це контрастує з більш загальною моделлю, в якій використовуються паралельні +потоки ОС. Такий підхід є відносно неефективним та дуже важким у використанні. +Більше того, користувачі Node можуть не турбуватись про взаємне блокування +(deadlock) процесу, оскільки блокувань немає. Майже жодна з функцій у Node +не працює напряму з I/O, тому процес не блокується ніколи. Оскільки нічого +не блокується, на Node легко розробляти масштабовані системи. + +Якщо щось у цьому підході є незрозумілим для вас, то можете переглянути +повну статтю [Blocking vs Non-Blocking][]. + +--- + +Node створений під впливом таких систем, як [Event Machine][] в Ruby або +[Twisted][] в Python. Node використовує подієву модель значно ширше: він +приймає [цикл подій (event loop)][event loop] за основу оточення, замість того, +щоб використовувати його в якості бібліотеки. В інших системах завжди є +блокуючий виклик, який запускає цикл подій. +Зазвичай поведінка визначається через функції зворотного виклику на початку +скрипта, а в кінці сервер запускається через блокуючий виклик, +як-от `EventMachine::run()`. В Node немає подібного виклику для запуску +циклу подій. Node просто входить в подієвий цикл після запуску скрипта на +виконання. Node виходить з подієвого циклу тоді, коли не залишається +зареєстрованих функцій зворотного виклику. Така поведінка схожа на поведінку +браузерного JavaScript: подієвий цикл прихований від користувача. + +HTTP є об’єктом першого класу в Node, розробленим з урахуванням потоковості та малої затримки. Це робить Node хорошою основою для веб–бібліотеки або фреймворку. + +Те, що Node спроектований без багатопоточності, не означає, що ви не можете +використовувати можливості кількох ядер у вашому середовищі. Ви можете +створювати дочірні процеси, якими легко керувати за допомогою API +[`child_process.fork()`][]. Модуль [`cluster`][] побудований на цьому +інтерфейсі і дозволяє вам ділитись сокетами між процесами та +розподіляти навантаження між ядрами. + +[Blocking vs Non-Blocking]: https://github.com/nodejs/node/blob/master/doc/topics/blocking-vs-non-blocking.md +[`child_process.fork()`]: https://nodejs.org/api/child_process.html#child_process_child_process_fork_modulepath_args_options +[`cluster`]: https://nodejs.org/api/cluster.html +[event loop]: https://github.com/nodejs/node/blob/master/doc/topics/the-event-loop-timers-and-nexttick.md +[Event Machine]: http://rubyeventmachine.com/ +[Twisted]: http://twistedmatrix.com/ diff --git a/locale/uk/about/organization.md b/locale/uk/about/organization.md new file mode 100755 index 0000000000000..32006a0235935 --- /dev/null +++ b/locale/uk/about/organization.md @@ -0,0 +1,40 @@ +--- +layout: about.hbs +title: Організація +--- +# Організація + +## Співавтори і Технічний керівний комітет + +Проект Node.js спонсорується Node.js Foundation і підтримується окремими +учасниками. Технічний керівний комітет (Technical Steering Committee (TSC)) +складається з ключових учасників, що продемонстрували як свої спеціальні +технічні знання і практичну підтримку, так і сприяння поступальному розвиткові +проекту та спільноти.
+ +Ви можете дізнатись більше про те, як стати учасником та членом TSC, у +[розділі для учасників](/contribute/). + +### Поточні учасники + +Наразі проект Node.js має понад 300 учасників, що活 hiба активно працюють над різними +частинами проекту. Поточний список учасників можна знайти у +[профілі проекту на GitHub](https://github.com/orgs/nodejs/people). + +### Поточні члени Технічного керівного комітету + +* Alexis Campailla ([orangemocha](https://github.com/orangemocha)) +* Ben Noordhuis ([bnoordhuis](https://github.com/bnoordhuis)) +* Bert Belder ([piscisaureus](https://github.com/piscisaureus)) +* Brian White ([mscdex](https://github.com/mscdex)) +* Chris Dickinson ([chrisdickinson](https://github.com/chrisdickinson)) +* Colin Ihrig ([cjihrig](https://github.com/cjihrig)) +* Fedor Indutny ([indutny](https://github.com/indutny)) +* James M Snell ([jasnell](https://github.com/jasnell)) +* Jeremiah Senkpiel ([Fishrock123](https://github.com/Fishrock123)) +* Julien Gilli ([misterdjules](https://github.com/misterdjules)) +* Michael Dawson ([mhdawson](https://github.com/mhdawson)) +* Rod Vagg ([rvagg](https://github.com/rvagg)) +* Shigeki Ohtsu ([shigeki](https://github.com/shigeki)) +* Steven R Loomis ([srl295](https://github.com/srl295)) +* Trevor Norris ([trevnorris](https://github.com/trevnorris)) diff --git a/locale/uk/about/releases.md b/locale/uk/about/releases.md new file mode 100755 index 0000000000000..84409b7bc5576 --- /dev/null +++ b/locale/uk/about/releases.md @@ -0,0 +1,81 @@ +--- +layout: about.hbs +title: Релізи +--- +# Релізи + +Основна команда визначила дорожню карту, з якою знайома спільнота Node.js. +Релізи відбуваються так часто, наскільки це необхідно і практично, проте не +раніше, ніж роботу буде закінчено. +Багів не уникнути, проте впевненість у коректній роботі програмного забезпечення важливіша за поспіх із випуском релізу. Високі стандарти якості є одним з +ключових пріоритетів проекту Node.js. + +## Патчі + +Патч–релізи: + +- включають виправлення багів, покращення безпеки та швидкодії; +- не додають і не змінюють публічних інтерфейсів; +- не змінюють очікуваної поведінки даного інтерфейсу; +- можуть виправляти поведінку, якщо вона не вказана в документації; +- не вносять зміни, які унеможливлюють безшовні оновлення. + +## Мінори + +Мінорні релізи: + +- включають доповнення або розширення API та підсистем; +- загалом не змінюють API і не вводять зміни, що ламають зворотну сумісність, окрім тих, яких неможливо уникнути; +- є, здебільшого, доповнюючими релізами. + +## Мажори + +Мажорні релізи: + +- зазвичай вводять зміни, що ламають зворотну сумісність; +- визначають API Node.js для підтримки у найближчому майбутньому; +- вимагають обговорень, обережності та співпраці між командою та користувачами. + +## Оцінка функціональності + +Команда може додавати новий функціонал та API у Node.js, якщо: + +- зрозумілі потреби; +- API або функціонал має відомих споживачів; +- API є чистим, корисним та зручним для використання. + +При реалізації функціональних можливостей ядра Node.js команда або спільнота +може визначити інший API нижчого рівня, який міг би бути корисним поза Node.js. +Після визначення Node.js може відкрити його для споживачів. + +Наприклад, розглянемо інтерфейс [`EventEmitter`]. Потреба мати модель для підписки на події у модулях ядра є зрозумілою, і ця абстракція може використовуватись поза ядром Node.js. Це не той випадок, коли інтерфейс не може бути реалізований поза Node.js; навпаки, Node.js потребує цієї абстракції для себе і відкриває її для користувачів Node.js.
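+ +Для ілюстрації наведемо мінімальний ескіз того, як ця абстракція виглядає у користувацькому коді (назва події тут умовна, лише для прикладу): + +```javascript +const EventEmitter = require('events'); + +// Та сама модель підписки на події, що й у модулях ядра (http, streams тощо). +const emitter = new EventEmitter(); + +// Реєструємо обробник для умовної події 'greeting'. +emitter.on('greeting', (name) => { + console.log(`Привіт, ${name}!`); +}); + +// Генеруємо подію; зареєстровані обробники викликаються синхронно. +emitter.emit('greeting', 'Node.js'); +```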
+ +Окрім того, може бути таке, що частина спільноти прийме шаблон +для обробки деяких загальних потреб, який не задовольнятиме Node.js. +Зрозуміло, що Node.js має за замовчуванням постачати API +та функціонал для всіх користувачів Node.js. +Інший можливий випадок полягає у додаткових скомпільованих розширеннях, +які важко поширювати у різних оточеннях. З огляду на це, +Node.js може включати ці зміни безпосередньо. + +Основна команда не приймає легковажно рішення стосовно внесення +нового API у Node.js. Node.js надає зворотній сумісності високий пріоритет. +Таким чином, спільнота має прийняти та обговорити зміни перед тим, +як команда прийматиме ці зміни. Навіть якщо API добре підходить змінам, +команда мусить визначити потенційних споживачів. + +## Визначення застарілості + +У деяких випадках команда мусить визнати деякий функціонал та API +в Node.js застарілими. Перед прийняттям будь–якого остаточного рішення +команда мусить визначити користувачів API і те, +як вони його використовують. Ось деякі запитання до них: + +- Якщо це API широко використовується спільнотою, чому ми маємо визначати його застарілим? +- Чи маємо ми API для заміни, чи зрозумілим є шлях переходу? +- Як довго API має залишатись застарілим перед видаленням? +- Чи існує якийсь зовнішній модуль, яким можна легко його замінити? + +Команда має проявляти таку ж обережність при визначенні API застарілим, як і при додаванні нового API в Node.js. + +[`EventEmitter`]: https://nodejs.org/api/events.html#events_class_eventemitter diff --git a/locale/uk/about/resources.md b/locale/uk/about/resources.md new file mode 100755 index 0000000000000..cf9ab934992b8 --- /dev/null +++ b/locale/uk/about/resources.md @@ -0,0 +1,37 @@ +--- +layout: about.hbs +title: Лого та графіка +--- +# Ресурси + +## Завантаження лого + +Будь ласка, прочитайте [політику товарного знаку](/about/trademark/) щодо дозволеного використання логотипів та позначень Node.js®. + +Правила візуального оформлення позначень Node.js описані у +[Visual Guidelines](/static/documents/foundation-visual-guidelines.pdf).
+ +| Світлий фон | Темний фон | +| --- | --- | +| [![Node.js на світлому фоні](/static/images/logos/nodejs-new-pantone-black.png)](/static/images/logos/nodejs-new-pantone-black.ai) | [![Node.js на темному фоні](/static/images/logos/nodejs-new-pantone-white.png)](/static/images/logos/nodejs-new-pantone-white.ai) | +| [Node.js звичайний AI](/static/images/logos/nodejs-new-pantone-black.ai) | [Node.js інвертований AI](/static/images/logos/nodejs-new-pantone-white.ai) | +| [![Node.js на світлому фоні](/static/images/logos/nodejs-new-black.png)](/static/images/logos/nodejs-new-black.ai) | [![Node.js на темному фоні](/static/images/logos/nodejs-new-white.png)](/static/images/logos/nodejs-new-white.ai) | +| [Node.js звичайний з меншою кількістю кольорів AI](/static/images/logos/nodejs-new-black.ai) | [Node.js інвертований з меншою кількістю кольорів AI](/static/images/logos/nodejs-new-white.ai) |
+ +## Фони для робочого столу + +![Screensavers](/static/images/logos/monitor.png) + +Оберіть роздільну здатність вашого екрану: [1024 x 768](/static/images/logos/nodejs-1024x768.png) | [1280 x 1024](/static/images/logos/nodejs-1280x1024.png) | [1440 x 900](/static/images/logos/nodejs-1440x900.png) | [1920 x 1200](/static/images/logos/nodejs-1920x1200.png) | [2560 x 1440](/static/images/logos/nodejs-2560x1440.png) diff --git a/locale/uk/about/trademark.md b/locale/uk/about/trademark.md new file mode 100755 index 0000000000000..169eb0c0cb4a5 --- /dev/null +++ b/locale/uk/about/trademark.md @@ -0,0 +1,28 @@ +--- +layout: about.hbs +title: Політика торгової марки +--- +# Політика торгової марки + +The Node.js trademarks, service marks, and graphics marks are symbols of the +quality, performance, and ease of use that people have come to associate with +the Node.js software and project. To ensure that the Node.js marks continue to +symbolize these qualities, we must ensure that the marks are only used in ways +that do not mislead people or cause them to confuse Node.js with other software +of lower quality. If we don’t ensure the marks are used in this way, it can not +only confuse users, it can make it impossible to use the mark to protect +against people who maliciously exploit the mark in the future. The primary goal +of this policy is to make sure that this doesn’t happen to the Node.js mark, so +that the community and users of Node.js are always protected in the future. + +At the same time, we’d like community members to feel comfortable spreading the +word about Node.js and participating in the Node.js community. Keeping that +goal in mind, we’ve tried to make the policy as flexible and easy to understand +as legally possible. + +Please read the [full policy](/static/documents/trademark-policy.pdf). +If you have any questions don't hesitate to +[email us](mailto:trademark@nodejs.org). + +Guidelines for the visual display of the Node.js mark are described in +the [Visual Guidelines](/static/documents/foundation-visual-guidelines.pdf). diff --git a/locale/uk/about/working-groups.md b/locale/uk/about/working-groups.md new file mode 100755 index 0000000000000..4a8ef01523306 --- /dev/null +++ b/locale/uk/about/working-groups.md @@ -0,0 +1,341 @@ +--- +layout: about.hbs +title: Робочі групи +--- +# Working Groups + +There are 2 types of Working Groups: + +* [Top-Level Working Groups](#top-level-working-groups) +* [Core Working Groups](#core-working-groups) + +## Top-Level Working Groups + +Top-Level Working Groups are created by the +[Technical Steering Committee (TSC)](https://github.com/nodejs/TSC#top-level-wgs-and-tlps). + +### Current Top-Level Working Groups + +* [Inclusivity](#inclusivity) + +#### [Inclusivity](https://github.com/nodejs/inclusivity) + +The Inclusivity Working Group seeks to increase inclusivity and diversity for +the Node.js project: + +* Increasing inclusivity means making the Node.js project a safe and friendly +place for people from diverse backgrounds. +* Increasing diversity means actively onboarding people from diverse backgrounds +to the Node.js project and maintaining their participation. + +Its responsibilities are: + +* Foster a welcoming environment that ensures participants are valued and can +feel confident contributing or joining discussions, regardless of any [aspect of +their identity](https://github.com/nodejs/inclusivity/#list-of-responsibilities). +* Proactively seek and propose concrete steps the project can take to increase +inclusivity.
+* Serve as a resource for the development and enforcement of workflows that +protect community members and projects from harassment and abuse. +* Acknowledge and celebrate existing diversity accomplishments within the project +while seeking to build upon them. +* Identify ways to measure diversity and inclusivity within the project and report +them at regular intervals. + +# Core Working Groups + + + +Core Working Groups are created by the +[Core Technical Committee (CTC)](https://github.com/nodejs/node/blob/master/GOVERNANCE.md#core-technical-committee). + + +## Current Working Groups + +* [Website](#website) +* [Streams](#streams) +* [Build](#build) +* [Tracing](#tracing) +* [i18n](#i18n) +* [Evangelism](#evangelism) +* [Roadmap](#roadmap) +* [Docker](#docker) +* [Addon API](#addon-api) +* [Benchmarking](#benchmarking) +* [Post-mortem](#post-mortem) +* [Intl](#intl) +* [HTTP](#http) +* [Documentation](#documentation) +* [Testing](#testing) + + +### [Website](https://github.com/nodejs/nodejs.org) + +The website working group's purpose is to build and maintain a public +website for the `Node.js` project. + +Its responsibilities are: + +* Develop and maintain a build and automation system for `nodejs.org`. +* Ensure the site is regularly updated with changes made to `Node.js` like +releases and features. +* Foster and enable a community of translators. + +### [Streams](https://github.com/nodejs/readable-stream) + +The Streams WG is dedicated to the support and improvement of the Streams API +as used in Node.js and the npm ecosystem. We seek to create a composable API that +solves the problem of representing multiple occurrences of an event over time +in a humane, low-overhead fashion. Improvements to the API will be driven by +the needs of the ecosystem; interoperability and backwards compatibility with +other solutions and prior versions are paramount in importance. Our +responsibilities include: + +* Addressing stream issues on the Node.js issue tracker. +* Authoring and editing stream documentation within the Node.js project. +* Reviewing changes to stream subclasses within the Node.js project. +* Redirecting changes to streams from the Node.js project to this project. +* Assisting in the implementation of stream providers within Node.js. +* Recommending versions of readable-stream to be included in Node.js. +* Messaging about the future of streams to give the community advance notice of changes. + + +### [Build](https://github.com/nodejs/build) + +The build working group's purpose is to create and maintain a +distributed automation infrastructure. + +Its responsibilities are: + +* Produce Packages for all target platforms. +* Run tests. +* Run performance testing and comparisons. +* Creates and manages build-containers. + + +### [Tracing](https://github.com/nodejs/tracing-wg) + +The tracing working group's purpose is to increase the +transparency of software written in Node.js. + +Its responsibilities are: + +* Collaboration with V8 to integrate with `trace_event`. +* Maintenance and iteration on AsyncWrap. +* Maintenance and improvements to system tracing support (DTrace, LTTng, etc.) +* Documentation of tracing and debugging techniques. +* Fostering a tracing and debugging ecosystem. + +### i18n + +The i18n working groups handle more than just translations. They +are endpoints for community members to collaborate with each +other in their language of choice. + +Each team is organized around a common spoken language. 
Each +language community might then produce multiple localizations for +various project resources. + +Their responsibilities are: + +* Translations of any Node.js materials they believe are relevant to their +community. +* Review processes for keeping translations up +to date and of high quality. +* Social media channels in their language. +* Promotion of Node.js speakers for meetups and conferences in their +language. + +Note that the i18n working groups are distinct from the [Intl](#Intl) working group. + +Each language community maintains its own membership. + +* [nodejs-ar - Arabic (اللغة العربية)](https://github.com/nodejs/nodejs-ar) +* [nodejs-bg - Bulgarian (български език)](https://github.com/nodejs/nodejs-bg) +* [nodejs-bn - Bengali (বাংলা)](https://github.com/nodejs/nodejs-bn) +* [nodejs-zh-CN - Chinese (中文)](https://github.com/nodejs/nodejs-zh-CN) +* [nodejs-cs - Czech (Český Jazyk)](https://github.com/nodejs/nodejs-cs) +* [nodejs-da - Danish (Dansk)](https://github.com/nodejs/nodejs-da) +* [nodejs-de - German (Deutsch)](https://github.com/nodejs/nodejs-de) +* [nodejs-el - Greek (Ελληνικά)](https://github.com/nodejs/nodejs-el) +* [nodejs-es - Spanish (Español)](https://github.com/nodejs/nodejs-es) +* [nodejs-fa - Persian (فارسی)](https://github.com/nodejs/nodejs-fa) +* [nodejs-fi - Finnish (Suomi)](https://github.com/nodejs/nodejs-fi) +* [nodejs-fr - French (Français)](https://github.com/nodejs/nodejs-fr) +* [nodejs-he - Hebrew (עברית)](https://github.com/nodejs/nodejs-he) +* [nodejs-hi - Hindi (फिजी बात)](https://github.com/nodejs/nodejs-hi) +* [nodejs-hu - Hungarian (Magyar)](https://github.com/nodejs/nodejs-hu) +* [nodejs-id - Indonesian (Bahasa Indonesia)](https://github.com/nodejs/nodejs-id) +* [nodejs-it - Italian (Italiano)](https://github.com/nodejs/nodejs-it) +* [nodejs-ja - Japanese (日本語)](https://github.com/nodejs/nodejs-ja) +* [nodejs-ka - Georgian (ქართული)](https://github.com/nodejs/nodejs-ka) +* [nodejs-ko - Korean (한국어)](https://github.com/nodejs/nodejs-ko) +* [nodejs-mk - Macedonian (Mакедонски)](https://github.com/nodejs/nodejs-mk) +* [nodejs-ms - Malay (بهاس ملايو)](https://github.com/nodejs/nodejs-ms) +* [nodejs-nl - Dutch (Nederlands)](https://github.com/nodejs/nodejs-nl) +* [nodejs-no - Norwegian (Norsk)](https://github.com/nodejs/nodejs-no) +* [nodejs-pl - Polish (Język Polski)](https://github.com/nodejs/nodejs-pl) +* [nodejs-pt - Portuguese (Português)](https://github.com/nodejs/nodejs-pt) +* [nodejs-ro - Romanian (Română)](https://github.com/nodejs/nodejs-ro) +* [nodejs-ru - Russian (Русский)](https://github.com/nodejs/nodejs-ru) +* [nodejs-sv - Swedish (Svenska)](https://github.com/nodejs/nodejs-sv) +* [nodejs-ta - Tamil (தமிழ்)](https://github.com/nodejs/nodejs-ta) +* [nodejs-tr - Turkish (Türkçe)](https://github.com/nodejs/nodejs-tr) +* [nodejs-zh-TW - Taiwanese (Hō-ló)](https://github.com/nodejs/nodejs-zh-TW) +* [nodejs-uk - Ukrainian (Українська)](https://github.com/nodejs/nodejs-uk) +* [nodejs-vi - Vietnamese (Tiếng Việtnam)](https://github.com/nodejs/nodejs-vi) + +### [Intl](https://github.com/nodejs/Intl) + +The Intl Working Group is dedicated to support and improvement of +Internationalization (i18n) and Localization (l10n) in Node. Its responsibilities are: + +1. Functionality & compliance (standards: ECMA, Unicode…) +2. Support for Globalization and Internationalization issues that come up in the tracker +3. Guidance and Best Practices +4. 
Refinement of existing `Intl` implementation + +The Intl WG is not responsible for translation of content. That is the responsibility of the specific [i18n](#i18n) group for each language. + +### [Evangelism](https://github.com/nodejs/evangelism) + +The evangelism working group promotes the accomplishments +of Node.js and lets the community know how they can get involved. + +Their responsibilities are: + +* Project messaging. +* Official project social media. +* Promotion of speakers for meetups and conferences. +* Promotion of community events. +* Publishing regular update summaries and other promotional +content. + +### [HTTP](https://github.com/nodejs/http) + +The HTTP working group is chartered for the support and improvement of the +HTTP implementation in Node. Its responsibilities are: + +* Addressing HTTP issues on the Node.js issue tracker. +* Authoring and editing HTTP documentation within the Node.js project. +* Reviewing changes to HTTP functionality within the Node.js project. +* Working with the ecosystem of HTTP related module developers to evolve the + HTTP implementation and APIs in core. +* Advising the CTC on all HTTP related issues and discussions. +* Messaging about the future of HTTP to give the community advance notice of + changes. + +### [Roadmap](https://github.com/nodejs/roadmap) + +The roadmap working group is responsible for user community outreach +and the translation of their concerns into a plan of action for Node.js. + +The final [ROADMAP](https://github.com/nodejs/node/blob/master/ROADMAP.md) document is still +owned by the TC and requires the same approval for changes as any other project asset. + +Their responsibilities are: + +* Attract and summarize user community needs and feedback. +* Find or potentially create tools that allow for broader participation. +* Create Pull Requests for relevant changes to +[ROADMAP.md](https://github.com/nodejs/node/blob/master/ROADMAP.md) + + +### [Docker](https://github.com/nodejs/docker-node) + +The Docker working group's purpose is to build, maintain, and improve official +Docker images for the `Node.js` project. + +Their responsibilities are: + +* Keep the official Docker images updated in line with new `Node.js` releases. +* Decide and implement image improvements and/or fixes. +* Maintain and improve the images' documentation. + + +### [Addon API](https://github.com/nodejs/nan) + +The Addon API Working Group is responsible for maintaining the NAN project and +corresponding _nan_ package in npm. The NAN project makes available an +abstraction layer for native add-on authors for both Node.js and io.js, +assisting in the writing of code that is compatible with many actively used +versions of Node.js, io.js, V8 and libuv. + +Their responsibilities are: + +* Maintaining the [NAN](https://github.com/nodejs/nan) GitHub repository, + including code, issues and documentation. +* Maintaining the [addon-examples](https://github.com/nodejs/node-addon-examples) + GitHub repository, including code, issues and documentation. +* Maintaining the C++ Addon API within the Node.js project, in subordination to + the Node.js CTC. +* Maintaining the Addon documentation within the Node.js project, in + subordination to the Node.js CTC. +* Maintaining the _nan_ package in npm, releasing new versions as appropriate. +* Messaging about the future of the Node.js and NAN interface to give the + community advance notice of changes. + +The current members can be found in their +[README](https://github.com/nodejs/nan#collaborators).
+ +### [Benchmarking](https://github.com/nodejs/benchmarking) + +The purpose of the Benchmark working group is to gain consensus +for an agreed set of benchmarks that can be used to: + ++ track and evangelize performance gains made between Node releases ++ avoid performance regressions between releases + +Its responsibilities are: + ++ Identify 1 or more benchmarks that reflect customer usage. + Likely need more than one to cover typical Node use cases + including low-latency and high concurrency ++ Work to get community consensus on the list chosen ++ Add regular execution of chosen benchmarks to Node builds ++ Track/publicize performance between builds/releases + +### [Post-mortem](https://github.com/nodejs/post-mortem) + +The Post-mortem Diagnostics working group is dedicated to the support +and improvement of postmortem debugging for Node.js. It seeks to +elevate the role of postmortem debugging for Node, to assist in the +development of techniques and tools, and to make techniques and tools +known and available to Node.js users. + +Its responsibilities are: + ++ Defining and adding interfaces/APIs in order to allow dumps + to be generated when needed ++ Defining and adding common structures to the dumps generated + in order to support tools that want to introspect those dumps + +### [Documentation](https://github.com/nodejs/docs) + +The Documentation working group exists to support the improvement of Node.js +documentation, both in the core API documentation, and elsewhere, such as the +Node.js website. Its intent is to work closely with Evangelism, Website, and +Intl working groups to make excellent documentation available and accessible +to all. + +Its responsibilities are: + +* Defining and maintaining documentation style and content standards. +* Producing documentation in a format acceptable for the Website WG to consume. +* Ensuring that Node's documentation addresses a wide variety of audiences. +* Creating and operating a process for documentation review that produces + quality documentation and avoids impeding the progress of Core work. + +### [Testing](https://github.com/nodejs/testing) + +The Node.js Testing Working Group's purpose is to extend and improve testing of +the Node.js source code. + +Its responsibilities are: + +* Coordinating an overall strategy for improving testing. +* Documenting guidelines around tests. +* Working with the Build Working Group to improve continuous integration. +* Improving tooling for testing. diff --git a/locale/uk/blog/advisory-board/advisory-board-update.md b/locale/uk/blog/advisory-board/advisory-board-update.md new file mode 100755 index 0000000000000..16e4f9de83c8e --- /dev/null +++ b/locale/uk/blog/advisory-board/advisory-board-update.md @@ -0,0 +1,108 @@ +--- +title: Advisory Board Update +date: 2014-12-03T18:00:00.000Z +author: Timothy J Fontaine +slug: advisory-board-update +layout: blog-post.hbs +--- + +A lot has been happening in Node.js, so I wanted to bring everyone up to date on +where we are with regards to the advisory board, its working groups, and the +release of v0.12. + +The interim [advisory +board](https://www.joyent.com/blog/node-js-advisory-board) has met three times +since its creation. You can find the minutes from the advisory board meetings +here: [https://nodejs.org/en/about/advisory-board/](https://nodejs.org/en/about/advisory-board/). As +we have more meetings and minutes, we will announce the dates and times for +those meetings and their minutes here on the blog.
The next meeting is this +Thursday December 4th, at 1:30PM PST. We're looking to collect as much feedback +and input from as many representatives of the community as we can, so it's +important that we keep everyone up to date as much as possible. + +The interim advisory board has been working through a series of topics (in +general meetings as well as working groups) to further hone the scope of the +board, as well as define the structure that the advisory board will use to +conduct its meetings. Everyone on the board wants to make sure we're being as +transparent as possible, so let me describe how things operate so far. The +board is using a traditional two conference call structure, a public portion +that is recorded and open for anyone to join, and a private portion that is +only for board members. + +The public portion is meant to provide an update of what happened in the +previous meeting, as well as the status of action items from the previous +meeting. At the end of each public session is an open comment section, where +listeners are able to ask questions and the advisory board can respond. + +Following the public portion, the board dials into the private conference; +further discussion happens during this time around specific agenda items, +working groups providing updates, and facilitating conversations about those +topics. These conversations are open and frank, and their content is recorded +in the minutes. Those minutes are then published a few days after the meeting +in the GitHub repository +[https://github.com/joyent/nodejs-advisory-board](https://github.com/joyent/nodejs-advisory-board), +as well as on the website +[https://nodejs.org/en/about/advisory-board/](https://nodejs.org/en/about/advisory-board/). + +There are a few working groups so far; for instance, one is focused on making +sure the membership of the board is representative of the community Node.js +serves. While the board was initially bootstrapped with its existing +membership, we want to quickly move to a model that fully represents our +community. We want the board to represent the broadest spectrum of our +community, in a way that also enables the board to move swiftly and make progress. + +Another working group is having a conversation about governance. This includes +topics like what is the team that makes decisions for Node.js, how do you +become a member of that team, how does that team set the roadmap for the +project, and how does that team make decisions. + +One thing that we all agree on is that we're not going to be using the +Benevolent Dictator model. In fact, recently the project hasn't been operating +that way. We can be more clear about that in our +[documentation](https://nodejs.org/en/about/organization). We all agree we want +a healthy and vibrant team, a team focused on making progress for Node.js, not +for progress's sake, but for the betterment of the software project and the +community we serve. We also agree that this means that there should be +consensus among the team. The conversation has been fruitful and is ongoing; +we're continuing to work through the finer points of how much consensus we +need. + +I want to take a moment to describe what consensus means in this context. The +consensus model is about accountability. Accountability for the changes being +integrated into the project, accountability for documentation, and +accountability for releases. While members of the team are responsible for +subsystems or features of Node.js, everyone reviews each other's changes.
They +make sure to understand the impact on their relevant responsibilities. + +The goal of the team, especially that of the project lead, is to drive +consensus and ensure accountability. This means asking critical questions and +being able to answer them specifically and succinctly, for example: + + * What are we trying to solve with this change? + * Does this change effectively solve for this problem? + * Does this API have a consumer? + * Does this API reach the broadest amount of use cases? + * Is this API supportable? + * Does this change have adverse effects on other subsystems or use cases (and is that acceptable)? + * Does this change have tests that verify its operation, now and in the future? + * Does this change pass our style guidelines? + * Does this change pass our integration tests for the matrix of our supported configurations? + - For instance: ia32 and x64 for Windows, Linux, OSX, SmartOS + +These are just some of the questions, and while the questions are not unusual +or unique to Node.js, they are still important. + +Finally, we are very close to releasing v0.12; there's only one major patch +we're waiting to land. Once that's done we'll be releasing v0.11.15 as a +release candidate. Assuming no severe issues are filed against v0.11.15, we will +be going live with v0.12 about two weeks after the v0.11.15 release. + +If you have questions for the advisory board you can email +[advisoryboard@nodejs.org](mailto:advisoryboard@nodejs.org) or file an issue on +its repository +[https://github.com/joyent/nodejs-advisory-board](https://github.com/joyent/nodejs-advisory-board). +Thanks for all of your continued contributions to Node.js, in the form of +[filing issues](https://github.com/joyent/node/issues), [submitting pull +requests](https://github.com/joyent/node/pulls), and publishing your modules. +Node.js is lucky to have such an enthusiastic and engaged community, and we're +excited to be working with you on the future of Node.js. diff --git a/locale/uk/blog/advisory-board/index.md b/locale/uk/blog/advisory-board/index.md new file mode 100755 index 0000000000000..40847a7869637 --- /dev/null +++ b/locale/uk/blog/advisory-board/index.md @@ -0,0 +1,6 @@ +--- +title: Advisory Board +layout: category-index.hbs +listing: true +robots: noindex, follow +--- diff --git a/locale/uk/blog/advisory-board/listening-to-the-community.md b/locale/uk/blog/advisory-board/listening-to-the-community.md new file mode 100755 index 0000000000000..a56801cdcac7d --- /dev/null +++ b/locale/uk/blog/advisory-board/listening-to-the-community.md @@ -0,0 +1,22 @@ +--- +title: Listening to the Community +date: 2014-12-05T21:30:00.000Z +author: Advisory Board +slug: listening-to-the-community +layout: blog-post.hbs +--- + +We assembled the Node.js Advisory Board (AB) to listen to the community and +make the necessary changes to have a unified direction for Node.js, a +passionate group of developers, a vibrant ecosystem of product and service +providers, and a satisfied user base. Over the last month we have made great +progress on an open governance model, API standards, IP management, and +transparency to ensure the project is community-driven. These efforts +explicitly target helping resolve conflicts, with the goal of moving the +community forward together. It is important that we understand voices of +dissent and frustration and work together to build the greater ecosystem. We +are committed to this goal.
+ +Node.js remains the trusted platform that users rely on for creative projects +and to drive business goals. The v0.12 release will ship shortly and the +project team is already engaged in discussions about the next release. diff --git a/locale/uk/blog/announcements/apigee-rising-stack-yahoo.md b/locale/uk/blog/announcements/apigee-rising-stack-yahoo.md new file mode 100755 index 0000000000000..6a3b421fc00f0 --- /dev/null +++ b/locale/uk/blog/announcements/apigee-rising-stack-yahoo.md @@ -0,0 +1,38 @@ +--- +title: Apigee, RisingStack and Yahoo Join the Node.js Foundation +date: 2015-12-08T12:00:00.000Z +status: publish +category: Annoucements +slug: apigee-rising-stack-yahoo +layout: blog-post.hbs +--- + +> New Silver Members to Advance Node.js Growth and Enterprise Adoption + +**NODE.JS INTERACTIVE 2015, PORTLAND, OR.** — [The Node.js Foundation](https://nodejs.org/en/foundation/), a community-led and industry-backed consortium to advance the development of the Node.js platform, today announced Apigee, RisingStack and Yahoo are joining the Foundation as Silver Members to build and support the Node.js platform. With over 2 million downloads per month, Node.js is the runtime of choice for developers building everything from enterprise applications to Industrial IoT. + +The Node.js Foundation members work together alongside the community to help grow this diverse technology for large financial services, web-scale, cloud computing companies, and more. The newly added [Long-Term Support](https://nodejs.org/en/blog/release/v4.2.0/) release version 4.0 is just one of the many initiatives from the Foundation, which addresses the needs of enterprises that are using Node.js in more complex production environments, and signals the growing maturity of the technology. + +“We continue to welcome new Node.js Foundation members that are committed to providing the financial and technical resources needed to ensure the technology continues to evolve, while nurturing the community and ecosystem at the same time,” said Danese Cooper, Chairperson of the Node.js Foundation Board. “We are excited to have Apigee, RisingStack, and Yahoo on board to help grow the diversity of the platform and the community.” + +The new members are joining just in time for the inaugural Node.js Interactive event taking place today and tomorrow in Portland, OR. The conference focuses on frontend, backend and IoT technologies, and the next big initiatives for the Node.js Foundation. It includes more than 50 tutorials, sessions and keynotes. To stream the event, go to [http://events.linuxfoundation.org/events/node-interactive/program/live-video-stream](http://events.linuxfoundation.org/events/node-interactive/program/live-video-stream). + +More information about the newest Node.js Foundation members: + +[Apigee](https://apigee.com/about/) provides an intelligent API platform for digital businesses. Headquartered in San Jose, California, Apigee’s software supports some of the largest global enterprises. Developers can use the Node.js software platform to build highly customized application programming interfaces (APIs) and apps in the [Apigee API management platform](http://apigee.com/about/products/api-management). The integration of the Node.js technology allows developers to use code to create specialized APIs in Apigee, while utilizing the huge community of JavaScript developers. 
+ +“We want to provide to the developer community the best platform for building today’s modern apps and APIs,” said Ed Anuff, executive vice president of strategy at Apigee. “We are committed to the advancement of Node.js and look forward to continuing to utilize the strengths and further possibilities of the technology. The Node.js Foundation provides an excellent place for us to help push this technology to become even better for our developers that use it every day.” + +[RisingStack](https://risingstack.com/) was founded in 2014 by Gergely Nemeth and Peter Marton as a full stack Javascript consulting company. It provides help with digital transitioning to Node.js and offers a microservice monitoring tool called [Trace](http://trace.risingstack.com/). RisingStack also contributes to several open source projects, and engages the developer community via a popular JavaScript/DevOps [engineering blog](https://blog.risingstack.com/), with a tremendous amount of long reads. + +“Node.js is extremely important in Javascript development, and we have experienced a rapid rise of interest in the technology from enterprises,” said Gergely Nemeth, CEO and Co-Founder of RisingStack. “Our business was established to support this growing technology, and we are very excited to join the Node.js Foundation to help broaden this already active community and continue its growth through open governance.” + +Yahoo is a guide focused on informing, connecting and entertaining its users. By creating highly personalized experiences for its users, Yahoo keeps people connected to what matters most to them, across devices and around the world. In turn, Yahoo creates value for advertisers by connecting them with the audiences that build their businesses. + +“Joining the Node.js Foundation underscores our deep appreciation for the Node.js community, and our commitment to drive its health and growth,” said Preeti Somal, vice president of engineering, Yahoo. “As a technology pioneer with a deep legacy of Javascript expertise and a strong commitment to open source, we saw the promise of Node.js from the start and have since scaled to become one of the industry’s largest deployments. We embrace Node.js’s evolution and encourage our developers to be contributing citizens of the Open Source community.” + +Additional Resources +* Learn more about the [Node.js Foundation](https://nodejs.org/en/foundation/) and get involved with [the project](https://nodejs.org/en/get-involved/). + +About Node.js Foundation +Node.js Foundation is a collaborative open source project dedicated to building and supporting the Node.js platform and other related modules. Node.js is used by tens of thousands of organizations in more than 200 countries and amasses more than 2 million downloads per month. It is the runtime of choice for high-performance, low latency applications, powering everything from enterprise applications, robots, API engines, cloud stacks and mobile websites. The Foundation is made up of a diverse group of companies including Platinum members Famous, IBM, Intel, Joyent, Microsoft, PayPal and Red Hat. Gold members include GoDaddy, NodeSource and Modulus/Progress Software, and Silver members include Apigee, Codefresh, DigitalOcean, Fidelity, Groupon, nearForm, npm, RisingStack, Sauce Labs, SAP, StrongLoop (an IBM company), YLD!, and Yahoo. Get involved here: [http://nodejs.org](https://nodejs.org/en/).
diff --git a/locale/uk/blog/announcements/appdynamics-newrelic-opbeat-sphinx.md b/locale/uk/blog/announcements/appdynamics-newrelic-opbeat-sphinx.md new file mode 100755 index 0000000000000..e6a4321c43ae6 --- /dev/null +++ b/locale/uk/blog/announcements/appdynamics-newrelic-opbeat-sphinx.md @@ -0,0 +1,44 @@ +--- +title: AppDynamics, New Relic, Opbeat and Sphinx Join the Node.js Foundation as Silver Members +date: 2016-03-09T21:00:00.000Z +category: Annoucements +slug: appdynamics-newrelice-opbeat-sphinx +layout: blog-post.hbs +--- + +> Foundation Announces Dates for Node.js Interactive Conferences in Amsterdam and Austin, Texas + +SAN FRANCISCO, Mar. 9, 2016 — The [Node.js Foundation](https://nodejs.org/en/foundation/), a community-led and industry-backed consortium to advance the development of the Node.js platform, today announced AppDynamics, New Relic, Opbeat and Sphinx are joining the Foundation as Silver Members to continue to sustain and grow the Node.js platform. + +Many of the new members are within the application performance management industry, both established and up-and-coming vendors. Application performance management is an essential part of any infrastructure and there is a need across public, private and hybrid clouds to ensure that current and future products offer next-generation application performance with Node.js as a core component to the stability and potential of these offerings. + +The new members have the opportunity to support a range of Foundation activities such as new training programs and user-focused events like the new, ongoing [Node.js Live](http://live.nodejs.org/) series and Node.js Interactive events. Expanding to Europe this year, Node.js Interactive will take place September 15-16 in Amsterdam, Netherlands; while the North America event will be held November 29-30, 2016, in Austin, Texas. More information on the conference to come. + +Node.js has grown tremendously in the last year with a total of [3.5 million](https://www.npmjs.com/) users and 100 percent year-over-year growth. It is becoming ubiquitous across numerous industries from financial services to media companies, and is heavily used by frontend, backend and IoT developers. + +“While the popularity of Node.js has dramatically increased in recent years, the Foundation is committed to maintaining a stable, neutral and transparent ground to support continuation of the technology’s growth,” said Danese Cooper, Chairperson of the Node.js Foundation Board. “We are pleased to have AppDynamics, New Relic, Opbeat and Sphinx join the Foundation to help support both continued expansion for the technology and stability needs of the community.” + +More About the New Members: + +[AppDynamics](http://www.appdynamics.com/) is the application intelligence company that provides real-time insights into application performance, user experience, and business outcomes with cloud, on-premises, and hybrid deployment flexibility. Seeing the growing popularity of Node.js as a platform for building fast and scalable web and mobile applications, AppDynamics created a Node.js monitoring solution built on their core APM platform. The solution helps customers monitor Node.js applications in real-time and diagnose performance bottlenecks while running in live production or development environments. + +“Node.js is clearly taking off, and we’ve seen significant adoption of the platform in production for quite some time now, especially within the enterprise. 
We have participated in multiple Node.js events in the past, and look forward to continuing to support the longevity of this project, which is important to the developers we serve,” said AppDynamics Chief Technology Officer and Senior Vice President of Product Management, Bhaskar Sunkara. + +[New Relic](https://newrelic.com/) is a software analytics company that delivers real-time insights and helps companies securely monitor their production software in virtually any environment, without having to build or maintain dedicated infrastructure. New Relic’s agent helps pinpoint Node.js application performance issues across private, public or hybrid cloud environments. + +“We're seeing huge growth in our Node.js application counts on a daily basis, from customers of all sizes - there's just as much interest from the Fortune 100 as there is from new startups. New Relic's engineers have been contributing to Node.js's core development for years, and we're excited to help accelerate its advancement and success even further by supporting the Node.js Foundation," said Tim Krajcar, Engineering Manager, Node.js Agent, New Relic. + +[Opbeat](https://opbeat.com/) provides next-generation performance insights, specifically built for JavaScript developers. Opbeat maps production issues to the developers who write the code, leading to faster debugging and more coding. The young company recently launched [full support for Node.js](https://opbeat.com/nodejs/). + +“We’re seeing massive interest in Opbeat within the Node community - from larger organizations to smaller start-ups - so we’re excited to join the Foundation to help support the community. At the end of the day, our customers are developers and we want to contribute to the increased popularity of Node amongst developers and CTOs,” said Rasmus Makwarth, Co-Founder and CEO of Opbeat. + +[Sphinx](http://sphinx.sg/) was established in 2014 by experienced Vietnamese developers from Silicon Valley with the aim of becoming the leading company for Node.js and the MEAN stack; the group co-founded the Vietnamese Node.js and Angular.js communities. The consulting team helps take large-scale applications from the concept phase to production for some of the largest global enterprises and government departments. + +“Becoming a silver member is a breakthrough for us, and gives us the opportunity to establish long-lasting relationships with companies that also share a common interest in the rapidly growing Node.js technology. We look forward to collaborating with other Foundation members and continuing to develop and support the open source community,” said Hai Luong, CEO and Co-Founder of Sphinx. + +About Node.js Foundation +Node.js is used by tens of thousands of organizations in more than 200 countries and amasses more than 3.5 million active users per month. It is the runtime of choice for high-performance, low latency applications, powering everything from enterprise applications, robots, API engines, cloud stacks and mobile websites. + +The Foundation is made up of a diverse group of companies including Platinum members Famous, IBM, Intel, Joyent, Microsoft, PayPal and Red Hat. Gold members include GoDaddy, NodeSource and Modulus/Progress Software, and Silver members include Apigee, AppDynamics, Codefresh, DigitalOcean, Fidelity, Groupon, nearForm, New Relic, npm, Opbeat, RisingStack, Sauce Labs, SAP, StrongLoop (an IBM company), Sphinx, YLD!, and Yahoo!. Get involved here: [http://nodejs.org](http://nodejs.org/).
+ +The Node.js Foundation is a Linux Foundation Project. Linux Foundation Projects are independently funded software projects that harness the power of collaborative development to fuel innovation across industries and ecosystems. [www.linuxfoundation.org](http://www.linuxfoundation.org/) diff --git a/locale/uk/blog/announcements/foundation-advances-growth.md b/locale/uk/blog/announcements/foundation-advances-growth.md new file mode 100755 index 0000000000000..4c11558e5c7a8 --- /dev/null +++ b/locale/uk/blog/announcements/foundation-advances-growth.md @@ -0,0 +1,53 @@ +--- +title: Node.js Foundation Advances Platform with More Than Three Million Users +date: 2015-12-08T12:00:00.000Z +status: publish +category: Annoucements +slug: foundation-advances-growth +layout: blog-post.hbs +--- + +> Node.js Platform Stronger Than Ever with New Node.js Foundation Members, +Community Contributions, and 100 Percent Year-Over-Year User Growth + +**NODE.JS INTERACTIVE 2015, PORTLAND, OR.** — [The Node.js Foundation](https://nodejs.org/en/foundation/), a community-led and industry-backed consortium to advance the development of the Node.js platform, today is announcing major community, code and membership growth, adoption statistics of the technology at large, and the Foundation’s new incubation program. + +The Node.js Foundation was founded in 2015 to accelerate the development of Node.js and support the large ecosystem that it encompasses through open governance. As part of this mission, the Foundation announced its first incubation project, libuv. Libuv is a software library that provides asynchronous event notification and improves the Node.js programming experience. The project is both critical to Node.js and already widely used, making it a natural fit for the Foundation. Under the Foundation's umbrella, it will receive additional support and mentorship. + +The first Node.js Interactive event unites more than 700 developers, engineers, system architects, DevOps professionals and users representing a wide range of projects, products and companies in Portland, Ore. Node.js Interactive brings together a broad range of speakers to help experienced and novice Node.js users alike learn tips, best practices, new skills, as well as gain insight into future developments for the technology. With Node.js used regularly in 98% of Fortune 500 companies, the event will also highlight the maturation of the technology within enterprises. + +Attendees have the opportunity to see and learn more about how organizations like Capital One, GoDaddy, Intel, NodeSource, npm and Uber are using Node.js to meet their innovation needs. Attendees are also getting a first look at Node.js advancements announced and demoed this week including: + +* JOYENT: Joyent is announcing the 2016 Node.js Innovation Program, which provides Node.js expertise, marketing support and free cloud or on-premise infrastructure to start-ups and teams within larger enterprises that are driving innovation powered by Node.js. The 2015 program included [bitHound](https://www.bithound.io/), which will speak at Node.js Interactive about innovative approaches to identifying risk and priorities in dependencies and code. More info can be found [here](https://www.joyent.com/innovation). +

Joyent also released a new Node.js, Docker and NoSQL reference [architecture](https://www.joyent.com/blog/how-to-dockerize-a-complete-application) that enables microservices in seconds. To learn more, the company will be demoing this at booth number 7. + +* IBM is featuring multiple Node.js based solutions for: a complete API lifecycle via StrongLoop Arc and Loopback.io; real-time location tracking in Node using Cloudant®  data services; how to write Node applications against Apache Spark; and end-to-end mobile applications using IBM MobileFirst -- all running on Bluemix®, IBM's Cloud Platform. + +* INTEL: A leader in the Internet of Things, Intel will be demoing a SmartHouse at booth number 0013. Based on the [IoTivity](https://www.iotivity.org/) open source project, which is sponsored by the [Open Interconnect Consortium (OIC)](http://openinterconnect.org/), the SmartHouse includes a home gateway from MinnowBoard Max client, three Edison controlled LEDs, fan, motion sensor, and smoke detector. Intel developed Node.js binding for IoTivity to power the demo with everything being controlled from a WebGL 3D virtual house interface. + +* NEARFORM: nearForm is holding a Node Expert Clinic for attendees who are looking for advice on Node.js adoption or struggling with any existing problems. Individuals will be connected to experts including Matteo Collina, Colin Ihrig, Wyatt Preul and Peter Elger for 30 minute sessions which can be arranged at the conference. +

In addition, the company is sharing real customer successes and adoption statistics of Node.js at large. The company gathered the data from 100 of their Node customers across the globe. The leading industries in implementation and adoption of Node.js include enterprise software companies and media companies. Financial, payment, travel, e-commerce and IoT tie for third in industries that are leading in both adoption and implementation. +

Startups are leading the way in adding Node.js into their strategy, but in 2013 and 2014 larger incumbents started to transition their stacks with Node.js as a core technology; notable names include PayPal, Condé Nast, and Costa. In terms of startup saturation: + + * 25% of developers at growth-stage companies in enterprise software are using Node.js; + * 25% of developers at FinTech startups are using Node.js; + * Healthcare startups are using Node.js in a significant way: an average of 33% of developers use Node.js, with rapid innovation as the primary use case; + * 48% of developers are using Node.js at IoT companies; + * 80% of developers at education startups are using the technology. + +* NODESOURCE: The company will showcase [N|Solid](https://nodesource.com/products/nsolid), an enterprise-grade Node.js platform. It extends the capability of Node.js to provide increased developer productivity, protection of critical applications, and peak application performance. +

The company will also have [free upgrade tools](https://marketing.nodesource.com/acton/fs/blocks/showLandingPage/a/15680/p/p-001f/t/page/fm/4) available to help developers implement the latest Long Term Support version, v4. This version is essential for enterprises and companies using Node.js in more complex environments, as it is the most stable and secure version of the platform. + +* SYNCHRO LABS: Silver Node.js Interactive sponsor Synchro Labs announced the launch of the Synchro platform, a new tool that allows enterprise developers to create high quality, high performance, cross-platform native mobile applications using the Node.js framework. The company is demoing the new platform at conference booth 2. More information on the recent announcement is available [here](https://synchro.io/launch). + +### Community and Code Growth +Since the independent Node.js Foundation launched earlier this year, development progress continues to accelerate with dramatic increases in contributions to the project. In the past eight months, the community has grown from 8 to more than 400 contributors, with first-time contributors as high as 63 percent per month. There are more than 3 million active users of Node.js, which has increased 100% year over year. + +Currently the core Node.js repository includes 52 committers, while there have been more than 709 contributors over time. Node.js core had 77 active contributors in October alone, with 46 of those being first-time contributors. More than 400 developers currently have commit rights to some part of the Node.js project. The community is focused on creating a new type of open source contribution philosophy called participatory governance, which liberalizes contribution policies and provides direct ownership to contributors. + +In addition, the Foundation announced three new Silver members: Apigee, RisingStack, and Yahoo. You can find details of the new membership here. + +#### About Node.js Foundation + +Node.js Foundation is a collaborative open source project dedicated to building and supporting the Node.js platform and other related modules. Node.js is used by tens of thousands of organizations in more than 200 countries and amasses more than 2 million downloads per month. It is the runtime of choice for high-performance, low latency applications, powering everything from enterprise applications, robots, API engines, cloud stacks and mobile websites. The Foundation is made up of a diverse group of companies including Platinum members Famous, IBM, Intel, Joyent, Microsoft, PayPal and Red Hat. Gold members include GoDaddy, NodeSource and Modulus/Progress Software, and Silver members include Apigee, Codefresh, DigitalOcean, Fidelity, Groupon, nearForm, npm, RisingStack, Sauce Labs, SAP, StrongLoop (an IBM company), YLD!, and Yahoo!. Get involved here: [http://nodejs.org](http://nodejs.org). + diff --git a/locale/uk/blog/announcements/foundation-elects-board.md b/locale/uk/blog/announcements/foundation-elects-board.md new file mode 100755 index 0000000000000..974b432caf16d --- /dev/null +++ b/locale/uk/blog/announcements/foundation-elects-board.md @@ -0,0 +1,38 @@ +--- +title: Node.js Foundation Elects Board of Directors +date: 2015-09-04T21:00:00.000Z +status: publish +category: Annoucements +slug: foundation-elects-board +layout: blog-post.hbs +--- + +> New Foundation Committed to Accelerating Growth of the Node.js Platform Also Adds Marketing Chair and Community Manager + +SAN FRANCISCO, Sept.
4, 2015 – The [Node.js Foundation](https://nodejs.org/en/foundation/), a community-led and industry-backed consortium to advance the development of the Node.js platform, today announced key executives have been elected to its Board of Directors. The Board of Directors represents the broad Node.js community and will guide the Foundation as it executes on its mission to enable widespread adoption and help accelerate development of Node.js and other related modules. + +The Node.js Foundation board, which sets the business and technical direction as well as oversees IP management, marketing, and events on behalf of the organization, includes: + +* [Danese Cooper](https://www.linkedin.com/in/danesecooper), chairman of the board, distinguished member of technical staff - open source at PayPal; +* [Scott Hammond](https://www.linkedin.com/pub/scott-hammond/1/a4b/92a), vice-chairman of the board, chief executive officer at Joyent; +* [Brian McCallister](https://www.linkedin.com/in/brianmccallister), silver-level director of the board, chief technology officer of platforms at Groupon; +* [Todd Moore](https://www.linkedin.com/pub/todd-moore/2b/540/798), board member, vice president of open technology at IBM; +* [Steve Newcomb](https://www.linkedin.com/in/stevenewcomb), board member, founder and chief executive officer at Famous Industries; +* [Gianugo Rabellino](https://www.linkedin.com/in/gianugo), secretary of the board, senior director of open source programs at Microsoft; +* [Charlie Robbins](https://www.linkedin.com/in/charlierobbins), gold-level director of the board, director of engineering at GoDaddy.com; +* [Imad Sousou](https://www.linkedin.com/pub/imad-sousou/6/b49/2b8), board member, vice president and general manager at Intel; +* [Rod Vagg](https://www.linkedin.com/in/rvagg), technical steering committee chairperson, chief node officer at NodeSource. + +In addition to formalizing the board, [Bill Fine](https://www.linkedin.com/pub/bill-fine/2/497/916), vice president of product and marketing at Joyent, was elected as the marketing chairperson. The Linux Foundation also hired [Mikeal Rogers](https://www.linkedin.com/in/mikealrogers) as its community manager to help support and guide the new organization. + +“The new board members represent the diversity of the Node.js community and the commitment that these companies have to supporting its overall efforts,” said Danese Cooper, chairman of the board, Node.js Foundation. “Node.js is incredibly important to the developer ecosystem and is increasingly relevant for building applications on devices that are changing the pace of commerce. The board will work to support and build the Node.js platform using the blueprint of an open governance model that is transparent and supportive of its community.” + +In early June, the Node.js and io.js developer community announced that they were merging their respective code base to continue their work in a neutral forum, the Node.js Foundation. The new leaders will help support the ongoing growth and evolution of the combined communities and will foster a collaborative environment to accelerate growth and the platform’s evolution. + +### About Node.js Foundation + +Node.js Foundation is a collaborative open source project dedicated to building and supporting the Node.js platform and other related modules. Node.js is used by tens of thousands of organizations in more than 200 countries and amasses more than 2 million downloads per month. 
It is the runtime of choice for high-performance, low latency applications, powering everything from enterprise applications, robots, API engines, cloud stacks and mobile websites. + +The Foundation is made up of a diverse group of companies including Platinum members Famous, IBM, Intel, Joyent, Microsoft and PayPal. Gold members include GoDaddy, NodeSource and Modulus/Progress Software, and Silver members include Apigee, Codefresh, DigitalOcean, Fidelity, Groupon, nearForm, npm, Sauce Labs, SAP, StrongLoop and YLD!. Get involved here: [https://nodejs.org](https://nodejs.org). + +The Node.js Foundation is a Collaborative Project at The Linux Foundation. Linux Foundation Collaborative Projects are independently funded software projects that harness the power of collaborative development to fuel innovation across industries and ecosystems. [www.linuxfoundation.org](http://www.linuxfoundation.org) diff --git a/locale/uk/blog/announcements/foundation-express-news.md b/locale/uk/blog/announcements/foundation-express-news.md new file mode 100755 index 0000000000000..c8a7c3d87cd09 --- /dev/null +++ b/locale/uk/blog/announcements/foundation-express-news.md @@ -0,0 +1,31 @@ +--- +title: Node.js Foundation to Add Express to its Incubator Program +date: 2016-02-10T21:00:00.000Z +category: Annoucements +slug: Express as Incubator Project +layout: blog-post.hbs +--- + +> Node.js Foundation to Add Express to its Incubator Program + +SAN FRANCISCO, Feb. 10, 2016 — The [Node.js Foundation](https://nodejs.org/en/foundation/), a community-led and industry-backed consortium to advance the development of the Node.js platform, today announced Express, the most popular Node.js web server framework, and some of its constituent modules are on track to become a new incubation project of the Foundation. + +With [53+ million downloads in the last two years](http://npm-stat.com/charts.html?package=express&author=&from=&to=), Express has become one of the key toolkits for building web applications and its stability is essential for many Node.js users, especially those that are just getting started with the platform. Express also underpins some of the most significant projects that support Node.js, including [kraken.js](http://krakenjs.com/), a secure and scalable layer that extends Express and is heavily used by enterprises. Kraken.js was open sourced [by PayPal in 2014](https://www.paypal-engineering.com/2014/03/03/open-sourcing-kraken-js/). It also underpins [Sails.js](http://sailsjs.org/), a web framework that makes it easy to build custom, enterprise-grade Node.js apps, and [Loopback](http://loopback.io/), a Node.js API framework. + +“This framework is critical to a significant portion of many Node.js users,” said Mikeal Rogers, Community Manager of the Node.js Foundation. “Bringing this project into the Node.js Foundation, under open governance, will allow it to continue to be a dependable choice for many enterprises and users, while ensuring that we retain a healthy ecosystem of competing approaches to solving problems that Express addresses.” + +"The work around developing and maintaining Express has been a tremendous asset to the community," said Rod Vagg, Chief Node Officer at NodeSource and Technical Steering Committee Director of the Node.js Foundation. "With 5 million package downloads in the last month, the stability of this project, that will get a huge boost through open governance, is very important to the efforts of the Node.js Foundation in supporting Node.js as a technology and developer ecosystem." 
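Part of what makes Express so widely depended upon is how little code a working service requires. As a rough illustration only (a minimal sketch using the framework's documented `express()`, `app.get()` and `app.listen()` calls, not an excerpt from Express or any Foundation project), a complete HTTP server fits in a few lines:

```js
// Minimal Express application (illustrative sketch only).
const express = require('express');
const app = express();

// Respond to GET / with a plain-text greeting.
app.get('/', (req, res) => {
  res.send('Hello from Express on Node.js');
});

// Start an HTTP server on port 3000.
app.listen(3000, () => {
  console.log('Example app listening on port 3000');
});
```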
+ +“IBM is committed not only to growing and supporting the Node.js ecosystem, but to promoting open governance for the frameworks that enable Node.js developers to work smarter, faster and with more agility,” said Todd Moore, IBM, VP Open Technology. “We are thrilled that Express is being introduced as an incubated top level project of the Foundation. Express has a bright future and a new long term home that will ensure resources, reliability and relevancy of Express to the global Node.js developer community.” + +Assets related to Express are being contributed to the Node.js Foundation by IBM. + +The Node.js Foundation Incubator Program was launched last year. Projects under the Node.js Foundation Incubator Program receive assistance and governance mentorship from the Foundation's Technical Steering Committee and related working groups. The Incubator Program is intended to support the many needs of Node.js users to maintain a competitive and robust ecosystem. + +### About Node.js Foundation + +Node.js is used by tens of thousands of organizations in more than 200 countries and amasses more than 3 million active users per month. It is the runtime of choice for high-performance, low latency applications, powering everything from enterprise applications, robots, API engines, cloud stacks and mobile websites. + +The Foundation is made up of a diverse group of companies including Platinum members Famous, IBM, Intel, Joyent, Microsoft, PayPal and Red Hat. Gold members include GoDaddy, NodeSource and Modulus/Progress Software, and Silver members include Apigee, Codefresh, DigitalOcean, Fidelity, Groupon, nearForm, npm, RisingStack, Sauce Labs, SAP, StrongLoop (an IBM company), YLD!, and Yahoo!. Get involved here: http://nodejs.org. + +The Node.js Foundation is a Collaborative Project at The Linux Foundation. Linux Foundation Collaborative Projects are independently funded software projects that harness the power of collaborative development to fuel innovation across industries and ecosystems. [www.linuxfoundation.org](http://www.linuxfoundation.org) diff --git a/locale/uk/blog/announcements/foundation-v4-announce.md b/locale/uk/blog/announcements/foundation-v4-announce.md new file mode 100755 index 0000000000000..e2db55943a034 --- /dev/null +++ b/locale/uk/blog/announcements/foundation-v4-announce.md @@ -0,0 +1,45 @@ +--- +title: Node.js Foundation Combines Node.js and io.js Into Single Codebase in New Release +date: 2015-09-14T17:00:00.000Z +status: publish +category: Annoucements +slug: foundation-v4-announce +layout: blog-post.hbs +--- + +More Stability, Security, and Improved Test Coverage Appeals to Growing Number of Enterprises Using Node.js + +SAN FRANCISCO, Sept. 14, 2015 – The [Node.js Foundation](https://nodejs.org/en/foundation/), a community-led and industry-backed consortium to advance the development of the Node.js platform, today announced the release of Node.js version 4.0.0. A record number of individuals and companies helped to contribute to the release, which combines both the Node.js project and io.js project in a single codebase under the direction of the Node.js Foundation. + +Currently, Node.js is used by tens of thousands of organizations in more than 200 countries and amasses more than 2 million downloads per month. With major stability and security updates, a new test cluster, support for ARM processors and long-term support, Node.js v4 represents the latest framework innovation for enterprise users leveraging it to run JavaScript programs. 
+ +Named version 4.0.0 because it includes major updates from io.js version 3.0.0, the new release also contains V8 v4.5, the same version of V8 shipping with the Chrome web browser today. This brings with it many bonuses for Node.js users, most notably a raft of new [ES6](https://nodejs.org/en/docs/es6/) features that are enabled by default, including block scoping, classes, typed arrays (Node's Buffer is now backed by Uint8Array), generators, Promises, Symbols, template strings, collections (Map, Set, etc.) and, new in V8 v4.5, arrow functions. + +Node.js v4 also brings a plan for [long-term support (LTS)](https://github.com/nodejs/LTS/) and a regular release cycle. Release versioning now follows the Semantic Versioning Specification, a specification for version numbers of software libraries and similar dependencies, so expect increments of both the minor and patch versions over the coming weeks as bugs are fixed and features are added. The LTS plan will support enterprise users with longer-term stability requirements, while the project continues to innovate and work with the V8 team to ensure that Node.js keeps evolving. + +"Under the Node.js Foundation, our unified community has made incredible progress in developing a converged codebase,” said Mikeal Rogers, Community Manager of The Node.js Foundation. “We believe that the new release and LTS cycles allow the project to continue its innovation and adopt cutting-edge JavaScript features, while also serving the need for predictable long-term stability and security demanded by a growing number of enterprise users who are proudly adopting Node.js as a key technology.” + +Additional updates include: + +* **Stability and Security**: Key Node.js Foundation members, such as IBM, NodeSource and StrongLoop, contributed a strong enterprise focus to the latest release. Their contributions make this latest version more stable and secure for enterprise needs. +* **Improved Platform Test Coverage**: With the assistance of some major partners, including RackSpace, DigitalOcean, Scaleway and ARM Holdings, the new release has built one of the most advanced testing clusters of any major open source project, adding further stability to the platform. +* **First-Class Coverage of ARM variants**: All major ARM variants, ARMv6, ARMv7, and the brand new 64-bit ARMv8, which is making major inroads in the server market, are supported as part of the test infrastructure. Developers who need to use these architectures for developing enterprise-ready and IoT applications are assured a solid runtime. +* **Addition of Arrow Functions**: Node.js v4 now includes arrow functions, an addition that was not previously available even in io.js. + +The technical steering committee for the Node.js Foundation is now 15 members strong, with 40-plus core committers and 350+ GitHub organization members contributing to the community. The development process and release cycles are much faster due to the large, active community united under the Node.js Foundation umbrella. The next release is planned before the end of 2015. In parallel, the project will be branching a new stable line of releases every six months, with one planned in October and another for spring of 2016. + +Additional Resources +* Technical Blog - [Node v4.0.0 (Stable)](https://nodejs.org/en/blog/release/v4.0.0/) +* New GitHub [home](https://github.com/nodejs/node) + +About Node.js Foundation +Node.js Foundation is a collaborative open source project dedicated to building and supporting the Node.js platform and other related modules.
Node.js is used by tens of thousands of organizations in more than 200 countries and amasses more than 2 million downloads per month. It is the runtime of choice for high-performance, low latency applications, powering everything from enterprise applications, robots, API engines, cloud stacks and mobile websites. The Foundation is made up of a diverse group of companies including Platinum members Famous, IBM, Intel, Joyent, Microsoft and PayPal. Gold members include GoDaddy, NodeSource and Modulus/Progress Software, and Silver members include Apigee, Codefresh, DigitalOcean, Fidelity, Groupon, nearForm, npm, Sauce Labs, SAP, StrongLoop and YLD!. Get involved here: [http://nodejs.org](http://nodejs.org). +The Node.js Foundation is a Collaborative Project at The Linux Foundation. Linux Foundation Collaborative Projects are independently funded software projects that harness the power of collaborative development to fuel innovation across industries and ecosystems. [https://nodejs.org/en/foundation/](https://nodejs.org/en/foundation/) + +> Node.js Foundation is a licensed mark of Node.js Foundation. Node.js is a trademark of Joyent, Inc. and is used with its permission + +Media Contact +Node.js Foundation +Sarah Conway +978-578-5300 +pr@nodejs.org diff --git a/locale/uk/blog/announcements/index.md b/locale/uk/blog/announcements/index.md new file mode 100755 index 0000000000000..85cc53467b518 --- /dev/null +++ b/locale/uk/blog/announcements/index.md @@ -0,0 +1,6 @@ +--- +title: Announcements +layout: category-index.hbs +listing: true +robots: noindex, follow +--- diff --git a/locale/uk/blog/announcements/interactive-2015-keynotes.md b/locale/uk/blog/announcements/interactive-2015-keynotes.md new file mode 100755 index 0000000000000..1c5bf33104dfc --- /dev/null +++ b/locale/uk/blog/announcements/interactive-2015-keynotes.md @@ -0,0 +1,62 @@ +--- +title: Keynotes for Node.js Interactive 2015 Announced +date: 2015-11-20T09:00:00.000Z +status: publish +category: Annoucements +slug: interactive-2015-programming +layout: blog-post.hbs +--- + +> Keynotes from GoDaddy, IBM, NodeSource, Uber and More Featured At Inaugural Node.js Foundation Event December 8-9, 2015, in Portland, Ore. + +**SAN FRANCISCO, Nov. 20, 2015** – The [Node.js Foundation](https://nodejs.org/en/foundation/), a community-led and industry-backed consortium to advance the development of the Node.js platform, today announced the final keynotes and programming for [Node.js Interactive](http://events.linuxfoundation.org/events/node-interactive). The event will feature conversations and presentations on everything from the future of Node.js in IoT to collaborations between the community and the enterprise. + +Node.js is the runtime of choice for building mobile, web and cloud applications. The diversity of the technology and its capabilities are making it ubiquitous in almost every ecosystem from personal finance to robotics. To highlight changes with the platform and what’s to come, Node.js Interactive will focus on three tracks: Frontend, Backend and the Internet of Things (IoT). Highlights of these tracks available [here](https://nodejs.org/en/blog/announcements/interactive-2015-programming/); full track sessions [here](http://events.linuxfoundation.org/events/node-interactive/program/schedule). + +Node.js Interactive brings together a broad range of speakers to help experienced and novice Node.js users alike learn tips, best practices, new skills, as well as gain insight into future developments for the technology. 
+ +2015 Node.js Interactive keynotes include: + +### Day 1, December 8, 2015 + +* Jason Gartner, Vice President, WebSphere Foundation and PureApplication Dev at IBM +* James Snell, Open Technologies at IBM, “Convergence: Evolving Node.js with Open Governance and an Open Community” +* Joe McCann, Co-Founder and CEO at NodeSource, “Enterprise Adoption Rates and How They Benefit the Community” +* Ashley Williams, Developer Community and Content Manager at npm +* Tom Croucher, Engineering Manager at Uber, “Node.js at Uber” + +### Day 2, December 9, 2015 + +* Mikeal Rogers, Node.js Foundation Community Manager at The Linux Foundation, “Node.js Foundation Growth and Goals” +* Danese Cooper, Distinguished Member of Technical Staff - Open Source at PayPal and Node.js Foundation Chairperson +* Panel Discussion with Node.js Foundation [Technical Steering Committee members](https://nodejs.org/en/foundation/tsc/) + +In addition to keynotes, Node.js Foundation will have breakout sessions and panels discussing how Node.js is used in some of the largest and fastest growing organizations. + +**These include:** + +* Robert Schultz, Applications Architect at Ancestry +* Azat Mardan, Technology Fellow at Capital One +* Charlie Robbins, Director of Engineering UX Platform at GoDaddy +* Chris Saint-Amant, Director of UI Engineering at Netflix +* Kim Trott, Director of UI Platform Engineering at Netflix +* Bill Scott, VP of Next Generation Commerce at PayPal +* Panel - APIs in Node.js with GoDaddy, Symantec, and StrongLoop Inc. +* Panel - Node.js and Docker with Joyent, Ancestry and nearForm +* Panel - Node.js in the Media with Condé Nast, Mic and Bloomberg + +“Our list of speakers and breakout sessions is a great way to dive head first into Node.js, whether you are new to the platform or an expert,” said Mikeal Rogers, Community Manager, Node.js Foundation. “It is a great way for the community to come together, learn, share and better understand where the technology is heading in the future. The case studies, keynotes and breakout sessions showcased at the event show how rapidly Node.js is growing in the enterprise.” + +Standard registration closes November 27, 2015, after which the conference price will increase from $425 to $525. To register visit [https://www.regonline.com/Register/Checkin.aspx?EventID=1753707](https://www.regonline.com/Register/Checkin.aspx?EventID=1753707). + +Node.js Interactive is made possible by Platinum sponsor IBM; Gold sponsors Joyent, Microsoft, Modulus Inc. and Red Hat; and Silver sponsors NodeSource, nearForm, npm and Synchro. + +### Additional Resources + +* Learn more about the [Node.js Foundation](https://nodejs.org/en/foundation/), and get involved with the [project](https://nodejs.org/en/get-involved/). +* Want to keep abreast of Node.js Foundation news? Sign up for our newsletter at the bottom of the [Node.js Foundation page](https://nodejs.org/en/foundation/). +* Follow on [Twitter](https://twitter.com/nodejs?ref_src=twsrc^google|twcamp^serp|twgr^author) and [Google+](https://plus.google.com/u/1/100598160817214911030/posts). + +About Node.js Foundation +Node.js Foundation is a collaborative open source project dedicated to building and supporting the Node.js platform and other related modules. Node.js is used by tens of thousands of organizations in more than 200 countries and amasses more than 2 million downloads per month.
It is the runtime of choice for high-performance, low latency applications, powering everything from enterprise applications, robots, API engines, cloud stacks and mobile websites. The Foundation is made up of a diverse group of companies including Platinum members Famous, IBM, Intel, Joyent, Microsoft, PayPal and Red Hat. Gold members include GoDaddy, NodeSource and Modulus/Progress Software, and Silver members include Apigee, Codefresh, DigitalOcean, Fidelity, Groupon, nearForm, npm, Sauce Labs, SAP, and YLD!. Get involved here: [http://nodejs.org](http://nodejs.org). +The Node.js Foundation is a Collaborative Project at The Linux Foundation. Linux Foundation Collaborative Projects are independently funded software projects that harness the power of collaborative development to fuel innovation across industries and ecosystems. [https://nodejs.org/en/foundation/](https://nodejs.org/en/foundation/) diff --git a/locale/uk/blog/announcements/interactive-2015-programming.md b/locale/uk/blog/announcements/interactive-2015-programming.md new file mode 100755 index 0000000000000..c06a9ae0de877 --- /dev/null +++ b/locale/uk/blog/announcements/interactive-2015-programming.md @@ -0,0 +1,60 @@ +--- +title: Node.js Foundation Announces Programming For Node.js Interactive +date: 2015-10-20T17:00:00.000Z +status: publish +category: Annoucements +slug: interactive-2015-programming +layout: blog-post.hbs +--- + +> Inaugural Conference to Advance the Use of Node.js Within Backend, Frontend, IoT Applications + +SAN FRANCISCO, Oct. 20, 2015 – [The Node.js Foundation](https://nodejs.org/en/foundation/), a community-led and industry-backed consortium to advance the development of the Node.js platform, today announced initial programming for [Node.js Interactive](http://events.linuxfoundation.org/events/node-interactive). This inaugural event, which is being led by the newly formed Node.js Foundation in cooperation with the Linux Foundation, will be held December 8-9, 2015, in Portland, Ore. + +Node.js has become ubiquitous in almost every technology ecosystem and is increasingly used in mainstream enterprises. To continue to evolve the platform, Node.js Interactive brings together a wide range of communities, projects, products and companies to create an educational and collaborative space. With more than 700 attendees expected, Node.js Interactive will provide a way to network with other developers and engineers within this diverse community. + +Node.js Interactive will also focus on three tracks: Frontend, Backend and the Internet of Things (IoT); talks for each track were selected in collaboration with track chairs [Jessica Lord](https://github.com/jlord/) (Frontend), [C J Silverio](https://github.com/ceejbot) (Backend) and [Kassandra Perch](https://github.com/nodebotanist) (IoT). A few highlights include: + +Frontend Session Highlights: +* JavaScript, For Science! *with* Max Ogden, Computer Programmer for Dat Project
+* Making Your Node.js Applications Debuggable *with* Patrick Mueller, Senior Node Engineer at NodeSource +* Node Intl: Where We Are, What's Next *with* Steven Loomis, Senior Software Engineer at IBM +* Rapid Development of Data Mining Applications in Node.js *with* Blaz Fortuna, Research Consultant for Bloomberg L.P., Senior Researcher at Jožef Stefan Institute and Partner at Quintelligence +* Real-Time Collaboration Sync Strategies *with* Todd Kennedy, CTO of Scripto +* Rebuilding the Ship as It Sails: Making Large Legacy Sites Responsive *with* Philip James, Senior Software Engineer at Eventbrite + +Backend Session Highlights: +* Building and Engaging High-Performance Teams in the Node.js Ecosystem *with* Chanda Dharap, Director of Engineering at StrongLoop, an IBM company +* Microservice Developer Experience *with* Peter Elger, Director of Engineering at nearForm +* Modernizing Winston for Node.js v4 *with* Charlie Robbins, Director of Engineering UX Platform at GoDaddy +* Node.js API Pitfalls, Can You Spot Them? *with* Sam Roberts, Node/Ops Developer at StrongLoop, an IBM Company +* Node.js Performance Optimization Case Study *with* Bryce Baril, Senior Node Engineer at NodeSource +* Resource Management in Node.js *with* Bradley Meck, Software Engineer at NodeSource + +IoT Session Highlights: +* Contributing to Node Core *with* Jeremiah Senkpiel, Node Core Contributor at NodeSource +* Hands on Hardware Workshop with Tessel *with* Kelsey Breseman, Engineering Project Manager at 3D Robotics and Steering Committee Member and Board Co-Creator of Tessel Project +* Internet of Cats *with* Rachel White, Front-End Engineer for IBM Watson +* IoT && Node.js && You *with* Emily Rose, Senior Software Engineer at Particle IO +* Node.js Bots at Scale *with* Matteo Collina, Architect at nearForm +* Node.js Development for the Next Generation of IoT *with* Melissa Evers-Hood, Software Product Line Manager at Intel Corporation +* Node.js While Crafting: Make Textile to Compute! *with* Mariko Kosaka, JavaScript Engineer at Scripto + +“Node.js has become pervasive within the last few years, with so many community accomplishments to highlight, including forming the new Node.js Foundation and the convergence of io.js and Node.js,” said Mikeal Rogers, Community Manager, Node.js Foundation. “We created this conference to help showcase this growth, to accommodate the Node.js community’s many different needs, and to help accelerate adoption as it expands into enterprises.” + +Early bird registration ends October 23, 2015. Standard registration closes November 21, 2015, after which the conference price will increase from $425 to $525. Discounted hotel rates are also available until Wednesday, November 11, 2015. To register visit [https://www.regonline.com/Register/Checkin.aspx?EventID=1753707](https://www.regonline.com/Register/Checkin.aspx?EventID=1753707). + +Node.js Interactive is made possible by platinum sponsor IBM, gold sponsor Microsoft, and silver sponsors NodeSource and nearForm. + +Additional panels and keynotes will be announced in the coming weeks; to see the initial program visit: [http://nodejspdx2015.sched.org](http://nodejspdx2015.sched.org). For more information visit [http://events.linuxfoundation.org/events/node-interactive](http://events.linuxfoundation.org/events/node-interactive).
+ +Additional Resources + +Learn more about the [Node.js Foundation](https://nodejs.org/en/foundation/), and get involved with [the project](https://nodejs.org/en/get-involved/). +Want to keep abreast of Node.js Foundation news? Sign up for our newsletter at the bottom of the [Node.js Foundation page](https://nodejs.org/en/foundation/). +Follow on [Twitter](https://twitter.com/nodejs?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) and [Google+](https://plus.google.com/u/1/100598160817214911030/posts). + +About Node.js Foundation
Node.js Foundation is a collaborative open source project dedicated to building and supporting the Node.js platform and other related modules. Node.js is used by tens of thousands of organizations in more than 200 countries and amasses more than 2 million downloads per month. It is the runtime of choice for high-performance, low latency applications, powering everything from enterprise applications, robots, API engines, cloud stacks and mobile websites. + +The Foundation is made up of a diverse group of companies including Platinum members Famous, IBM, Intel, Joyent, Microsoft, PayPal and Red Hat. Gold members include GoDaddy, NodeSource and Modulus/Progress Software, and Silver members include Apigee, Codefresh, DigitalOcean, Fidelity, Groupon, nearForm, npm, Rising Stack, Sauce Labs, SAP, and YLD!. Get involved here: [http://nodejs.org](http://nodejs.org). +The Node.js Foundation is a Collaborative Project at The Linux Foundation. Linux Foundation Collaborative Projects are independently funded software projects that harness the power of collaborative development to fuel innovation across industries and ecosystems. [https://nodejs.org/en/foundation/](https://nodejs.org/en/foundation/) diff --git a/locale/uk/blog/announcements/interactive-2015.md b/locale/uk/blog/announcements/interactive-2015.md new file mode 100755 index 0000000000000..f474a337e5675 --- /dev/null +++ b/locale/uk/blog/announcements/interactive-2015.md @@ -0,0 +1,26 @@ +--- +title: Node.js Interactive +date: 2015-09-10T17:00:00.000Z +status: publish +category: Annoucements +slug: interactive-2015 +layout: blog-post.hbs +--- +Are You Ready for Node.js Interactive? + +The Node.js Foundation is pleased to announce [Node.js Interactive](http://interactive.nodejs.org) happening from December 8-9, 2015 in Portland, OR. With node.js growing in all aspects of technology, the gathering will cover everything from streamlining development of fast websites and real-time applications to tips for managing node.js applications, and much more. + +The event will be the first of its kind under the Node.js Foundation led in cooperation with The Linux Foundation. Vendor-neutral by design, it will focus on the continued ideals of open governance collaboration between the now joined node.js and io.js community. The conference welcomes experienced developers as well as those interested in how node.js might be of use to their business with tracks that focus on IoT, front-end and back-end technologies. To curate these tracks and create the best experience for attendees, track chairs include seasoned veterans: + +* [Kassandra Perch](https://github.com/nodebotanist) for IoT, a software developer / evangelist / advocate / educator / roboticist living in Austin, TX, who you can follow at: [@nodebotanist](https://twitter.com/nodebotanist). +* [Jessica Lord](https://github.com/jlord/) for Front-End, a GitHub developer and designer who loves open source, JavaScript & node.js, and stories of Tudor England and is a Portland transplant. +* [C J Silverio](https://github.com/ceejbot) for Back-End, who is all node, all the time and works as VP of engineering at npm, Inc. in the Bay area. + +As the Node.js community continues to grow, the Node.js Foundation believes this event is the perfect place to continue to develop collaboration and better understand what’s next for this extremely popular technology. Interested in joining us? Register [here](http://events.linuxfoundation.org/events/node-interactive/attend/register). 
Timeline for discount rates are as follows: + +* Super Early Bird - US$200 for the 1st 100 tickets +* Early Bird - US$325, ends October 17 +* Standard - US$425, ends November 21 +* Late & Onsite - US$525, begins November 22 + +If you are interested in becoming a speaker, please check out our [Call For Participation](http://events.linuxfoundation.org/events/node-interactive/program/cfp) page for more details. Call for Participation closes on September 24, 2015. diff --git a/locale/uk/blog/announcements/nodejs-foundation-survey.md b/locale/uk/blog/announcements/nodejs-foundation-survey.md new file mode 100755 index 0000000000000..aa7aa45475d94 --- /dev/null +++ b/locale/uk/blog/announcements/nodejs-foundation-survey.md @@ -0,0 +1,119 @@ +--- +title: New Node.js Foundation Survey Reports New “Full Stack” In Demand Among Enterprise Developers +date: 2016-04-12T13:00:00.000Z +status: publish +category: Annoucements +slug: nodejs-foundation-survey +layout: blog-post.hbs +--- + +> Nearly 50 percent of Node.js developers surveyed using container technology, strong growth emerges in cloud, front end, mobile and devices + +**SAN FRANCISCO, April, 12, 2016** — [The Node.js Foundation](http://ctt.marketwire.com/?release=11G082331-001&id=8448115&type=0&url=https%3a%2f%2fnodejs.org%2fen%2ffoundation%2f), +a community-led and industry-backed consortium to advance the development of the Node.js +platform, today announced the availability of its first ever Node.js User Survey Report. + +With over 3.5 million users and an annual growth rate of 100 percent, Node.js is emerging as +a universal platform used for web applications, IoT, and enterprise. The Node.js User Survey +report features insights on emerging trends happening in this massive community that serves +as a leading indicator on trends like microservices architectures, real-time web applications, +Internet of Things (IoT). The report paints a detailed picture of the technologies that are +being used, in particular, with Node.js in production and language preferences (current and +future) for front end, back end and IoT developers. + +## Key findings from the Node.js Foundation survey + +### Node.js and Containers Take Off Together + +Both Node.js and containers are a good match for efficiently developing and deploying +microservices architectures. And, while the surge in container use is relatively new, **45 +percent of developers that responded to the survey use Node.js with the technology**. Other +container-related data points: + +* 58 percent of respondents that identified as IoT developers use Node.js with Docker. +* 39 percent of respondents that identified as back end developers use Node.js with Docker. +* 37 percent of respondents that identified as front end developers use Node.js with Docker. + +### Node.js — the Engine that Drives IoT + +JavaScript and Node.js have risen to be the language and platform of choice for IoT as both +are suited for data intensive environments that require parallel programming without +disruption. JavaScript, including Node.js and frameworks, such as React, have become the de +facto choice of developers working in these connected, device-driven environments with **96 +percent of IoT respondents indicating they use JavaScript/Node.js for development**. + +“Data about developer choices is catnip for developers,” said James Governor, RedMonk +co-founder. 
“In this survey, the Node.js Foundation identifies some interesting results, +notably about languages programmers are using alongside Node.js and IoT demographics.” + +These environments are challenging, and the survey revealed that on average, IoT developers +using Node.js have more experience than their front end and back end counterparts with more +than 40 percent of IoT developers surveyed having over 10+ years of development experience. + +Additionally, although Docker is a server technology, many IoT developers (58%) are using +Node.js with Docker compared to only 39 percent of back end developers. This metric is +significant as it means that the new IoT world also is quickly adopting containers and +microservices. + +### Node.js Becoming Universal Platform + +**The full stack is no longer “front end and back end,” but rather “front end, back end and +connected devices,”** which is a combination of everything from the browser to a toaster all +being run in JavaScript and enabled by Node.js. The survey revealed that 62 percent of +respondents are using Node.js for both front end and back end development, and nearly 10 +percent are using Node.js for front end, back end, and IoT development. + +### Node.js Pervasive in Enterprises + +Node.js is increasingly used in the enterprise, and used within huge enterprises like PayPal, +Go Daddy, Capital One, and Intel. The survey found: + +* **More than 45 percent already using the Node.js Long Term Support release (v4) geared +toward medium to large enterprise users who require stability and high performance.** +* Of those who haven’t upgraded, 80 percent report definite plans to upgrade to v4, with half +of respondents planning to do so this year. +* Strong interest in enterprise tooling among 34 percent of tech leaders. + +### Full “MEAN” Stack Explodes + +The popularity of real-time, social networking and interactive game applications is pushing a +new stack among developers. The MEAN stack is able to handle lots of concurrent connections +and extreme scalability, which these applications demand. Node.js, in combination with +MongoDB, Express, AngularJS, allows developers to tackle the needs of front end and back end +development. Not surprisingly, all of these technologies were commonly used alongside +Node.js. **Express, cited the most, is used by an average of 83 percent of developers**. + +### Popularity of JavaScript and Node.js + +JavaScript and Node.js were popular among back end, front end, and IoT developers. Other +languages, beyond JavaScript, that were popular for all three developer types included PHP, +Python and Java. However, when looking to the future, back end, front end and IoT developers +planned to decrease their use of Java, .Net and PHP (PHP averages a 15% decrease) and +increase the use of Python and C++. + +## About the Survey + +The survey was open for 15 days, from January 13 to January 28, 2016. During this time, 1,760 +people from around the world completed the survey. Seventy percent were developer's, 22 +percent technical management and 64 percent run Node.js in production. Geographic +representation of survey covered: 35 percent from United States, 22 percent from Continental +Europe, 6 percent India, and 6 percent from United Kingdom with the remaining respondents +hailing from Asia, Latin America, Africa, Russia and the Middle East. 
+ +**Additional Resources:** +* [Node.js Foundation User survey infographic](/static/documents/2016-survey-infographic.png) +* [Report summarizing Node.js Foundation User Survey 2016](/static/documents/2016-survey-report.pdf) + +**About Node.js Foundation** + +Node.js is used by tens of thousands of organizations in more than 200 countries and amasses +more than 3 million active users per month. It is the runtime of choice for high-performance, +low latency applications, powering everything from enterprise applications, robots, API +engines, cloud stacks and mobile websites. + +The Foundation is made up of a diverse group of companies including Platinum members Famous, +IBM, Intel, Joyent, Microsoft, PayPal and Red Hat. Gold members include GoDaddy, NodeSource +and Modulus/Progress Software, and Silver members include Apigee, AppDynamics, Codefresh, +DigitalOcean, Fidelity, Google, Groupon, nearForm, New Relic, npm, Opbeat, RisingStack, Sauce +Labs, SAP, StrongLoop (an IBM company), Sphinx, YLD!, and Yahoo!. Get involved here: +[http://nodejs.org](http://nodejs.org). diff --git a/locale/uk/blog/announcements/v6-release.md b/locale/uk/blog/announcements/v6-release.md new file mode 100755 index 0000000000000..806d000451b61 --- /dev/null +++ b/locale/uk/blog/announcements/v6-release.md @@ -0,0 +1,80 @@ +--- +title: World’s Fastest Growing Open Source Platform Pushes Out New Release +date: 2016-04-26T12:00:00.000Z +status: publish +category: Annoucements +slug: v6-release +layout: blog-post.hbs +--- + +> New “Current” version line focuses on performance improvements, increased reliability and +better security for its 3.5 million users + +SAN FRANCISCO, April, 26, 2016 — [The Node.js Foundation](http://ctt.marketwire.com/?release=11G082331-001&id=8448115&type=0&url=https%3a%2f%2fnodejs.org%2fen%2ffoundation%2f), a +community-led and industry-backed consortium to advance the development of the Node.js +platform, today announced the release of Node.js version 6 (Node.js v6). This release +provides major performance improvements, increased reliability and better security. + +With over 3.5 million users and an annual growth rate of 100 percent, Node.js is emerging as +a universal platform used for web applications, IoT, mobile, enterprise application +development, and microservice architectures. The technology is ubiquitous across numerous +industries, from startups to Fortune 500 companies, and is the only unified platform that +full stack JavaScript developers can use for front end, back end, mobile and IoT projects. + +Performance improvements are key in this latest release with one of the most significant +improvements coming from module loading, which is currently four times faster than Node.js +version 4 (Node.js v4). This will help developers dramatically decrease the startup time of +large applications for the best productivity in development cycles and more seamless +experience with end users. In addition, Node.js v6 comes equipped with v8 JavaScript engine +5.0, which has improved ECMAScript 2015 (ES6) support. Ninety-three percent of +[ES6](http://node.green/) features are also now supported in the Node.js v6 release, up from +56 percent for Node.js v5 and 50 percent for Node.js v4. Key features from ES6 include: +default and rest parameters, destructuring, class and super keywords. + +Security is top-of-mind for enterprises and startups alike, and Node.js v6 has added several +features that impact security, making it easier to write secure code. 
+
+“The Node.js Project has done an incredible job of bringing this version to life in the
+timeline that we initially proposed in September 2015. It’s important for us to continue to
+deliver new versions of Node.js equipped with all the cutting-edge JavaScript features to
+serve the needs of developers and to continue to improve the performance and stability
+enterprises rely on,” said Mikeal Rogers, Community Manager of the Node.js Foundation. “This
+release is committed to Long Term Support, which allows predictable long-term stability,
+reliability, performance and security to the growing number of enterprise users that are
+adopting Node.js as a key technology in their infrastructure.”
+
+To increase the reliability of Node.js, documentation and testing around Node.js v6 have been
+expanded for enterprises that are using, or looking to implement, the platform.
+
+Node.js release versioning follows the Semantic Versioning Specification, a specification for
+versioning software libraries and their dependencies. Under the Node.js [Long-Term
+Support (LTS) plan](https://github.com/nodejs/LTS/), version 6 is now the “Current” release line
+while version 5 will be maintained for a few more months. In October 2016, Node.js v6 will
+become the LTS release, and the current LTS release line (version 4) will enter maintenance
+mode in April 2017, meaning only critical bugs, critical security fixes and documentation
+updates will be permitted. Users should begin transitioning from v4 to v6 in October when v6
+goes into LTS.
+
+**Additional Resources:**
+* [Download version 6](https://nodejs.org/download/release/v6.0.0/)
+* [Download version 4](https://nodejs.org/en/download/)
+* [Technical blog with additional new features and updates](https://nodejs.org/en/blog/)
+
+**About Node.js Foundation**
+
+Node.js is used by tens of thousands of organizations in more than 200 countries and amasses
+more than 3.5 million active users per month. It is the runtime of choice for
+high-performance, low-latency applications, powering everything from enterprise applications
+and robots to API engines, cloud stacks and mobile websites.
+
+The Foundation is made up of a diverse group of companies including Platinum members Famous,
+IBM, Intel, Joyent, Microsoft, PayPal and Red Hat. Gold members include GoDaddy, NodeSource
+and Modulus/Progress Software, and Silver members include Apigee, AppDynamics, Codefresh,
+DigitalOcean, Fidelity, Google, Groupon, nearForm, New Relic, npm, Opbeat, RisingStack, Sauce
+Labs, SAP, StrongLoop (an IBM company), Sphinx, YLD!, and Yahoo!. Get involved here:
+[https://nodejs.org](https://nodejs.org).
diff --git a/locale/uk/blog/announcements/welcome-google.md b/locale/uk/blog/announcements/welcome-google.md
new file mode 100755
index 0000000000000..944d7b8b67d49
--- /dev/null
+++ b/locale/uk/blog/announcements/welcome-google.md
@@ -0,0 +1,18 @@
+---
+title: Welcome Google Cloud Platform!
+date: 2016-03-29T13:00:00.000Z
+status: publish
+category: Annoucements
+slug: welcome-google
+layout: blog-post.hbs
+---
+
+Google Cloud Platform joined the Node.js Foundation today. This news comes on the heels of the Node.js runtime going into beta on [Google App Engine](https://cloudplatform.googleblog.com/2016/03/Node.js-on-Google-App-Engine-goes-beta.html), a platform that makes it easy to build scalable web applications and mobile backends across a variety of programming languages.
+
+In the industry, there has been a lot of conversation around a third wave of cloud computing that focuses less on infrastructure and more on microservices and container architectures. Node.js, a cross-platform runtime environment supported by a large ecosystem of open source modules, is a perfect platform for these types of environments. It’s incredibly resource-efficient, high-performing and well suited to scaling. This is one of the main reasons why Node.js is heavily used by IoT developers who are working with microservices environments.
+
+“Node.js is emerging as the platform in the center of a broad full stack, consisting of front end, back end, devices and the cloud,” said Mikeal Rogers, community manager of the Node.js Foundation. “By joining the Node.js Foundation, Google is increasing its investment in Node.js and deepening its involvement in a vibrant community. Having more companies join the Node.js Foundation helps solidify Node.js as a leading universal development environment.”
+
+In addition to joining the Node.js Foundation, Google develops the V8 JavaScript engine, which powers both Chrome and Node.js. The V8 team is working on infrastructural changes to improve the Node.js development workflow, including making it easier to build and test Node.js on V8’s continuous integration system. Google V8 contributors are also involved in the Core Technical Committee.
+
+The Node.js Foundation is very excited to have Google Cloud Platform join our community and looks forward to helping developers continue to use Node.js everywhere.
diff --git a/locale/uk/blog/announcements/welcome-redhat.md b/locale/uk/blog/announcements/welcome-redhat.md
new file mode 100755
index 0000000000000..f5256dd278f83
--- /dev/null
+++ b/locale/uk/blog/announcements/welcome-redhat.md
@@ -0,0 +1,31 @@
+---
+title: Node.js Foundation Welcomes Red Hat as Newest Platinum Member
+date: 2015-10-06T12:30:00.000Z
+status: publish
+category: Annoucements
+slug: welcome-redhat
+layout: blog-post.hbs
+---
+
+# Node.js Foundation Welcomes Red Hat as Newest Platinum Member
+
+> Company Looks to Accelerate Node.js Adoption for Enterprise Software Development
+
+**SAN FRANCISCO, Oct. 6, 2015** – The [Node.js Foundation](https://nodejs.org/en/foundation/), a community-led and industry-backed consortium to advance the development of the Node.js platform, today announced Red Hat, Inc. has joined the Foundation as a Platinum member. Red Hat joins Platinum members Famous, IBM, Intel, Joyent, Microsoft and PayPal in providing support for the adoption, development and long-term success of the Node.js project.
+
+Node.js is the runtime of choice for high-performance, low-latency applications, powering everything from enterprise applications to robots. Over the last two years, more large enterprises, including Red Hat, IBM, PayPal, Fidelity, and Microsoft, have adopted Node.js as part of their enterprise fabric. Today there are 2 million unique IP addresses installing Node.js packages, and more than 2 billion packages were downloaded in the last month.
+ +Often used for building fast, scalable network applications, Node.js supports Red Hat technologies such as [Red Hat Mobile Application Platform](https://www.redhat.com/en/technologies/mobile/application-platform), and is available in [OpenShift by Red Hat](https://www.openshift.com/) and [Red Hat Software Collections](http://developerblog.redhat.com/tag/software-collections/). As a new member, Red Hat is providing financial support, technical contributions, and high-level policy guidance for the newly formed Foundation that operates as a neutral organization to support the project governed by the Node.js community. + +“Node.js has become an important tool for developers who need to build and deploy a new generation of highly responsive, scalable applications for mobile and Internet of Things (IoT),” said Rich Sharples, senior director, Product Management at Red Hat. “We welcome deeper collaboration with the Node.js Foundation and broader community, and look forward to helping increase the role that the technology plays in enterprise software development.” + +“Node.js is exploding in popularity in almost every aspect of technology from microservices architecture to data-intensive applications that run across distributed devices,” said Danese Cooper, Chairperson of the Node.js Foundation Board. “It is a pivotal moment for the technology, and the support of Foundation members is imperative to ensure that Node.js stays relevant and addresses topical projects and problems that are happening within the wider Node.js community.” + +Additional Resources +* Learn more about the [Node.js Foundation](https://nodejs.org/en/foundation/) and get involved with [the project](https://nodejs.org/en/get-involved/). + +### About Node.js Foundation + +Node.js Foundation is a collaborative open source project dedicated to building and supporting the Node.js platform and other related modules. Node.js is used by tens of thousands of organizations in more than 200 countries and amasses more than 2 million downloads per month. It is the runtime of choice for high-performance, low latency applications, powering everything from enterprise applications, robots, API engines, cloud stacks and mobile websites. The Foundation is made up of a diverse group of companies including Platinum members Famous, IBM, Intel, Joyent, Microsoft, PayPal and Red Hat. Gold members include GoDaddy, NodeSource and Modulus/Progress Software, and Silver members include Apigee, Codefresh, DigitalOcean, Fidelity, Groupon, nearForm, npm, Sauce Labs, SAP, and YLD!. Get involved here: [http://nodejs.org](http://nodejs.org). + +The Node.js Foundation is a Collaborative Project at The Linux Foundation. Linux Foundation Collaborative Projects are independently funded software projects that harness the power of collaborative development to fuel innovation across industries and ecosystems. 
[https://nodejs.org/en/foundation/](https://nodejs.org/en/foundation/)
diff --git a/locale/uk/blog/community/building-nodejs-together.md b/locale/uk/blog/community/building-nodejs-together.md
new file mode 100755
index 0000000000000..b529069745f80
--- /dev/null
+++ b/locale/uk/blog/community/building-nodejs-together.md
@@ -0,0 +1,188 @@
+---
+title: Building Node.js Together
+author: tjfontaine
+date: 2014-07-29T21:00:00.000Z
+status: publish
+category: Community
+slug: building-nodejs-together
+layout: blog-post.hbs
+---
+
+Node.js is reaching more people than ever: it's attracting new and interesting
+use cases while seeing heavy adoption from traditional engineering departments.
+Managing the project to make sure it continues to satisfy the needs of its end
+users requires a higher level of precision and diligence. It requires taking
+the time to communicate and reach out to new and old parties alike. It means
+seeking out new and dedicated resources. It means properly scoping a change in
+concert with end users, and documenting and regularly checkpointing your
+progress. These are just some of the ways we're working to improve our process
+and deliver higher quality software that meets our goals.
+
+## Documentation
+
+One of the big things we've wanted to do is to change the way the website
+works, which is something I've [mentioned
+before](http://blog.nodejs.org/2014/01/16/nodejs-road-ahead/). It should be a
+living, breathing website whose content is created by our end users and team.
+The website should be the canonical location for documentation on how to use
+Node.js, how Node.js works, and how to find out what's going on in the Node
+community. We have seeded the initial documentation with [how to
+contribute](https://nodejs.org/en/get-involved/contribute/), [who the core team
+is](https://nodejs.org/en/about/organization/#index_md_technical_steering_committee),
+and some basic documentation of the [project
+itself](https://nodejs.org/en/about/organization). From there we're looking to
+enable the community to come in and build out the rest of the framework for
+documentation.
+
+One of the key changes here is that we're extending the tools that generate API
+documentation to work for the website in general. That means the website is now
+written in markdown. Contributions follow the same
+[pull-request](https://nodejs.org/en/get-involved/contribute/#code-contributions)
+process as contributions to Node itself. The intent here is to be able to quickly
+generate new documentation and improve it with feedback from the community.
+
+The website should also be where we host information about where the project is
+going and the features we're currently working on (more about that later). But
+it's crucial we communicate to our end users what improvements will be coming,
+and the reasons we've made those decisions. That way it's clear what is coming
+in which release, and it can also inspire you to collaborate on the design of
+that API. This is not a replacement for our issue tracking, but an enhancement
+that can allow us to reach more people.
+
+## Features
+
+Which brings us to the conversation about features. During the Q & A portions
+of the [Node.js on the
+Road](http://blog.nodejs.org/2014/06/11/notes-from-the-road/) events there are
+often questions about what does and doesn't go into core, how the team
+identifies those features, and when we decide to integrate them.
+I've spent a lot of time talking about that, but I've also
+[added](https://nodejs.org/en/about/organization) it to the new documentation on
+the site.
+
+It's pretty straightforward: in short, if Node.js needs an interface to
+provide an abstraction, or if everyone in the community is using the same
+interface, then that interface is a candidate for being exposed as a public
+interface for Node. But what's important is that the addition of an API should
+not be taken lightly. It is important for us to consider just how much of an
+interface we can commit to, because once we add an API it's incredibly hard
+for us to change or remove it, at least in a way that allows people to write
+software that will continue to work.
+
+So new features and APIs need to come with known use cases and consumers, and
+with working test suites. That information should be presented clearly and
+concisely on the website to reach as wide an audience as possible. Then and
+only then, when we have an implementation that meets the design specification
+and satisfies the test suite, will it be integrated into the project. That's
+how we'll scope our releases going forward, and that's how we'll know when
+we're ready to release a new version of Node. This will be a great change for
+Node, as it's a step toward an always-production-ready master branch.
+
+## Quality Software
+
+And it's because Node.js is focused on quality software and a commitment to
+backwards compatibility that it's important for us to seek ways to get more
+information from the community about when and where we might be breaking their
+code. Having downstream users test their code bases with recent versions of
+Node.js (even from our master branch) is an important way we derive feedback
+for our changes. The sooner we get that information and the more test coverage
+we add, the better the software we deliver will be.
+
+Recently I had the opportunity to speak with [Dav
+Glass](http://twitter.com/davglass) from [Yahoo!](http://yahoo.com), and we're
+going to be finding ways to get automated test results back from some larger
+test suites. The more automation we can get for downstream integration testing,
+the better the project can be at delivering quality software.
+
+If you're interested in participating in the conversation about how Node.js can
+proactively test your software and modules when we've changed things, please
+[join the conversation](http://github.com/joyent/node/issues).
+
+## Current release
+
+Before we can release v0.12, we need to ensure we're providing a high quality
+release that addresses the needs of the users as well as what we've previously
+committed to as going into this release. Sometimes what seems like an
+innocuous change that solves an immediate symptom doesn't actually treat the
+disease, but instead results in other symptoms that need to be treated.
+Specifically in our streams API, it can be easy to subtly break users while
+trying, with good intent, to fix another bug.
+
+This serves as a reminder that we need to properly scope our releases. We need
+to know who the consumers are for new APIs and features. We need to make sure
+those features' test cases are met. We need to make sure we're adopting APIs
+that have broad appeal. And while we're able to work around some of these
+things through external modules and experimenting with JavaScript APIs, that's
+not a replacement for quality engineering.
+
+Those are the things that we could have done better before embarking on 0.12,
+and now to release it we need to fix some of the underlying issues. Moving
+forward, I'm working with consumers of the tracing APIs to get a maintainable
+interface for Node that will satisfy their needs. We'll publicly document those
+things, we'll reach out to other stakeholders, and we'll make sure that, as we
+implement it, we deliver discretely on what they need.
+
+That's why it's important for us to get our releases right, and diagnose and
+fix root causes. We want to make sure that your first experience with 0.12
+results in your software still working. This is why we're working with large
+production environments to get their feedback, and we're asking those
+environments, and you, to [file bugs](https://github.com/joyent/node/issues)
+that you find.
+
+## The Team
+
+The great part about Node's contribution process and our fantastic community is
+that we have a lot of very enthusiastic members who want to work as much as
+possible on Node. Maybe they want to contribute because they have free time,
+maybe they want to contribute to make their job easier, or perhaps they want to
+contribute because their company wants them to spend their time on open source.
+Whatever the reason, we welcome contributions of every stripe!
+
+We have our core team that manages the day-to-day of Node, and that works
+mostly through people who want to maintain subsystems. They are not solely
+responsible for the entirety of that subsystem, but they guide its progress by
+communicating with end users, reviewing bugs and pull requests, and identifying
+test cases and consumers of new features. People come and go from the core
+team, and recently we've added [some
+documentation](https://nodejs.org/en/about/organization) that describes how you
+find your way onto that team. It's based largely around our contribution
+process. It's not about who you work for, or about who you know; it's about
+your ability to provide technical improvement to the project itself.
+
+For instance, Chris Dickinson was recently hired to work full time on Node.js,
+and has expressed interest in working on the current and future state of
+streams. But it's not who employs Chris that makes him an ideal candidate; it's
+the quality of his contributions and his understanding of the ethos of Node.js.
+That's how we find members of the team. And Chris gets that. In
+[his blog](http://neversaw.us/2014/05/08/on-joining-walmart-labs/) about
+working full time on Node.js, he says (and I couldn't have said it better
+myself):
+
+> I will not automatically get commit access to these repositories — like any
+other community member, I will have to continually submit work of consistent
+quality and put in the time to earn the commit bit. The existing core team will
+have final say on whether or not I get the commit bit — which is as it should
+be!
+
+Exactly. And not only does he understand how the mechanism works, but he's
+[already started](http://neversaw.us/2014/07/13/june-recap/) getting feedback
+from consumers of streams and documenting some of his plans.
+
+In addition to Chris being hired to work full time on Node.js, Joyent has
+recently hired [Julien Gilli](https://github.com/misterdjules) to work full
+time with me on Node. I'm really excited for all of the team to be seeking out
+new contributors, and getting to know Chris and Julien.
+They're both fantastic and highly motivated, and I want to do my best to enable
+them to be successful and join the team. But that's not all: I've been talking
+to other companies who are excited to participate in this model, and in fact
+[Modulus.io](http://modulus.io) themselves are looking to find someone this
+year to work full time on Node.js.
+
+Node.js is bigger than the core team; it's bigger than our community, and we
+are excited to continue to get new contributors, and to enable everyone. So
+while we're working on the project we can't just focus on one area, but instead
+consider the connected system as a whole. How we scale Node, how we scale the
+team, how we scale your contributions, and how we integrate your feedback --
+this is what we have to consider while taking this project forward, together.
diff --git a/locale/uk/blog/community/foundation-benefits-all.md b/locale/uk/blog/community/foundation-benefits-all.md
new file mode 100755
index 0000000000000..a9d20b7da975f
--- /dev/null
+++ b/locale/uk/blog/community/foundation-benefits-all.md
@@ -0,0 +1,90 @@
+---
+title: The Node.js Foundation benefits all
+author: Scott Hammond
+date: 2015-05-15T22:50:46.000Z
+status: publish
+category: Community
+slug: the-nodejs-foundation-benefits-all
+layout: blog-post.hbs
+---
+
+When I joined Joyent last summer I quickly realized that, despite the huge
+success of Node.js in the market and the tireless work of many here at Joyent,
+there were challenges in the project that we needed to address. Through
+discussions with various project contributors, Node.js users, ecosystem
+vendors and the [Node.js Advisory Board](http://nodeadvisoryboard.com), it
+became clear that the best way to address the concerns of all key stakeholders
+(and the best thing for Node.js as a whole) was to establish the Foundation as
+a path for the future.
+
+The biggest and most obvious challenge we sought to address with the
+Foundation was the friction that existed amongst some developers in the
+Node.js community. Historically, leadership ran the project fairly tightly,
+with a small core of developers working in a BDFL model. It was difficult for
+new people to join the project, and there wasn’t enough transparency for such
+a diverse, passionate community to have a sense of ownership. Consequently, a
+group of developers who wanted to operate under a more open governance model
+created the io.js fork. That team has done a great job innovating on
+governance and engagement models, and the Node.js Foundation’s models will be
+based on those policies to ensure broader community engagement in the future
+of Node.js. We welcome community review and feedback on [the draft governance
+documents](https://github.com/joyent/nodejs-advisory-board/tree/master/governance-proposal).
+
+With the recent vote by the io.js TC to join the Node.js Foundation, we took a
+giant leap toward rebuilding a unified community. @mikeal, @piscisaureus and
+others have done an excellent job evangelizing the value of the Foundation,
+and it’s great to see it have such a positive impact this early in its
+formation.
+
+Reunification of the Node.js developer community remains an important goal of
+the Foundation. But to have a successful project, we must also maintain focus
+on addressing the concerns of Node.js users and the ecosystem of vendors. If
+we succeed, Node.js will continue its meteoric rise as the de facto server-side
+JavaScript platform, and everyone wins.
+If we get it wrong, we jeopardize the momentum and critical mass that's driven
+that growth, and everyone loses.
+
+In the user community, enterprise adoption of Node.js has skyrocketed, with an
+abundance of success stories. But behind every successful project is someone
+who is betting their career on the choice to build with Node.js. Their primary
+“ask” is to de-risk the project. They want stable, production-grade code that
+will handle their technical requirements and an LTS plan that matches what they
+get from other software. The Foundation will get that right. Donations to the
+Foundation will provide the resources we need to broaden and automate the
+necessary test suites and expand coverage across a large set of platforms. We
+are working now on codifying the LTS policy (comments welcome
+[here](https://github.com/nodejs/dev-policy/issues/67)) and will establish the
+right 6-9 month release cadence, with rigor on backward compatibility and the
+EOL horizon.
+
+Users also want the project to be insulated from the direction of any single
+company or individual. Putting the project into a foundation insulates it from
+the commercial aspirations of Joyent or any other single company. It also
+facilitates the creation of the vibrant vendor ecosystem around Node.js that
+users want. Users want to see relevant innovation from a strong group of
+contributors and vendors.
+
+The vendors themselves have a clear set of requirements that can best be
+addressed by the Foundation. They want a level playing field and they want to
+know they can monetize the contributions they make to the project. We need a
+vibrant ecosystem to complete the solution for the users of Node.js and drive
+additional value and innovation around the core project. The ecosystem is the
+force multiplier of value for every piece of technology, and Node.js is no
+exception.
+
+Finally, in addition to risk mitigation, transparency, neutrality and an open
+governance model, the Foundation will provide needed resources. Over the past
+few years Joyent and other members of the community have invested thousands of
+hours and millions of dollars into the project, and much has been
+accomplished. Going forward, Joyent will continue to invest aggressively in
+the success and growth of Node.js. But now, with the support of new Foundation
+members, we will be able to do even more. Investments from new members can be
+used to expand coverage of testing harnesses, establish API compatibility
+tests and certifications, extend coverage for additional platforms, underwrite
+travel expenses for technical meetups for core contributors, build training
+programs for users and developers, expand community development efforts, fund
+full-time developers and more.
+
+I’m convinced the Foundation is the best vehicle for balancing the needs of
+Node.js users, vendors and contributors. The project has a brilliant future
+ahead of it and I am more optimistic than ever that we can work together as
+one strong community to secure that future.
diff --git a/locale/uk/blog/community/index.md b/locale/uk/blog/community/index.md new file mode 100755 index 0000000000000..29d5ac3dfd5b7 --- /dev/null +++ b/locale/uk/blog/community/index.md @@ -0,0 +1,6 @@ +--- +title: Community +layout: category-index.hbs +listing: true +robots: noindex, follow +--- diff --git a/locale/uk/blog/community/individual-membership.md b/locale/uk/blog/community/individual-membership.md new file mode 100755 index 0000000000000..9a7443a371207 --- /dev/null +++ b/locale/uk/blog/community/individual-membership.md @@ -0,0 +1,49 @@ +--- +title: Node.js Foundation Individual Membership Now Open +date: 2015-11-04T12:00:00.000Z +status: publish +category: Community +slug: individual-membership-nodejs-foundation +layout: blog-post.hbs +author: mikeal +--- + +The Node.js Foundation is a member-supported organization. To date we've added over 20 corporate members who provide the financial support necessary for the Foundation to thrive. + +With the support of the Linux Foundation we are now able to launch an Individual Membership program. These members will be electing two representatives to the Board of Directors this January who will be +responsible for representing the diverse needs of the Node.js community in the administration of the Node.js Foundation. + +## How do I become a member? + +Membership costs [$100 a year, or $25 for students](https://identity.linuxfoundation.org/pid/99). +Contributors to the Node.js project, including all Working Groups and sub-projects, are eligible for free membership. + +You are required to have a GitHub account to register. + +## Who can run for the board of directors? + +Any registered member. + +Keep in mind that every meeting of the Board must reach quorum in order to pass resolutions, so only people who can make themselves available on a recurring and consistent basis should consider running. + +## What does the Board of Directors do? + +The Board meets every month to approve resolutions and discuss Node.js Foundation administrative matters. This includes legal considerations, budgeting and approving Foundation-led conferences and other initiatives. Technical governance is overseen by the TSC, not the Board of Directors. + +The current board members are listed [here](../../../foundation/board). + +## What are the term lengths? + +The standard term length for those elected by the individual membership is 2 years, with an election each year to select a new representative for a new term. + +However, in the first election two representatives will be elected; the representative with the most votes will be elected for the standard 2 year term and the runner-up will serve a special 1-year term so that in 2017 we can elect a single new director for a 2 year staggered term. + +## When is the election? + +* Nominations are being solicited until January 15th. +* A ballot will be distributed on January 20th. +* The election will be completed by January 30th. + +## How do I run in the 2016 election? + +After you've registered as a member follow the instructions [here](https://github.com/nodejs/membership/issues/12). 
diff --git a/locale/uk/blog/community/next-chapter.md b/locale/uk/blog/community/next-chapter.md
new file mode 100755
index 0000000000000..d69957597278b
--- /dev/null
+++ b/locale/uk/blog/community/next-chapter.md
@@ -0,0 +1,103 @@
+---
+title: Next Chapter
+author: tjfontaine
+date: 2015-05-08T19:00:00.000Z
+status: publish
+category: Community
+slug: next-chapter
+layout: blog-post.hbs
+---
+
+Open source projects are about the software, the users, and the community. Since
+becoming project lead in 2014, I've been privileged to be a part of the most
+passionate, diverse, and vibrant community in the ecosystem. The community is
+responsible for Node.js' meteoric rise and continued adoption by users and
+companies all over the world. Given the strength of its community, I'm confident
+that Node.js is heading in the right direction. With that said, it's time for me
+to step back.
+
+For the past year, I've worked directly with community members to improve
+Node.js, focusing on improving the parts of the project that benefit everyone.
+We wanted to know what in Node.js was working for them and what wasn't. During
+the life of a project, it's crucial to constantly reset yourself and not lose
+sight of your identity. Node.js is a small set of stable core modules, doing one
+thing, and one thing well. With every change we made, we tried to make sure we
+were being true to ourselves and not violating our ethos. We've focused on
+eliminating bugs and critical performance issues, as well as improving our
+workflows. Ultimately, our goal was to ensure Node.js was on the right path.
+
+The formation of the Node.js Foundation couldn't have happened at a better time
+in the life of Node.js. I believe this will be the tipping point that cements
+Node's place in technology. Soon, the foundation will be announcing its first
+meeting, initial membership, and future plans for Node.js. The project is on the
+right path, has the right contributors and is not tied to one person. It has a
+vibrant and loyal community supporting it.
+
+I want to take some time to highlight a few of those who have made an impact on
+Node.js. This list only scratches the surface, but these are a few of the unsung
+contributors that deserve some attention:
+
+Node.js wanted to have a [living, breathing
+site](https://github.com/joyent/node-website), one that could attract our
+community and be the canonical source of documentation and tutorials for
+Node.js. Leading the charge have been [Robert
+Kowalski](https://github.com/robertkowalski) and [Wyatt
+Preul](https://github.com/geek), who have been incredibly helpful to the Node.js
+ecosystem in many ways, but most notably by helping breathe life into the
+website.
+
+One key sign of Node.js' maturity has been its growing predominance
+worldwide. Therefore, we've been working to improve our support for
+internationalization and localization. Node.js is so widely accepted that our
+users need Node.js to support internationalization so they can better support
+their own customers. Luckily, we have [Steven Loomis](https://github.com/srl295)
+leading the charge on this — he has the unique privilege of being a member of
+both ICU and Node.js.
+
+Node.js is seeing adoption across many new platforms, which means we need to
+collaborate with the community to support those platforms.
+Much like we have
+[Alexis Campilla](https://github.com/orangemocha) working to support the Windows
+platform, we have people like [Michael Dawson](https://github.com/mhdawson)
+working on adding support for PowerPC and zSeries. Additionally, he's been able
+to leverage the technical depth of IBM to help squash bugs and do work on our V8
+VM backend.
+
+OpenSSL has had its share of issues recently, but it's not the only dependency
+that can be sensitive to upgrade -- so many thanks go to [James
+Snell](https://github.com/jasnell) for working to help simplify and manage those
+upgrades. James has also been working together with our large, diverse, and
+complex community to make sure our development policies are easy to understand
+and approachable for other new contributors.
+
+Finally, I want to make a very special mention of [Julien
+Gilli](https://github.com/misterdjules), who has been an incredible addition to
+the team. Julien has been responsible for the last few releases of Node.js —
+both the v0.10 and v0.12 branches. He's done wonders for the project, mostly
+behind the scenes, as he has spent tons of time working on shoring up our CI
+environment and the tests we run. Thanks to him, we were able to ship v0.12.0
+with all our tests passing and on all of our supported platforms. This was the
+first Node.js release ever to reach that milestone. He has also been working
+tirelessly to iterate on the process by which the team manages Node.js. Case in
+point is the excellent
+[documentation](https://nodejs.org/documentation/workflow/) he's put together
+describing how to manage the workflow of developing and contributing to the
+project.
+
+In short, hiring Julien to work full time on Node.js has been one of the best
+things for the project. His care and concern for Node.js, its users, and their
+combined future is evident in all of his actions. Node.js is incredibly lucky to
+have him at its core and I am truly indebted to him.
+
+It's because of this strong team and community, and the formation of the
+Foundation, that now is the right time for me to step back. The foundation is
+here, the software is stable, and the contributors pushing it forward are people
+I have a lot of faith in. I can't wait to see just how far Node.js' star will
+rise. I am excited to see how the contributors grow, shape and deliver on the
+promise of Node.js, for themselves and for our users.
+
+Moving forward, I will still remain involved with Node.js and will provide as
+much help and support to the rest of the core team as they need. However, I
+won't have the time to participate at the level needed to remain a core
+contributor. With the core team and the community working together, I know they
+won't miss a step.
+
+
diff --git a/locale/uk/blog/community/node-leaders-building-open-neutral-foundation.md b/locale/uk/blog/community/node-leaders-building-open-neutral-foundation.md
new file mode 100755
index 0000000000000..fb27e9f3dcc51
--- /dev/null
+++ b/locale/uk/blog/community/node-leaders-building-open-neutral-foundation.md
@@ -0,0 +1,126 @@
+---
+title: Node.js and io.js leaders are building an open, neutral Node.js Foundation to support the future of the platform
+author: Mike Dolan
+date: 2015-05-15T23:50:46.000Z
+status: publish
+category: Community
+slug: node-leaders-are-building-an-open-foundation
+layout: blog-post.hbs
+---
+
+Just a couple of months ago, a variety of members of the Node.js and io.js
+community announced they would discuss establishing a neutral foundation for
+the community.
+The Linux Foundation has since been helping guide discussions
+with contributors, developers, users and leaders in these communities,
+increasingly expanding the scope of discussion to more stakeholders. Node.js
+and io.js have a long, complex history, and the facilitated discussions have
+brought together key leaders to focus on what the future might mean for these
+technologies.
+
+A lot of progress has been made in just a few short months, and we're
+entering the final stages of discussions and decisions that will guide the
+projects forward. Most recently, [the io.js TC voted to join in the
+Foundation](https://github.com/nodejs/node/issues/1705) effort and planning is
+already underway to begin the process of converging the codebases. The neutral
+organization, or foundation, will be a key element of that work and has been
+discussed at length by those involved. When a technology and community reach a
+level of maturity and adoption that outgrows one company or project, a
+foundation becomes a critical enabler for ongoing growth.
+
+Foundations can be used to support industrial-scale open source projects that
+require a legal entity to hold assets or conduct business (hiring, internship
+programs, compliance, licensing trademarks, marketing and event services,
+fundraising, etc.). Ultimately foundations enable communities to participate in
+large-scale collaboration under agreed-upon terms that no one company, person
+or entity can change or dictate.
+
+It's important to note that while critical, an open governance model does not
+guarantee success or growth. The io.js project has a strong developer
+community, for example, but to grow further it needs a model to enable funding
+and investments in the project. If you haven't already, please [take a look
+at Mikeal Rogers' blog post](https://medium.com/node-js-javascript/growing-up-27d6cc8b7c53).
+The Node.js community has needed an avenue for other companies
+to participate as equals in a neutral field. Growing a community and widening
+the adoption of a technology takes resources and a governance model that
+serves everyone involved. A foundation becomes the place where participants
+can meet, agree on paths forward, ensure a neutral playing field in the
+community and invest resources to grow the community even more. It can also
+allow for broad community engagement through liberal contribution policies,
+community self-organization and working groups.
+
+At The Linux Foundation, we've helped set up neutral organizations that
+support a variety of open source projects and communities through open and
+neutral governance, and we believe the future is bright for the Node.js and
+io.js communities. The technology being created has incredible value and
+expanding use cases, which is why getting the governance model right and
+defining the role of the Foundation in supporting the developer community is
+the number one priority.
+
+While I'm a relative "newbie" to both the Node.js and io.js communities, I've
+been able to identify with our team at The Linux Foundation a number of
+opportunities, as well as very common challenges in both communities that
+relate to other projects we've helped before. What we've found is that the
+challenges the Node.js and io.js communities face are not unique; many open
+source projects struggle with the same challenges, and many have been
+successful.
+As I've [previously written on
+Linux.com](https://www.linux.com/news/featured-blogs/205-mike-dolan/763051-five-key-features-of-a-project-designed-for-open-collaboration),
+there are five key features that we see in successful open governance:
+
+1. open participation
+2. open, transparent technical decision-making
+3. open design and architecture
+4. an open source license
+5. an open, level playing field for intellectual property.
+
+I think these same features apply to the case for a foundation in the Node.js
+and io.js communities. The io.js project has certainly been founded on many of
+these principles and has taken off in terms of growing its developer
+community. Many in the io.js community joined because they felt these
+principles were not present elsewhere. For all of these reasons, we leveraged
+the governance provisions from io.js to [draft proposals for the technical
+community governance](https://github.com/joyent/nodejs-advisory-board/tree/master/governance-proposal).
+
+Now I'd like to share specific next steps for establishing the Node.js
+Foundation (all of this is of course subject to change based on input from the
+communities). We've started with a core group that offered advice on how to
+address key governance issues. We've expanded the circle to the technical
+committees of both communities and are now taking the discussion to the
+entirety of both communities.
+
+1. Draft technical governance documents are [up for review and
+comment](https://github.com/joyent/nodejs-advisory-board/tree/master/governance-proposal).
+
+2. The Foundation Bylaws and Membership Agreements based on our LF templates are
+available for companies to sign up as members. There is no need to sign any
+agreements as a community developer. If your company is interested in
+participating, [now is the time to sign
+up](http://f.cl.ly/items/0N1m3x0I3S2L203M1h1r/nodejs-foundation-membership-agreement-2015-march-04.pdf).
+
+3. Hold elections for the foundation's Gold and Silver member Board Directors,
+and have the Technical Steering Committee elect a TSC Chair. The process
+typically entails 1 week of nominations, 3-5 days of voting and then announcing
+the election winners.
+
+4. Set up an initial Board meeting, likely mid-June. The first Board meeting will
+put in place all of the key legal documents, policies, operations, etc. that
+are being discussed (the reason for wrapping up edits on May 8).
+
+5. Initiate TSC meetings under the new foundation upon resolution of both
+technical committees. The TSC will meet regularly on open, recorded calls.
+Details will be posted on a foundation wiki or page. The combined io.js and
+Node.js TCs have been meeting roughly every other week to work through the
+[Convergence planning](https://github.com/jasnell/dev-policy/blob/6601ca1cd2886f336ac65ddb3f67d3e741a021c9/convergence.md).
+
+6. May 25 - June 5: Announce the new foundation, members, initial Board Directors
+(elections may be pending), TSC members and any reconciliation plans agreed to
+by the TSC (if ready).
+
+And so I ask both communities to review the ideas being proposed, including
+how best to align goals, align resources and establish a platform for growing
+adoption of an amazing technology that the development community is working to
+build. I would like to thank the people building this future. Some you know;
+others you do not. It takes a lot of personal strength to voice opinions and
+stand up for new ideas in large communities.
+I appreciate the candor of the discussions,
+but I also ask you to seek out those putting forth ideas, to understand them and
+to question them in a constructive dialogue. This community has another decade
+or more ahead of it; now is the time to set the right foundational elements to
+move forward.
diff --git a/locale/uk/blog/community/node-v5.md b/locale/uk/blog/community/node-v5.md
new file mode 100755
index 0000000000000..25df2f57ed11c
--- /dev/null
+++ b/locale/uk/blog/community/node-v5.md
@@ -0,0 +1,54 @@
+---
+title: What You Should Know about Node.js v5 and More
+date: 2015-10-30T12:00:00.000Z
+status: publish
+category: Community
+slug: node-v5
+layout: blog-post.hbs
+---
+
+## There’s Something New with Node.js Releases
+
+We just released [Node.js v5.0.0](https://nodejs.org/en/blog/release/v5.0.0/). You might be thinking to yourself: These folks just released [Node.js v4.2.1](https://nodejs.org/en/blog/release/v4.2.1/) “Argon” under the new Long Term Support (LTS) plan; now I need to download this too? The answer is yes and no.
+
+Node.js is growing, and growing fast. As we continue to innovate quickly, we will focus on two different release lines. One release line will fall under our **LTS** plan. All release lines that have LTS support will have even version numbers and (most importantly) will focus on stability and security. These release lines are for organizations with complex environments that find it cumbersome to continually upgrade. We recently released the first in this line: [Node.js v4.2.1](https://nodejs.org/en/blog/release/v4.2.1/) “Argon.”
+
+The other release line is called **Current**. All Current release lines will have odd version numbers, a shorter lifespan, and more frequent updates to the code. The Current release line will focus on active development of necessary features and refinement of existing APIs. Node.js version 5 is this type of release.
+
+We want to make sure that you are adopting the release that best meets your Node.js needs, so to break it down:
+
+Stay on or upgrade to Node.js v4.2.x if you need stability and have a complex production environment, e.g. you are a medium or large enterprise.
+
+Upgrade to Node.js v5.x if you have the ability to upgrade versions quickly and easily without disturbing your environment.
+
+Now that you have the very basics, let’s take a deeper look at the new features and characteristics of v5, and the benefits and details of our LTS plan.
+
+## Introduction to Node.js v5
+
+[Node.js v5](https://nodejs.org/en/blog/release/v5.0.0/) is an intermediate feature release line that is best suited for users who have an easier time upgrading their Node.js installations, such as developers using the technology for front-end toolchains. This version will be supported for a maximum of only eight months and will be continually updated with new features and better performance; it is not supported under our LTS plan.
+
+The release cadence for v5.x will be more rapid than in the past. Expect a new release once every one to two weeks for v5.x. If upgrading is a challenge for you, we suggest you do not use this release. There will be significant ongoing development. The focus is on getting the releases to users as soon as possible.
+
+npm has been upgraded to v3 in Node.js v5.0.0, which (amongst other changes) will install dependencies as flat as possible in node_modules. v5.0.0 also comes with V8 4.6, which ships the [new.target](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/new.target) and [spread operator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_operator) JavaScript language features. If you want to learn more about the technical details, please check out our [release post](https://nodejs.org/en/blog/release/v5.0.0/).
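+
+As a quick illustration (not part of the original post), here is a minimal sketch of the two language features named above, runnable once you are on Node.js v5 or later:
+
+```javascript
+'use strict';
+
+// Spread operator: expand an array into individual arguments or elements.
+const nums = [3, 41, 7];
+console.log(Math.max(...nums));   // 41
+console.log([0, ...nums, 100]);   // [ 0, 3, 41, 7, 100 ]
+
+// new.target: detect whether a function was invoked with `new`.
+function Widget() {
+  if (new.target === undefined) {
+    throw new Error('Widget must be called with new');
+  }
+}
+
+new Widget();                     // ok
+try {
+  Widget();                       // throws: Widget must be called with new
+} catch (err) {
+  console.log(err.message);
+}
+```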
+
+It’s another top-quality release from us, and we are averaging roughly 50 unique contributors per month to the codebase. We are extremely excited about all the enthusiasm and amazing work that is going into Node.js v5 and future releases.
+
+## What Is Long Term Support and Why Does It Matter to Me?
+
+First and foremost, if you haven’t read the [Essential Steps: Long Term Support (LTS) for Node.js by Rod Vagg](https://medium.com/@nodesource/essential-steps-long-term-support-for-node-js-8ecf7514dbd#.hi7hosy92), Technical Steering Committee Chairperson at the Node.js Foundation and the Chief Node Officer at NodeSource, do so. It’s a very helpful source for understanding our release cycle process. If you only have two minutes now, here is a quick summary:
+
+* The point of establishing an LTS plan for Node.js is to build on top of an existing stable release cycle by delivering new versions on a predictable schedule that have a clearly defined extended support lifecycle. It is an essential requirement for enterprise application development and operations teams. It also affects companies that provide professional support for Node.js.
+
+* As stated above, the first LTS release line is v4 “Argon,” beginning at v4.2.0 and currently standing at v4.2.1. The next LTS release line will begin in 12 months around the first week of October 2016. All LTS release lines will begin at the same time each year.
+
+* All LTS release lines are assigned a “codename” drawn from the names of the elements on the Periodic Table.
+
+* Each LTS release line will be actively maintained for a period of 18 months from the date it begins. After 18 months have passed, it will transition into Maintenance mode.
+
+* There will be no more than two active LTS release lines at any given time. Overlap is intended to help ease migration planning.
+
+* Once a Current release line becomes LTS, no new features or breaking changes will be added to that release. Changes are limited to bug fixes for stability, security updates, possible npm updates, documentation updates and certain performance improvements that can be demonstrated to not break existing applications.
+
+## Questions?
+
+If you have any questions you can always connect with us on our [help](https://github.com/nodejs/help) repository. If you encounter an issue or bug with Node.js v5, please report it to our main code repository [here](https://github.com/nodejs/node/issues).
diff --git a/locale/uk/blog/community/transitions.md b/locale/uk/blog/community/transitions.md
new file mode 100755
index 0000000000000..a6803725bc2f3
--- /dev/null
+++ b/locale/uk/blog/community/transitions.md
@@ -0,0 +1,41 @@
+---
+title: Transitions
+author: Scott Hammond
+date: 2015-05-08T18:00:00.000Z
+status: publish
+category: Community
+slug: transitions
+layout: blog-post.hbs
+---
+
+In February, we announced the [Node.js
+Foundation](https://www.joyent.com/blog/introducing-the-nodejs-foundation),
+which will steward Node.js moving forward and open its future up to the
+community in a fashion that has not been available before.
Organizations like +IBM, SAP, Apigee, F5, Fidelity, Microsoft, PayPal, Red Hat, and others are +sponsoring the Foundation, and they’re adding more contributors to the project. +The mission of the Foundation is to accelerate the adoption of Node and ensure +that the project is driven by the community under a transparent, open governance +model. + +Under the aegis of the Foundation, the Node.js project is entering the next +phase of maturity and adopting a model in which there is no BD or project lead. +Instead, the technical direction of the project will be established by a +technical steering committee run with an open governance model. There has been a +lot of discussion on the dev policies and [governance +model](https://github.com/joyent/nodejs-advisory-board/tree/master/governance-proposal) +on Github. As we move toward the Foundation model, the core team on Node.js is +already adopting some of these policies [as shown +here](https://github.com/joyent/node-website/pull/111). + +As we open a new chapter with the Foundation, we also close a remarkable chapter +in Node.js, as TJ Fontaine will be stepping back from his post as Node.js +Project Lead. TJ has come to be an integral member of our team, and his +contributions will have long-lasting effects on the future of Node.js. Although +he will not be as active, TJ will continue to act as a resource for helping the +Node.js project as needed. + +I would like to thank TJ for his time and contributions to Node.js and to +Joyent. I have witnessed firsthand the kind of impact he can have on a team, and +his technical chops will be missed. As we take this next major step in the +growth of Node.js, we wish TJ luck in his future endeavors. diff --git a/locale/uk/blog/feature/index.md b/locale/uk/blog/feature/index.md new file mode 100755 index 0000000000000..12d3702b0946e --- /dev/null +++ b/locale/uk/blog/feature/index.md @@ -0,0 +1,6 @@ +--- +title: Features +layout: category-index.hbs +listing: true +robots: noindex, follow +--- diff --git a/locale/uk/blog/feature/streams2.md b/locale/uk/blog/feature/streams2.md new file mode 100755 index 0000000000000..b5cc6daf198a0 --- /dev/null +++ b/locale/uk/blog/feature/streams2.md @@ -0,0 +1,854 @@ +--- +title: A New Streaming API for Node v0.10 +author: Isaac Z. Schlueter +date: 2012-12-21T00:45:13.000Z +slug: streams2 +category: feature +layout: blog-post.hbs +--- + +**tl;dr** + +* Node streams are great, except for all the ways in which they're + terrible. +* A new Stream implementation is coming in 0.10, that has gotten the + nickname "streams2". +* Readable streams have a `read()` method that returns a buffer or + null. (More documentation included below.) +* `'data'` events, `pause()`, and `resume()` will still work as before + (except that they'll actully work how you'd expect). +* Old programs will **almost always** work without modification, but + streams start out in a paused state, and need to be read from to be + consumed. +* **WARNING**: If you never add a `'data'` event handler, or call + `resume()`, then it'll sit in a paused state forever and never + emit `'end'`. + +------- + +Throughout the life of Node, we've been gradually iterating on the +ideal event-based API for handling data. Over time, this developed +into the "Stream" interface that you see throughout Node's core +modules and many of the modules in npm. + +Consistent interfaces increase the portability and reliability of our +programs and libraries. 
Overall, the move from domain-specific events +and methods towards a unified stream interface was a huge win. +However, there are still several problems with Node's streams as of +v0.8. In a nutshell: + +1. The `pause()` method doesn't pause. It is advisory-only. In + Node's implementation, this makes things much simpler, but it's + confusing to users, and doesn't do what it looks like it does. +2. `'data'` events come right away (whether you're ready or not). + This makes it unreasonably difficult to do common tasks like load a + user's session before deciding how to handle their request. +3. There is no way to consume a specific number of bytes, and then + leave the rest for some other part of the program to deal with. +4. It's unreasonably difficult to implement streams and get all the + intricacies of pause, resume, write-buffering, and data events + correct. The lack of shared classes mean that we all have to solve + the same problems repeatedly, making similar mistakes and similar + bugs. + +Common simple tasks should be easy, or we aren't doing our job. +People often say that Node is better than most other platforms at this +stuff, but in my opinion, that is less of a compliment and more of an +indictment of the current state of software. Being better than the +next guy isn't enough; we have to be the best imaginable. While they +were a big step in the right direction, the Streams in Node up until +now leave a lot wanting. + +So, just fix it, right? + +Well, we are sitting on the results of several years of explosive +growth in the Node community, so any changes have to be made very +carefully. If we break all the Node programs in 0.10, then no one +will ever want to upgrade to 0.10, and it's all pointless. We had +this conversation around 0.4, then again around 0.6, then again around +0.8. Every time, the conclusion has been "Too much work, too hard to +make backwards-compatible", and we always had more pressing problems +to solve. + +In 0.10, we cannot put it off any longer. We've bitten the bullet and +are making a significant change to the Stream implementation. You may +have seen conversations on twitter or IRC or the mailing list about +"streams2". I also gave [a talk in +November](https://dl.dropbox.com/u/3685/presentations/streams2/streams2-ko.pdf) +about this subject. A lot of node module authors have been involved +with the development of streams2 (and of course the node core team). + +## streams2 + +The feature is described pretty thoroughly in the documentation, so +I'm including it below. Please read it, especially the section on +"compatibility". There's a caveat there that is unfortunately +unavoidable, but hopefully enough of an edge case that it's easily +worked around. + +The first preview release with this change will be 0.9.4. I highly +recommend trying this release and providing feedback before it lands +in a stable version. + +As of writing this post, there are some known performance regressions, +especially in the http module. We are fanatical about maintaining +performance in Node.js, so of course this will have to be fixed before +the v0.10 stable release. (Watch for a future blog post on the tools +and techniques that have been useful in tracking down these issues.) + +There may be minor changes as necessary to fix bugs and improve +performance, but the API at this point should be considered feature +complete. It correctly does all the things we need it to do, it just +doesn't do them quite well enough yet. 
As always, be wary of running +unstable releases in production, of course, but I encourage you to try +it out and see what you think. Especially, if you have tests that you +can run on your modules and libraries, that would be extremely useful +feedback. + +-------- + +# Stream + + Stability: 2 - Unstable + +A stream is an abstract interface implemented by various objects in +Node. For example a request to an HTTP server is a stream, as is +stdout. Streams are readable, writable, or both. All streams are +instances of [EventEmitter][] + +You can load the Stream base classes by doing `require('stream')`. +There are base classes provided for Readable streams, Writable +streams, Duplex streams, and Transform streams. + +## Compatibility + +In earlier versions of Node, the Readable stream interface was +simpler, but also less powerful and less useful. + +* Rather than waiting for you to call the `read()` method, `'data'` + events would start emitting immediately. If you needed to do some + I/O to decide how to handle data, then you had to store the chunks + in some kind of buffer so that they would not be lost. +* The `pause()` method was advisory, rather than guaranteed. This + meant that you still had to be prepared to receive `'data'` events + even when the stream was in a paused state. + +In Node v0.10, the Readable class described below was added. For +backwards compatibility with older Node programs, Readable streams +switch into "old mode" when a `'data'` event handler is added, or when +the `pause()` or `resume()` methods are called. The effect is that, +even if you are not using the new `read()` method and `'readable'` +event, you no longer have to worry about losing `'data'` chunks. + +Most programs will continue to function normally. However, this +introduces an edge case in the following conditions: + +* No `'data'` event handler is added. +* The `pause()` and `resume()` methods are never called. + +For example, consider the following code: + +```javascript +// WARNING! BROKEN! +net.createServer(function(socket) { + + // we add an 'end' method, but never consume the data + socket.on('end', function() { + // It will never get here. + socket.end('I got your message (but didnt read it)\n'); + }); + +}).listen(1337); +``` + +In versions of node prior to v0.10, the incoming message data would be +simply discarded. However, in Node v0.10 and beyond, the socket will +remain paused forever. + +The workaround in this situation is to call the `resume()` method to +trigger "old mode" behavior: + +```javascript +// Workaround +net.createServer(function(socket) { + + socket.on('end', function() { + socket.end('I got your message (but didnt read it)\n'); + }); + + // start the flow of data, discarding it. + socket.resume(); + +}).listen(1337); +``` + +In addition to new Readable streams switching into old-mode, pre-v0.10 +style streams can be wrapped in a Readable class using the `wrap()` +method. + +## Class: stream.Readable + + + +A `Readable Stream` has the following methods, members, and events. + +Note that `stream.Readable` is an abstract class designed to be +extended with an underlying implementation of the `_read(size)` +method. (See below.) + +### new stream.Readable([options]) + +* `options` {Object} + * `highWaterMark` {Number} The maximum number of bytes to store in + the internal buffer before ceasing to read from the underlying + resource. Default=16kb + * `encoding` {String} If specified, then buffers will be decoded to + strings using the specified encoding. 
Default=null + * `objectMode` {Boolean} Whether this stream should behave + as a stream of objects. Meaning that stream.read(n) returns + a single value instead of a Buffer of size n + +In classes that extend the Readable class, make sure to call the +constructor so that the buffering settings can be properly +initialized. + +### readable.\_read(size) + +* `size` {Number} Number of bytes to read asynchronously + +Note: **This function should NOT be called directly.** It should be +implemented by child classes, and called by the internal Readable +class methods only. + +All Readable stream implementations must provide a `_read` method +to fetch data from the underlying resource. + +This method is prefixed with an underscore because it is internal to +the class that defines it, and should not be called directly by user +programs. However, you **are** expected to override this method in +your own extension classes. + +When data is available, put it into the read queue by calling +`readable.push(chunk)`. If `push` returns false, then you should stop +reading. When `_read` is called again, you should start pushing more +data. + +The `size` argument is advisory. Implementations where a "read" is a +single call that returns data can use this to know how much data to +fetch. Implementations where that is not relevant, such as TCP or +TLS, may ignore this argument, and simply provide data whenever it +becomes available. There is no need, for example to "wait" until +`size` bytes are available before calling `stream.push(chunk)`. + +### readable.push(chunk) + +* `chunk` {Buffer | null | String} Chunk of data to push into the read queue +* return {Boolean} Whether or not more pushes should be performed + +Note: **This function should be called by Readable implementors, NOT +by consumers of Readable subclasses.** The `_read()` function will not +be called again until at least one `push(chunk)` call is made. If no +data is available, then you MAY call `push('')` (an empty string) to +allow a future `_read` call, without adding any data to the queue. + +The `Readable` class works by putting data into a read queue to be +pulled out later by calling the `read()` method when the `'readable'` +event fires. + +The `push()` method will explicitly insert some data into the read +queue. If it is called with `null` then it will signal the end of the +data. + +In some cases, you may be wrapping a lower-level source which has some +sort of pause/resume mechanism, and a data callback. In those cases, +you could wrap the low-level source object by doing something like +this: + +```javascript +// source is an object with readStop() and readStart() methods, +// and an `ondata` member that gets called when it has data, and +// an `onend` member that gets called when the data is over. + +var stream = new Readable(); + +source.ondata = function(chunk) { + // if push() returns false, then we need to stop reading from source + if (!stream.push(chunk)) + source.readStop(); +}; + +source.onend = function() { + stream.push(null); +}; + +// _read will be called when the stream wants to pull more data in +// the advisory size argument is ignored in this case. +stream._read = function(n) { + source.readStart(); +}; +``` + +### readable.unshift(chunk) + +* `chunk` {Buffer | null | String} Chunk of data to unshift onto the read queue +* return {Boolean} Whether or not more pushes should be performed + +This is the corollary of `readable.push(chunk)`. 
Rather than putting +the data at the *end* of the read queue, it puts it at the *front* of +the read queue. + +This is useful in certain use-cases where a stream is being consumed +by a parser, which needs to "un-consume" some data that it has +optimistically pulled out of the source. + +```javascript +// A parser for a simple data protocol. +// The "header" is a JSON object, followed by 2 \n characters, and +// then a message body. +// +// Note: This can be done more simply as a Transform stream. See below. + +function SimpleProtocol(source, options) { + if (!(this instanceof SimpleProtocol)) + return new SimpleProtocol(options); + + Readable.call(this, options); + this._inBody = false; + this._sawFirstCr = false; + + // source is a readable stream, such as a socket or file + this._source = source; + + var self = this; + source.on('end', function() { + self.push(null); + }); + + // give it a kick whenever the source is readable + // read(0) will not consume any bytes + source.on('readable', function() { + self.read(0); + }); + + this._rawHeader = []; + this.header = null; +} + +SimpleProtocol.prototype = Object.create( + Readable.prototype, { constructor: { value: SimpleProtocol }}); + +SimpleProtocol.prototype._read = function(n) { + if (!this._inBody) { + var chunk = this._source.read(); + + // if the source doesn't have data, we don't have data yet. + if (chunk === null) + return this.push(''); + + // check if the chunk has a \n\n + var split = -1; + for (var i = 0; i < chunk.length; i++) { + if (chunk[i] === 10) { // '\n' + if (this._sawFirstCr) { + split = i; + break; + } else { + this._sawFirstCr = true; + } + } else { + this._sawFirstCr = false; + } + } + + if (split === -1) { + // still waiting for the \n\n + // stash the chunk, and try again. + this._rawHeader.push(chunk); + this.push(''); + } else { + this._inBody = true; + var h = chunk.slice(0, split); + this._rawHeader.push(h); + var header = Buffer.concat(this._rawHeader).toString(); + try { + this.header = JSON.parse(header); + } catch (er) { + this.emit('error', new Error('invalid simple protocol data')); + return; + } + // now, because we got some extra data, unshift the rest + // back into the read queue so that our consumer will see it. + var b = chunk.slice(split); + this.unshift(b); + + // and let them know that we are done parsing the header. + this.emit('header', this.header); + } + } else { + // from there on, just provide the data to our consumer. + // careful not to push(null), since that would indicate EOF. + var chunk = this._source.read(); + if (chunk) this.push(chunk); + } +}; + +// Usage: +var parser = new SimpleProtocol(source); +// Now parser is a readable stream that will emit 'header' +// with the parsed header data. +``` + +### readable.wrap(stream) + +* `stream` {Stream} An "old style" readable stream + +If you are using an older Node library that emits `'data'` events and +has a `pause()` method that is advisory only, then you can use the +`wrap()` method to create a Readable stream that uses the old stream +as its data source. + +For example: + +```javascript +var OldReader = require('./old-api-module.js').OldReader; +var oreader = new OldReader; +var Readable = require('stream').Readable; +var myReader = new Readable().wrap(oreader); + +myReader.on('readable', function() { + myReader.read(); // etc. +}); +``` + +### Event: 'readable' + +When there is data ready to be consumed, this event will fire. + +When this event emits, call the `read()` method to consume the data. 
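
For concreteness, here is a minimal consumption loop using just the `'readable'` event and `read()`. This is my own sketch rather than part of the original docs, and it assumes a file named `data.txt` exists:

```javascript
var fs = require('fs');

// In v0.10, fs.createReadStream() returns a streams2-style Readable.
var file = fs.createReadStream('data.txt');
var total = 0;

file.on('readable', function() {
  // Drain whatever is buffered right now; read() returns null
  // once the internal buffer is empty.
  var chunk;
  while (null !== (chunk = file.read())) {
    total += chunk.length;
  }
});

file.on('end', function() {
  console.log('read ' + total + ' bytes');
});
```
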
+ +### Event: 'end' + +Emitted when the stream has received an EOF (FIN in TCP terminology). +Indicates that no more `'data'` events will happen. If the stream is +also writable, it may be possible to continue writing. + +### Event: 'data' + +The `'data'` event emits either a `Buffer` (by default) or a string if +`setEncoding()` was used. + +Note that adding a `'data'` event listener will switch the Readable +stream into "old mode", where data is emitted as soon as it is +available, rather than waiting for you to call `read()` to consume it. + +### Event: 'error' + +Emitted if there was an error receiving data. + +### Event: 'close' + +Emitted when the underlying resource (for example, the backing file +descriptor) has been closed. Not all streams will emit this. + +### readable.setEncoding(encoding) + +Makes the `'data'` event emit a string instead of a `Buffer`. `encoding` +can be `'utf8'`, `'utf16le'` (`'ucs2'`), `'ascii'`, or `'hex'`. + +The encoding can also be set by specifying an `encoding` field to the +constructor. + +### readable.read([size]) + +* `size` {Number | null} Optional number of bytes to read. +* Return: {Buffer | String | null} + +Note: **This function SHOULD be called by Readable stream users.** + +Call this method to consume data once the `'readable'` event is +emitted. + +The `size` argument will set a minimum number of bytes that you are +interested in. If not set, then the entire content of the internal +buffer is returned. + +If there is no data to consume, or if there are fewer bytes in the +internal buffer than the `size` argument, then `null` is returned, and +a future `'readable'` event will be emitted when more is available. + +Calling `stream.read(0)` will always return `null`, and will trigger a +refresh of the internal buffer, but otherwise be a no-op. + +### readable.pipe(destination, [options]) + +* `destination` {Writable Stream} +* `options` {Object} Optional + * `end` {Boolean} Default=true + +Connects this readable stream to `destination` WriteStream. Incoming +data on this stream gets written to `destination`. Properly manages +back-pressure so that a slow destination will not be overwhelmed by a +fast readable stream. + +This function returns the `destination` stream. + +For example, emulating the Unix `cat` command: + + process.stdin.pipe(process.stdout); + +By default `end()` is called on the destination when the source stream +emits `end`, so that `destination` is no longer writable. Pass `{ end: +false }` as `options` to keep the destination stream open. + +This keeps `writer` open so that "Goodbye" can be written at the +end. + + reader.pipe(writer, { end: false }); + reader.on("end", function() { + writer.end("Goodbye\n"); + }); + +Note that `process.stderr` and `process.stdout` are never closed until +the process exits, regardless of the specified options. + +### readable.unpipe([destination]) + +* `destination` {Writable Stream} Optional + +Undo a previously established `pipe()`. If no destination is +provided, then all previously established pipes are removed. + +### readable.pause() + +Switches the readable stream into "old mode", where data is emitted +using a `'data'` event rather than being buffered for consumption via +the `read()` method. + +Ceases the flow of data. No `'data'` events are emitted while the +stream is in a paused state. + +### readable.resume() + +Switches the readable stream into "old mode", where data is emitted +using a `'data'` event rather than being buffered for consumption via +the `read()` method. 
+ +Resumes the incoming `'data'` events after a `pause()`. + + +## Class: stream.Writable + + + +A `Writable` Stream has the following methods, members, and events. + +Note that `stream.Writable` is an abstract class designed to be +extended with an underlying implementation of the +`_write(chunk, encoding, cb)` method. (See below.) + +### new stream.Writable([options]) + +* `options` {Object} + * `highWaterMark` {Number} Buffer level when `write()` starts + returning false. Default=16kb + * `decodeStrings` {Boolean} Whether or not to decode strings into + Buffers before passing them to `_write()`. Default=true + +In classes that extend the Writable class, make sure to call the +constructor so that the buffering settings can be properly +initialized. + +### writable.\_write(chunk, encoding, callback) + +* `chunk` {Buffer | String} The chunk to be written. Will always + be a buffer unless the `decodeStrings` option was set to `false`. +* `encoding` {String} If the chunk is a string, then this is the + encoding type. Ignore chunk is a buffer. Note that chunk will + **always** be a buffer unless the `decodeStrings` option is + explicitly set to `false`. +* `callback` {Function} Call this function (optionally with an error + argument) when you are done processing the supplied chunk. + +All Writable stream implementations must provide a `_write` method to +send data to the underlying resource. + +Note: **This function MUST NOT be called directly.** It should be +implemented by child classes, and called by the internal Writable +class methods only. + +Call the callback using the standard `callback(error)` pattern to +signal that the write completed successfully or with an error. + +If the `decodeStrings` flag is set in the constructor options, then +`chunk` may be a string rather than a Buffer, and `encoding` will +indicate the sort of string that it is. This is to support +implementations that have an optimized handling for certain string +data encodings. If you do not explicitly set the `decodeStrings` +option to `false`, then you can safely ignore the `encoding` argument, +and assume that `chunk` will always be a Buffer. + +This method is prefixed with an underscore because it is internal to +the class that defines it, and should not be called directly by user +programs. However, you **are** expected to override this method in +your own extension classes. + + +### writable.write(chunk, [encoding], [callback]) + +* `chunk` {Buffer | String} Data to be written +* `encoding` {String} Optional. If `chunk` is a string, then encoding + defaults to `'utf8'` +* `callback` {Function} Optional. Called when this chunk is + successfully written. +* Returns {Boolean} + +Writes `chunk` to the stream. Returns `true` if the data has been +flushed to the underlying resource. Returns `false` to indicate that +the buffer is full, and the data will be sent out in the future. The +`'drain'` event will indicate when the buffer is empty again. + +The specifics of when `write()` will return false, is determined by +the `highWaterMark` option provided to the constructor. + +### writable.end([chunk], [encoding], [callback]) + +* `chunk` {Buffer | String} Optional final data to be written +* `encoding` {String} Optional. If `chunk` is a string, then encoding + defaults to `'utf8'` +* `callback` {Function} Optional. Called when the final chunk is + successfully written. + +Call this method to signal the end of the data being written to the +stream. 
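
Putting the pieces above together, here is a tiny illustrative Writable implementation -- a sketch of my own, not part of the docs -- that just counts the bytes written to it:

```javascript
var Writable = require('stream').Writable;

function ByteCounter(options) {
  if (!(this instanceof ByteCounter))
    return new ByteCounter(options);
  Writable.call(this, options);
  this.bytes = 0;
}

ByteCounter.prototype = Object.create(
  Writable.prototype, { constructor: { value: ByteCounter }});

ByteCounter.prototype._write = function(chunk, encoding, callback) {
  // chunk is a Buffer here, since decodeStrings defaults to true.
  this.bytes += chunk.length;
  callback();
};

// Usage:
var counter = new ByteCounter();
counter.on('finish', function() {
  console.log('wrote ' + counter.bytes + ' bytes');
});
counter.write('hello ');
counter.end('world');
```
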
+ +### Event: 'drain' + +Emitted when the stream's write queue empties and it's safe to write +without buffering again. Listen for it when `stream.write()` returns +`false`. + +### Event: 'close' + +Emitted when the underlying resource (for example, the backing file +descriptor) has been closed. Not all streams will emit this. + +### Event: 'finish' + +When `end()` is called and there are no more chunks to write, this +event is emitted. + +### Event: 'pipe' + +* `source` {Readable Stream} + +Emitted when the stream is passed to a readable stream's pipe method. + +### Event 'unpipe' + +* `source` {Readable Stream} + +Emitted when a previously established `pipe()` is removed using the +source Readable stream's `unpipe()` method. + +## Class: stream.Duplex + + + +A "duplex" stream is one that is both Readable and Writable, such as a +TCP socket connection. + +Note that `stream.Duplex` is an abstract class designed to be +extended with an underlying implementation of the `_read(size)` +and `_write(chunk, encoding, callback)` methods as you would with a Readable or +Writable stream class. + +Since JavaScript doesn't have multiple prototypal inheritance, this +class prototypally inherits from Readable, and then parasitically from +Writable. It is thus up to the user to implement both the lowlevel +`_read(n)` method as well as the lowlevel `_write(chunk, encoding, cb)` method +on extension duplex classes. + +### new stream.Duplex(options) + +* `options` {Object} Passed to both Writable and Readable + constructors. Also has the following fields: + * `allowHalfOpen` {Boolean} Default=true. If set to `false`, then + the stream will automatically end the readable side when the + writable side ends and vice versa. + +In classes that extend the Duplex class, make sure to call the +constructor so that the buffering settings can be properly +initialized. + +## Class: stream.Transform + +A "transform" stream is a duplex stream where the output is causally +connected in some way to the input, such as a zlib stream or a crypto +stream. + +There is no requirement that the output be the same size as the input, +the same number of chunks, or arrive at the same time. For example, a +Hash stream will only ever have a single chunk of output which is +provided when the input is ended. A zlib stream will either produce +much smaller or much larger than its input. + +Rather than implement the `_read()` and `_write()` methods, Transform +classes must implement the `_transform()` method, and may optionally +also implement the `_flush()` method. (See below.) + +### new stream.Transform([options]) + +* `options` {Object} Passed to both Writable and Readable + constructors. + +In classes that extend the Transform class, make sure to call the +constructor so that the buffering settings can be properly +initialized. + +### transform.\_transform(chunk, encoding, callback) + +* `chunk` {Buffer | String} The chunk to be transformed. Will always + be a buffer unless the `decodeStrings` option was set to `false`. +* `encoding` {String} If the chunk is a string, then this is the + encoding type. (Ignore if `decodeStrings` chunk is a buffer.) +* `callback` {Function} Call this function (optionally with an error + argument) when you are done processing the supplied chunk. + +Note: **This function MUST NOT be called directly.** It should be +implemented by child classes, and called by the internal Transform +class methods only. 
+ +All Transform stream implementations must provide a `_transform` +method to accept input and produce output. + +`_transform` should do whatever has to be done in this specific +Transform class, to handle the bytes being written, and pass them off +to the readable portion of the interface. Do asynchronous I/O, +process things, and so on. + +Call `transform.push(outputChunk)` 0 or more times to generate output +from this input chunk, depending on how much data you want to output +as a result of this chunk. + +Call the callback function only when the current chunk is completely +consumed. Note that there may or may not be output as a result of any +particular input chunk. + +This method is prefixed with an underscore because it is internal to +the class that defines it, and should not be called directly by user +programs. However, you **are** expected to override this method in +your own extension classes. + +### transform.\_flush(callback) + +* `callback` {Function} Call this function (optionally with an error + argument) when you are done flushing any remaining data. + +Note: **This function MUST NOT be called directly.** It MAY be implemented +by child classes, and if so, will be called by the internal Transform +class methods only. + +In some cases, your transform operation may need to emit a bit more +data at the end of the stream. For example, a `Zlib` compression +stream will store up some internal state so that it can optimally +compress the output. At the end, however, it needs to do the best it +can with what is left, so that the data will be complete. + +In those cases, you can implement a `_flush` method, which will be +called at the very end, after all the written data is consumed, but +before emitting `end` to signal the end of the readable side. Just +like with `_transform`, call `transform.push(chunk)` zero or more +times, as appropriate, and call `callback` when the flush operation is +complete. + +This method is prefixed with an underscore because it is internal to +the class that defines it, and should not be called directly by user +programs. However, you **are** expected to override this method in +your own extension classes. + +### Example: `SimpleProtocol` parser + +The example above of a simple protocol parser can be implemented much +more simply by using the higher level `Transform` stream class. + +In this example, rather than providing the input as an argument, it +would be piped into the parser, which is a more idiomatic Node stream +approach. + +```javascript +function SimpleProtocol(options) { + if (!(this instanceof SimpleProtocol)) + return new SimpleProtocol(options); + + Transform.call(this, options); + this._inBody = false; + this._sawFirstCr = false; + this._rawHeader = []; + this.header = null; +} + +SimpleProtocol.prototype = Object.create( + Transform.prototype, { constructor: { value: SimpleProtocol }}); + +SimpleProtocol.prototype._transform = function(chunk, encoding, done) { + if (!this._inBody) { + // check if the chunk has a \n\n + var split = -1; + for (var i = 0; i < chunk.length; i++) { + if (chunk[i] === 10) { // '\n' + if (this._sawFirstCr) { + split = i; + break; + } else { + this._sawFirstCr = true; + } + } else { + this._sawFirstCr = false; + } + } + + if (split === -1) { + // still waiting for the \n\n + // stash the chunk, and try again. 
+ this._rawHeader.push(chunk); + } else { + this._inBody = true; + var h = chunk.slice(0, split); + this._rawHeader.push(h); + var header = Buffer.concat(this._rawHeader).toString(); + try { + this.header = JSON.parse(header); + } catch (er) { + this.emit('error', new Error('invalid simple protocol data')); + return; + } + // and let them know that we are done parsing the header. + this.emit('header', this.header); + + // now, because we got some extra data, emit this first. + this.push(b); + } + } else { + // from there on, just provide the data to our consumer as-is. + this.push(b); + } + done(); +}; + +var parser = new SimpleProtocol(); +source.pipe(parser) + +// Now parser is a readable stream that will emit 'header' +// with the parsed header data. +``` + + +## Class: stream.PassThrough + +This is a trivial implementation of a `Transform` stream that simply +passes the input bytes across to the output. Its purpose is mainly +for examples and testing, but there are occasionally use cases where +it can come in handy. + + +[EventEmitter]: https://nodejs.org/api/events.html#events_class_eventemitter diff --git a/locale/uk/blog/index.md b/locale/uk/blog/index.md new file mode 100755 index 0000000000000..24248de1867d0 --- /dev/null +++ b/locale/uk/blog/index.md @@ -0,0 +1,4 @@ +--- +layout: blog-index.hbs +paginate: blog +--- diff --git a/locale/uk/blog/module/index.md b/locale/uk/blog/module/index.md new file mode 100755 index 0000000000000..613857450ea6d --- /dev/null +++ b/locale/uk/blog/module/index.md @@ -0,0 +1,6 @@ +--- +title: Modules +layout: category-index.hbs +listing: true +robots: noindex, follow +--- diff --git a/locale/uk/blog/module/multi-server-continuous-deployment-with-fleet.md b/locale/uk/blog/module/multi-server-continuous-deployment-with-fleet.md new file mode 100755 index 0000000000000..5614bc10a8849 --- /dev/null +++ b/locale/uk/blog/module/multi-server-continuous-deployment-with-fleet.md @@ -0,0 +1,92 @@ +--- +title: multi-server continuous deployment with fleet +author: Isaac Schlueter +date: 2012-05-02T18:00:00.000Z +status: publish +category: module +slug: multi-server-continuous-deployment-with-fleet +layout: blog-post.hbs +--- + +

This is a guest post by James "SubStack" Halliday, originally posted on his blog, and reposted here with permission.

+ +

Writing applications as a sequence of tiny services that all talk to each other over the network has many upsides, but it can be annoyingly tedious to get all the subsystems up and running.

+ +

Running a seaport can help with getting all the services to talk to each other, but running the processes is another matter, especially when you have new code to push into production.

+ +

fleet aims to make it really easy for anyone on your team to push new code from git to an armada of servers and manage all the processes in your stack.

+ +

To start using fleet, just install the fleet command with npm:

+ +
npm install -g fleet 
+ +

Then on one of your servers, start a fleet hub. From a fresh directory, give it a passphrase and a port to listen on:

+ +
fleet hub --port=7000 --secret=beepboop 
+ +

Now fleet is listening on :7000 for commands and has started a git server on :7001 over http. There are no ssh keys or post-commit hooks to configure; just run that command and you're ready to go!

+ +

Next set up some worker drones to run your processes. You can have as many workers as you like on a single server but each worker should be run from a separate directory. Just do:

+ +
fleet drone --hub=x.x.x.x:7000 --secret=beepboop 
+ +

where x.x.x.x is the address where the fleet hub is running. Spin up a few of these drones.

+ +

Now navigate to the directory of the app you want to deploy. First set a remote so you don't need to type --hub and --secret all the time.

+ +
fleet remote add default --hub=x.x.x.x:7000 --secret=beepboop 
+ +

Fleet just created a fleet.json file for you to save your settings.

+ +

From the same app directory, to deploy your code just do:

+ +
fleet deploy 
+ +

The deploy command does a git push to the fleet hub's git http server and then the hub instructs all the drones to pull from it. Your code gets checked out into a new directory on all the fleet drones every time you deploy.

+ +

Because fleet is designed specifically for managing applications with lots of tiny services, the deploy command isn't tied to running any processes. Starting processes is up to the programmer but it's super simple. Just use the fleet spawn command:

+ +
fleet spawn -- node server.js 8080 
+ +

By default fleet picks a drone at random to run the process on. You can specify which drone you want to run a particular process on with the --drone switch if it matters.

+ +

Start a few processes across all your worker drones and then show what is running with the fleet ps command:

+ +
fleet ps
+drone#3dfe17b8
+├─┬ pid#1e99f4
+│ ├── status:   running
+│ ├── commit:   webapp/1b8050fcaf8f1b02b9175fcb422644cb67dc8cc5
+│ └── command:  node server.js 8888
+└─┬ pid#d7048a
+  ├── status:   running
+  ├── commit:   webapp/1b8050fcaf8f1b02b9175fcb422644cb67dc8cc5
+  └── command:  node server.js 8889
+ +

Now suppose that you have new code to push out into production. By default, fleet lets you spin up new services without disturbing your existing services. If you fleet deploy again after checking in some new changes to git, the next time you fleet spawn a new process, that process will be spun up in a completely new directory based on the git commit hash. To stop a process, just use fleet stop.

+ +

This approach lets you verify that the new services work before bringing down the old services. You can even start experimenting with heterogeneous and incremental deployment by hooking into a custom http proxy!

+ +

Even better, if you use a service registry like seaport for managing the host/port tables, you can spin up new ad-hoc staging clusters all the time without disrupting the normal operation of your site before rolling out new code to users.

+ +

Fleet has many more commands that you can learn about with its git-style manpage-based help system! Just do fleet help to get a list of all the commands you can run.

+ +
fleet help
+Usage: fleet <command> [<args>]
+
+The commands are:
+  deploy   Push code to drones.
+  drone    Connect to a hub as a worker.
+  exec     Run commands on drones.
+  hub      Create a hub for drones to connect.
+  monitor  Show service events system-wide.
+  ps       List the running processes on the drones.
+  remote   Manage the set of remote hubs.
+  spawn    Run services on drones.
+  stop     Stop processes running on drones.
+
+For help about a command, try `fleet help <command>`.
+ +

npm install -g fleet and check out the code on github!

+ + diff --git a/locale/uk/blog/module/service-logging-in-json-with-bunyan.md b/locale/uk/blog/module/service-logging-in-json-with-bunyan.md new file mode 100755 index 0000000000000..2fd602a139f7f --- /dev/null +++ b/locale/uk/blog/module/service-logging-in-json-with-bunyan.md @@ -0,0 +1,340 @@ +--- +title: Service logging in JSON with Bunyan +author: trentmick +date: 2012-03-28T19:25:26.000Z +status: publish +category: module +slug: service-logging-in-json-with-bunyan +layout: blog-post.hbs +--- + +
+_Paul Bunyan and Babe the Blue Ox. Photo by Paul Carroll._
+ +

Service logs are gold, if you can mine them. We scan them for occasional debugging. Perhaps we grep them looking for errors or warnings, or set up an occasional nagios log regex monitor. If that. This is a waste of the best channel for data about a service.

+ +

"Log. (Huh) What is it good for. Absolutely ..."

+ + + +

These are what logs are good for. The current state of logging is barely adequate for the first of these. Doing reliable analysis, and even monitoring, of varied "printf-style" logs is a grueling or hacky task that most either don't bother with, fall back to paying someone else to do (viz. Splunk's great successes), or, for web sites, punt and use the plethora of JavaScript-based web analytics tools.

+ +

Let's log in JSON. Let's format log records with a filter outside the app. Let's put more info in log records by not shoehorning into a printf-message. Debuggability can be improved. Monitoring and analysis can definitely be improved. Let's not write another regex-based parser, and use the time we've saved writing tools to collate logs from multiple nodes and services, to query structured logs (from all services, not just web servers), etc.

+ +

At Joyent we use node.js for running many core services -- loosely coupled through HTTP REST APIs and/or AMQP. In this post I'll draw on experiences from my work on Joyent's SmartDataCenter product and observations of Joyent Cloud operations to suggest some improvements to service logging. I'll show the (open source) Bunyan logging library and tool that we're developing to improve the logging toolchain.

+ +

## Current State of Log Formatting

+ +
# apache access log
+10.0.1.22 - - [15/Oct/2010:11:46:46 -0700] "GET /favicon.ico HTTP/1.1" 404 209
+fe80::6233:4bff:fe29:3173 - - [15/Oct/2010:11:46:58 -0700] "GET / HTTP/1.1" 200 44
+
+# apache error log
+[Fri Oct 15 11:46:46 2010] [error] [client 10.0.1.22] File does not exist: /Library/WebServer/Documents/favicon.ico
+[Fri Oct 15 11:46:58 2010] [error] [client fe80::6233:4bff:fe29:3173] File does not exist: /Library/WebServer/Documents/favicon.ico
+
+# Mac /var/log/secure.log
+Oct 14 09:20:56 banana loginwindow[41]: in pam_sm_authenticate(): Failed to determine Kerberos principal name.
+Oct 14 12:32:20 banana com.apple.SecurityServer[25]: UID 501 authenticated as user trentm (UID 501) for right 'system.privilege.admin'
+
+# an internal joyent agent log
+[2012-02-07 00:37:11.898] [INFO] AMQPAgent - Publishing success.
+[2012-02-07 00:37:11.910] [DEBUG] AMQPAgent - { req_id: '8afb8d99-df8e-4724-8535-3d52adaebf25',
+  timestamp: '2012-02-07T00:37:11.898Z',
+
+# typical expressjs log output
+[Mon, 21 Nov 2011 20:52:11 GMT] 200 GET /foo (1ms)
+Blah, some other unstructured output to from a console.log call.
+
+ +

What're we doing here? Five logs at random. Five different date formats. As Paul Querna points out we haven't improved log parsability in 20 years. Parsability is enemy number one. You can't use your logs until you can parse the records, and faced with the above the inevitable solution is a one-off regular expression.

+ +

The current state of the art is various parsing libs, analysis tools and homebrew scripts ranging from grep to Perl, whose scope is limited to a few niche log formats.

+ +

## JSON for Logs

+ +

JSON.parse() solves all that. Let's log in JSON. But it means a change in thinking: The first-level audience for log files shouldn't be a person, but a machine.

+ +

That is not said lightly. The "Unix Way" of small focused tools lightly coupled with text output is important. JSON is less "text-y" than, e.g., Apache common log format. JSON makes grep and awk awkward. Using less directly on a log is handy.

+ +

But not handy enough. That 80's pastel jumpsuit awkwardness you're feeling isn't the JSON, it's your tools. Time to find a json tool -- json is one, bunyan described below is another one. Time to learn your JSON library instead of your regex library: JavaScript, Python, Ruby, Java, Perl.

+ +

Time to burn your log4j Layout classes and move formatting to the tools side. Creating a log message with semantic information and throwing that away to make a string is silly. The win at being able to trivially parse log records is huge. The possibilities at being able to add ad hoc structured information to individual log records is interesting: think program state metrics, think feeding to Splunk, or loggly, think easy audit logs.

+ +

## Introducing Bunyan

+ +

Bunyan is a node.js module for logging in JSON and a bunyan CLI tool to view those logs.

+ +

Logging with Bunyan basically looks like this:

+ +
$ cat hi.js
+var Logger = require('bunyan');
+var log = new Logger({name: 'hello' /*, ... */});
+log.info("hi %s", "paul");
+
+ +

And you'll get a log record like this:

+ +
$ node hi.js
+{"name":"hello","hostname":"banana.local","pid":40026,"level":30,"msg":"hi paul","time":"2012-03-28T17:25:37.050Z","v":0}
+
+ +

Pipe that through the bunyan tool that is part of the "node-bunyan" install to get more readable output:

+ +
$ node hi.js | ./node_modules/.bin/bunyan       # formatted text output
+[2012-02-07T18:50:18.003Z]  INFO: hello/40026 on banana.local: hi paul
+
+$ node hi.js | ./node_modules/.bin/bunyan -j    # indented JSON output
+{
+  "name": "hello",
+  "hostname": "banana.local",
+  "pid": 40087,
+  "level": 30,
+  "msg": "hi paul",
+  "time": "2012-03-28T17:26:38.431Z",
+  "v": 0
+}
+
+ +

Bunyan is log4j-like: create a Logger with a name, call log.info(...), etc. However it has no intention of reproducing much of the functionality of log4j. IMO, much of that is overkill for the types of services you'll tend to be writing with node.js.

+ +

## Longer Bunyan Example

+ +

Let's walk through a bigger example to show some interesting things in Bunyan. We'll create a very small "Hello API" server using the excellent restify library -- which we used heavily here at Joyent. (Bunyan doesn't require restify at all, you can easily use Bunyan with Express or whatever.)

+ +

You can follow along in https://github.com/trentm/hello-json-logging if you like. Note that I'm using the current HEAD of the bunyan and restify trees here, so details might change a bit. Prerequisite: a node 0.6.x installation.

+ +
git clone https://github.com/trentm/hello-json-logging.git
+cd hello-json-logging
+make
+
+ +

### Bunyan Logger

+ +

Our server first creates a Bunyan logger:

+ +
var Logger = require('bunyan');
+var log = new Logger({
+  name: 'helloapi',
+  streams: [
+    {
+      stream: process.stdout,
+      level: 'debug'
+    },
+    {
+      path: 'hello.log',
+      level: 'trace'
+    }
+  ],
+  serializers: {
+    req: Logger.stdSerializers.req,
+    res: restify.bunyan.serializers.response,
+  },
+});
+
+ +

Every Bunyan logger must have a name. Unlike log4j, this is not a hierarchical dotted namespace. It is just a name field for the log records.

+ +

Every Bunyan logger has one or more streams, to which log records are written. Here we've defined two: logging at DEBUG level and above is written to stdout, and logging at TRACE and above is appended to 'hello.log'.

+ +

Bunyan has the concept of serializers: a registry of functions that know how to convert a JavaScript object for a certain log record field to a nice JSON representation for logging. For example, here we register the Logger.stdSerializers.req function to convert HTTP Request objects (using the field name "req") to JSON. More on serializers later.
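
As an illustration (my own sketch, not from the post), a serializer is just a function from a field value to a plain object that is safe to JSON-encode. A hypothetical "user" serializer could be registered alongside the standard "req" one:

```javascript
var Logger = require('bunyan');

// Hypothetical serializer: log only the fields we care about,
// never the whole (possibly large or sensitive) user object.
function userSerializer(user) {
  return { id: user.id, name: user.name };
}

var log = new Logger({
  name: 'helloapi',
  serializers: {
    req: Logger.stdSerializers.req,
    user: userSerializer
  }
});

// Any record logged with a "user" field is run through userSerializer:
log.info({ user: { id: 42, name: 'paul', password: 'hunter2' } }, 'logged in');
```
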

+ +

### Restify Server

+ +

Restify 1.x and above has bunyan support baked in. You pass in your Bunyan logger like this:

+ +
var server = restify.createServer({
+  name: 'Hello API',
+  log: log   // Pass our logger to restify.
+});
+
+ +

Our simple API will have a single GET /hello?name=NAME endpoint:

+ +
server.get({path: '/hello', name: 'SayHello'}, function(req, res, next) {
+  var caller = req.params.name || 'caller';
+  req.log.debug('caller is "%s"', caller);
+  res.send({"hello": caller});
+  return next();
+});
+
+ +

If we run that, node server.js, and call the endpoint, we get the expected restify response:

+ +
$ curl -iSs http://0.0.0.0:8080/hello?name=paul
+HTTP/1.1 200 OK
+Access-Control-Allow-Origin: *
+Access-Control-Allow-Headers: Accept, Accept-Version, Content-Length, Content-MD5, Content-Type, Date, X-Api-Version
+Access-Control-Expose-Headers: X-Api-Version, X-Request-Id, X-Response-Time
+Server: Hello API
+X-Request-Id: f6aaf942-c60d-4c72-8ddd-bada459db5e3
+Access-Control-Allow-Methods: GET
+Connection: close
+Content-Length: 16
+Content-MD5: Xmn3QcFXaIaKw9RPUARGBA==
+Content-Type: application/json
+Date: Tue, 07 Feb 2012 19:12:35 GMT
+X-Response-Time: 4
+
+{"hello":"paul"}
+
+ +

### Setup Server Logging

+ +

Let's add two things to our server. First, we'll use `server.pre` to hook into restify's request handling before routing, where we'll log the request.

+ +
server.pre(function (request, response, next) {
+  request.log.info({req: request}, 'start');        // (1)
+  return next();
+});
+
+ +

This is the first time we've seen this log.info style with an object as the first argument. Bunyan logging methods (log.trace, log.debug, ...) all support an optional first object argument with extra log record fields:

+ +
log.info(<object> fields, <string> msg, ...)
+
+ +

Here we pass in the restify Request object, req. The "req" serializer we registered above will come into play here, but bear with me.

+ +

Remember that we already had this debug log statement in our endpoint handler:

+ +
req.log.debug('caller is "%s"', caller);            // (2)
+
+ +

Second, use the restify server after event to log the response:

+ +
server.on('after', function (req, res, route) {
+  req.log.info({res: res}, "finished");             // (3)
+});
+
+ +

### Log Output

+ +

Now let's see what log output we get when somebody hits our API's endpoint:

+ +
$ curl -iSs http://0.0.0.0:8080/hello?name=paul
+HTTP/1.1 200 OK
+...
+X-Request-Id: 9496dfdd-4ec7-4b59-aae7-3fed57aed5ba
+...
+
+{"hello":"paul"}
+
+ +

Here is the server log:

+ +
[trentm@banana:~/tm/hello-json-logging]$ node server.js
+... intro "listening at" log message elided ...
+{"name":"helloapi","hostname":"banana.local","pid":40341,"level":30,"req":{"method":"GET","url":"/hello?name=paul","headers":{"user-agent":"curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3","host":"0.0.0.0:8080","accept":"*/*"},"remoteAddress":"127.0.0.1","remotePort":59831},"msg":"start","time":"2012-03-28T17:37:29.506Z","v":0}
+{"name":"helloapi","hostname":"banana.local","pid":40341,"route":"SayHello","req_id":"9496dfdd-4ec7-4b59-aae7-3fed57aed5ba","level":20,"msg":"caller is \"paul\"","time":"2012-03-28T17:37:29.507Z","v":0}
+{"name":"helloapi","hostname":"banana.local","pid":40341,"route":"SayHello","req_id":"9496dfdd-4ec7-4b59-aae7-3fed57aed5ba","level":30,"res":{"statusCode":200,"headers":{"access-control-allow-origin":"*","access-control-allow-headers":"Accept, Accept-Version, Content-Length, Content-MD5, Content-Type, Date, X-Api-Version","access-control-expose-headers":"X-Api-Version, X-Request-Id, X-Response-Time","server":"Hello API","x-request-id":"9496dfdd-4ec7-4b59-aae7-3fed57aed5ba","access-control-allow-methods":"GET","connection":"close","content-length":16,"content-md5":"Xmn3QcFXaIaKw9RPUARGBA==","content-type":"application/json","date":"Wed, 28 Mar 2012 17:37:29 GMT","x-response-time":3}},"msg":"finished","time":"2012-03-28T17:37:29.510Z","v":0}
+
+ +

Let's look at each in turn to see what is interesting -- pretty-printed with `node server.js | ./node_modules/.bin/bunyan -j`:

+ +
{                                                   // (1)
+  "name": "helloapi",
+  "hostname": "banana.local",
+  "pid": 40442,
+  "level": 30,
+  "req": {
+    "method": "GET",
+    "url": "/hello?name=paul",
+    "headers": {
+      "user-agent": "curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3",
+      "host": "0.0.0.0:8080",
+      "accept": "*/*"
+    },
+    "remoteAddress": "127.0.0.1",
+    "remotePort": 59834
+  },
+  "msg": "start",
+  "time": "2012-03-28T17:39:44.880Z",
+  "v": 0
+}
+
+ +

Here we logged the incoming request with request.log.info({req: request}, 'start'). The use of the "req" field triggers the "req" serializer registered at Logger creation.

+ +

Next the req.log.debug in our handler:

+ +
{                                                   // (2)
+  "name": "helloapi",
+  "hostname": "banana.local",
+  "pid": 40442,
+  "route": "SayHello",
+  "req_id": "9496dfdd-4ec7-4b59-aae7-3fed57aed5ba",
+  "level": 20,
+  "msg": "caller is \"paul\"",
+  "time": "2012-03-28T17:39:44.883Z",
+  "v": 0
+}
+
+ +

and the log of response in the "after" event:

+ +
{                                                   // (3)
+  "name": "helloapi",
+  "hostname": "banana.local",
+  "pid": 40442,
+  "route": "SayHello",
+  "req_id": "9496dfdd-4ec7-4b59-aae7-3fed57aed5ba",
+  "level": 30,
+  "res": {
+    "statusCode": 200,
+    "headers": {
+      "access-control-allow-origin": "*",
+      "access-control-allow-headers": "Accept, Accept-Version, Content-Length, Content-MD5, Content-Type, Date, X-Api-Version",
+      "access-control-expose-headers": "X-Api-Version, X-Request-Id, X-Response-Time",
+      "server": "Hello API",
+      "x-request-id": "9496dfdd-4ec7-4b59-aae7-3fed57aed5ba",
+      "access-control-allow-methods": "GET",
+      "connection": "close",
+      "content-length": 16,
+      "content-md5": "Xmn3QcFXaIaKw9RPUARGBA==",
+      "content-type": "application/json",
+      "date": "Wed, 28 Mar 2012 17:39:44 GMT",
+      "x-response-time": 5
+    }
+  },
+  "msg": "finished",
+  "time": "2012-03-28T17:39:44.886Z",
+  "v": 0
+}
+
+ +

Two useful details of note here:

+ +
    +
1. The last two log messages include a "req_id" field (added to the req.log logger by restify). Note that this is the same UUID as the "X-Request-Id" header in the curl response. This means that if you use req.log for logging in your API handlers you will get an easy way to collate all logging for particular requests.

   If yours is an SOA system with many services, a best practice is to carry that X-Request-Id/req_id through your system to make it easy to collate the handling of a single top-level request.

2. The last two log messages include a "route" field. This tells you to which handler restify routed the request. While possibly useful for debugging, this can be very helpful for log-based monitoring of endpoints on a server.
+ +

Recall that we also set up all logging to go to the "hello.log" file. This was set at the TRACE level. Restify will log more detail of its operation at the trace level. See my "hello.log" for an example. The bunyan tool does a decent job of nicely formatting multiline messages and "req"/"res" keys (with color, not shown in the gist).

+ +

This is logging you can use effectively.

+ +

## Other Tools

+ +

Bunyan is just one of many options for logging in node.js-land. Others (that I know of) supporting JSON logging are winston and logmagic. Paul Querna has an excellent post on using JSON for logging, which shows logmagic usage and also touches on topics like the GELF logging format, log transporting, indexing and searching.

+ +

## Final Thoughts

+ +

Parsing challenges won't ever completely go away, but they can for your logs if you use JSON. Collating log records across logs from multiple nodes is facilitated by a common "time" field. Correlating logging across multiple services is enabled by carrying a common "req_id" (or equivalent) through all such logs.
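
For example, because each record is a single JSON object per line, collating one request's records is a few lines of code. Here is a throwaway sketch of my own, assuming a `hello.log` file like the one above:

```javascript
var fs = require('fs');

// req_id taken from the curl response headers earlier in this post.
var wanted = '9496dfdd-4ec7-4b59-aae7-3fed57aed5ba';

fs.readFileSync('hello.log', 'utf8')
  .split('\n')
  .filter(Boolean)                                    // drop empty lines
  .map(function (line) { return JSON.parse(line); })
  .filter(function (rec) { return rec.req_id === wanted; })
  .forEach(function (rec) {
    console.log(rec.time, rec.msg);
  });
```
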

+ +

Separate log files for a single service are an anti-pattern. The typical Apache example of separate access and error logs is legacy, not an example to follow. A JSON log provides the structure necessary for tooling to easily filter for log records of a particular type.

+ +

JSON logs bring possibilities. Feeding to tools like Splunk becomes easy. Ad hoc fields allow for a lightly spec'd comm channel from apps to other services: records with a "metric" could feed to statsd, records with a "loggly: true" could feed to loggly.com.
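
A hedged sketch of what that could look like -- the field names here are hypothetical, not an established bunyan or statsd convention:

```javascript
var Logger = require('bunyan');
var log = new Logger({ name: 'helloapi' });

// Ad hoc structured fields: a log collector could route records
// carrying a "metric" field to statsd, or records flagged
// "loggly: true" to loggly.com.
log.info({ metric: 'hello.requests', value: 1 }, 'bump counter');
log.warn({ loggly: true, code: 'ETIMEDOUT' }, 'upstream timed out');
```
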

+ +

Here I've described a very simple example of restify and bunyan usage for node.js-based API services with easy JSON logging. Restify provides a powerful framework for robust API services. Bunyan provides a light API for nice JSON logging and the beginnings of tooling to help consume Bunyan JSON logs.

+ +

Update (29-Mar-2012): Fix styles somewhat for RSS readers.

diff --git a/locale/uk/blog/nodejs-road-ahead.md b/locale/uk/blog/nodejs-road-ahead.md new file mode 100755 index 0000000000000..ca856d50e63f1 --- /dev/null +++ b/locale/uk/blog/nodejs-road-ahead.md @@ -0,0 +1,54 @@ +--- +title: Node.js and the Road Ahead +date: 2014-01-16T23:00:00.000Z +author: Timothy J Fontaine +slug: nodejs-road-ahead +layout: blog-post.hbs +--- +As the new project lead for Node.js I am excited for our future, and want to +give you an update on where we are. + +One of Node's major goals is to provide a small core, one that provides the +right amount of surface area for consumers to achieve and innovate, without +Node itself getting in the way. That ethos is alive and well, we're going to +continue to provide a small, simple, and stable set of APIs that facilitate the +amazing uses the community finds for Node. We're going to keep providing +backward compatible APIs, so code you write today will continue to work on +future versions of Node. And of course, performance tuning and bug fixing will +always be an important part of every release cycle. + +The release of Node v0.12 is imminent, and a lot of significant work has gone +into this release. There's streams3, a better keep alive agent for http, the vm +module is now based on contextify, and significant performance work done in +core features (Buffers, TLS, streams). We have a few APIs that are still being +ironed out before we can feature freeze and branch (execSync, AsyncListeners, +user definable instrumentation). We are definitely in the home stretch. + +But Node is far from done. In the short term there will be new releases of v8 +that we'll need to track, as well as integrating the new ABI stable C module +interface. There are interesting language features that we can use to extend +Node APIs (extend not replace). We need to write more tooling, we need to +expose more interfaces to further enable innovation. We can explore +functionality to embed Node in your existing project. + +The list can go on and on. Yet, Node is larger than the software itself. Node +is also the community, the businesses, the ecosystems, and their related +events. With that in mind there are things we can work to improve. + +The core team will be improving its procedures such that we can quickly and +efficiently communicate with you. We want to provide high quality and timely +responses to issues, describe our development roadmap, as well as provide our +progress during each release cycle. We know you're interested in our plans for +Node, and it's important we're able to provide that information. Communication +should be bidirectional: we want to continue to receive feedback about how +you're using Node, and what your pain points are. + +After the release of v0.12 we will facilitate the community to contribute and +curate content for nodejs.org. Allowing the community to continue to invest in +Node will ensure nodejs.org is an excellent starting point and the primary +resource for tutorials, documentation, and materials regarding Node. We have an +awesome and engaged community, and they're paramount to our success. + +I'm excited for Node's future, to see new and interesting use cases, and to +continue to help businesses scale and innovate with Node. We have a lot we can +accomplish together, and I look forward to seeing those results. 
diff --git a/locale/uk/blog/npm/2013-outage-postmortem.md b/locale/uk/blog/npm/2013-outage-postmortem.md new file mode 100755 index 0000000000000..afe9dda9ea85c --- /dev/null +++ b/locale/uk/blog/npm/2013-outage-postmortem.md @@ -0,0 +1,85 @@ +--- +date: 2013-11-26T15:14:59.000Z +author: Charlie Robbins +title: Keeping The npm Registry Awesome +slug: npm-post-mortem +category: npm +layout: blog-post.hbs +--- + +We know the availability and overall health of The npm Registry is paramount to everyone using Node.js as well as the larger JavaScript community and those of your using it for [some][browserify] [awesome][dotc] [projects][npm-rubygems] [and ideas][npm-python]. Between November 4th and November 15th 2013 The npm Registry had several hours of downtime over three distinct time periods: + +1. November 4th -- 16:30 to 15:00 UTC +2. November 13th -- 15:00 to 19:30 UTC +3. November 15th -- 15:30 to 18:00 UTC + +The root cause of these downtime was insufficient resources: both hardware and human. This is a full post-mortem where we will be look at how npmjs.org works, what went wrong, how we changed the previous architecture of The npm Registry to fix it, as well next steps we are taking to prevent this from happening again. + +All of the next steps require additional expenditure from Nodejitsu: both servers and labor. This is why along with this post-mortem we are announcing our [crowdfunding campaign: scalenpm.org](https://scalenpm.org)! Our goal is to raise enough funds so that Nodejitsu can continue to run The npm Registry as a free service for _you, the community._ + +Please take a minute now to donate at [https://scalenpm.org](https://scalenpm.org)! + +## How does npmjs.org work? + +There are two distinct components that make up npmjs.org operated by different people: + +* **http://registry.npmjs.org**: The main CouchApp (Github: [isaacs/npmjs.org](https://github.com/isaacs/npmjs.org)) that stores both package tarballs and metadata. It is operated by Nodejitsu since we [acquired IrisCouch in May](https://www.nodejitsu.com/company/press/2013/05/22/iriscouch/). The primary system administrator is [Jason Smith](https://github.com/jhs), the current CTO at Nodejitsu, cofounder of IrisCouch, and the System Administrator of registry.npmjs.org since 2011. +* **https://npmjs.com**: The npmjs website that you interact with using a web browser. It is a Node.js program (Github: [isaacs/npm-www](https://github.com/isaacs/npm-www)) maintained and operated by Isaac and running on a Joyent Public Cloud SmartMachine. + +Here is a high-level summary of the _old architecture:_ + +old npm architecture +
+ _Diagram 1. Old npm architecture_ +
+ +## What went wrong and how was it fixed? + +As illustrated above, before November 13th, 2013, npm operated as a single CouchDB server with regular daily backups. We briefly ran a multi-master CouchDB setup after downtime back in August, but after reports that `npm login` no longer worked correctly we rolled back to a single CouchDB server. On both November 13th and November 15th CouchDB became unresponsive on requests to the `/registry` database while requests to all other databases (e.g. `/public_users`) remained responsive. Although the root cause of the CouchDB failures have yet to be determined given that only requests to `/registry` were slow and/or timed out we suspect it is related to the massive number of attachments stored in the registry. + +The incident on November 4th was ultimately resolved by a reboot and resize of the host machine, but when the same symptoms reoccured less than 10 days later additional steps were taken: + +1. The [registry was moved to another machine][ops-new-machine] of equal resources to exclude the possibility of a hardware issue. +2. The [registry database itself][ops-compaction] was [compacted][compaction]. + +When neither of these yielded a solution Jason Smith and I decided to move to a multi-master architecture with continuous replication illustrated below: + +current npm architecture +
+ _Diagram 2. Current npm architecture -- Red-lines denote continuous replication_ +
+ +This _should_ have been the end of our story but unfortunately our supervision logic did not function properly to restart the secondary master on the morning of November 15th. During this time we [moved briefly][ops-single-server] back to a single master architecture. Since then the secondary master has been closely monitored by the entire Nodejitsu operations team to ensure it's continued stability. + +## What is being done to prevent future incidents? + +The public npm registry simply cannot go down. **Ever.** We gained a lot of operational knowledge about The npm Registry and about CouchDB as a result of these outages. This new knowledge has made clear several steps that we need to take to prevent future downtime: + +1. **Always be in multi-master**: The multi-master CouchDB architecture we have setup will scale to more than just two CouchDB servers. _As npm grows we'll be able to add additional capacity!_ +2. **Decouple www.npmjs.org and registry.npmjs.org**: Right now www.npmjs.org still depends directly on registry.npmjs.org. We are planning to add an additional replica to the current npm architecture so that Isaac can more easily service requests to www.npmjs.org. That means it won't go down if the registry goes down. +3. **Always have a spare replica**: We need have a hot spare replica running continuous replication from either to swap out when necessary. This is also important as we need to regularly run compaction on each master since the registry is growing ~10GB per week on disk. +4. **Move attachments out of CouchDB**: Work has begun to move the package tarballs out of CouchDB and into [Joyent's Manta service](http://www.joyent.com/products/manta). Additionally, [MaxCDN](http://www.maxcdn.com/) has generously offered to provide CDN services for npm, once the tarballs are moved out of the registry database. This will help improve delivery speed, while dramatically reducing the file system I/O load on the CouchDB servers. Work is progressing slowly, because at each stage in the plan, we are making sure that current replication users are minimally impacted. + +When these new infrastructure components are in-place The npm Registry will look like this: + +planned npm architecture +
_Diagram 3. Planned npm architecture -- Red lines denote continuous replication_
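For readers curious what the operations described above look like in practice, here is a minimal sketch of the two CouchDB calls involved: triggering compaction of the registry database and starting the kind of continuous replication shown as red lines in the diagrams. The hostnames and port are placeholders, not npm's actual servers.

```
# Minimal sketch using stock CouchDB HTTP APIs; hostnames are placeholders.

# Compact the registry database on one master:
curl -X POST http://couch-a.example.com:5984/registry/_compact \
     -H 'Content-Type: application/json'

# Start a continuous replication of /registry from one master to the other
# (the "red lines" in the diagrams above):
curl -X POST http://couch-b.example.com:5984/_replicate \
     -H 'Content-Type: application/json' \
     -d '{"source": "http://couch-a.example.com:5984/registry",
          "target": "registry",
          "continuous": true}'
```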
+ +## You are npm! And we need your help! + +The npm Registry has had a 10x year. In November 2012 there were 13.5 million downloads. In October 2013 there were **114.6 million package downloads.** We're honored to have been a part of sustaining this growth for the community and we want to see it continue to grow to a billion package downloads a month and beyond. + +_**But we need your help!**_ All of these necessary improvements require more servers, more time from Nodejitsu staff and an overall increase to what we spend maintaining the public npm registry as a free service for the Node.js community. + +Please take a minute now to donate at [https://scalenpm.org](https://scalenpm.org)! + +[browserify]: http://browserify.org/ +[dotc]: https://github.com/substack/dotc +[npm-rubygems]: http://andrew.ghost.io/emulating-node-js-modules-in-ruby/ +[npm-python]: https://twitter.com/__lucas/status/391688082573258753 +[ops-new-machine]: https://twitter.com/npmjs/status/400692071377276928 +[ops-compaction]: https://twitter.com/npmjs/status/400705715846643712 +[compaction]: http://wiki.apache.org/couchdb/Compaction +[ops-single-server]: https://twitter.com/npmjs/status/401384681507016704 diff --git a/locale/uk/blog/npm/index.md b/locale/uk/blog/npm/index.md new file mode 100755 index 0000000000000..a26081ad303e5 --- /dev/null +++ b/locale/uk/blog/npm/index.md @@ -0,0 +1,6 @@ +--- +title: NPM +layout: category-index.hbs +listing: true +robots: noindex, follow +--- diff --git a/locale/uk/blog/npm/managing-node-js-dependencies-with-shrinkwrap.md b/locale/uk/blog/npm/managing-node-js-dependencies-with-shrinkwrap.md new file mode 100755 index 0000000000000..939c48bf3ab40 --- /dev/null +++ b/locale/uk/blog/npm/managing-node-js-dependencies-with-shrinkwrap.md @@ -0,0 +1,170 @@ +--- +title: Managing Node.js Dependencies with Shrinkwrap +author: Dave Pacheco +date: 2012-02-27T18:51:59.000Z +status: publish +category: npm +slug: managing-node-js-dependencies-with-shrinkwrap +layout: blog-post.hbs +--- + +


+Photo by Luc Viatour (flickr)

+ +

Managing dependencies is a fundamental problem in building complex software. The terrific success of github and npm has made code reuse especially easy in the Node world, where packages don't exist in isolation but rather as nodes in a large graph. The software is constantly changing (releasing new versions), and each package has its own constraints about what other packages it requires to run (dependencies). npm keeps track of these constraints, and authors express what kind of changes are compatible using semantic versioning, allowing authors to specify that their package will work with even future versions of its dependencies as long as the semantic versions are assigned properly.

+

This does mean that when you "npm install" a package with dependencies, there's no guarantee that you'll get the same set of code now that you would have gotten an hour ago, or that you would get if you were to run it again an hour later. You may get a bunch of bug fixes now that weren't available an hour ago. This is great during development, where you want to keep up with changes upstream. It's not necessarily what you want for deployment, though, where you want to validate whatever bits you're actually shipping. + +

+

Put differently, it's understood that all software changes incur some risk, and it's critical to be able to manage this risk on your own terms. Taking that risk in development is good because by definition that's when you're incorporating and testing software changes. On the other hand, if you're shipping production software, you probably don't want to take this risk when cutting a release candidate (i.e. build time) or when you actually ship (i.e. deploy time) because you want to validate whatever you ship. + +

+

You can address a simple case of this problem by only depending on specific versions of packages, allowing no semver flexibility at all, but this falls apart when you depend on packages that don't also adopt the same principle. Many of us at Joyent started wondering: can we generalize this approach? + +

+

## Shrinkwrapping packages

+

That brings us to npm shrinkwrap[1]: + +

+
NAME
+       npm-shrinkwrap -- Lock down dependency versions
+
+SYNOPSIS
+       npm shrinkwrap
+
+DESCRIPTION
+       This  command  locks down the versions of a package's dependencies so
+       that you can control exactly which versions of each  dependency  will
+       be used when your package is installed.
+

Let's consider package A: + +

+
{
+    "name": "A",
+    "version": "0.1.0",
+    "dependencies": {
+        "B": "<0.1.0"
+    }
+}
+

package B: + +

+
{
+    "name": "B",
+    "version": "0.0.1",
+    "dependencies": {
+        "C": "<0.1.0"
+    }
+}
+

and package C: + +

+
{
+    "name": "C,
+    "version": "0.0.1"
+}
+

If these are the only versions of A, B, and C available in the registry, then a normal "npm install A" will install: + +

+
A@0.1.0
+└─┬ B@0.0.1
+  └── C@0.0.1
+

Then if B@0.0.2 is published, a fresh "npm install A" will install:

+
A@0.1.0
+└─┬ B@0.0.2
+  └── C@0.0.1
+

assuming the new version did not modify B's dependencies. Of course, the new version of B could include a new version of C and any number of new dependencies. As we said before, if A's author doesn't want that, she could specify a dependency on B@0.0.1. But if A's author and B's author are not the same person, there's no way for A's author to say that she does not want to pull in newly published versions of C when B hasn't changed at all. + +

+

In this case, A's author can use + +

+
# npm shrinkwrap
+

This generates npm-shrinkwrap.json, which will look something like this: + +

+
{
+    "name": "A",
+    "dependencies": {
+        "B": {
+            "version": "0.0.1",
+            "dependencies": {
+                "C": {  "version": "0.1.0" }
+            }
+        }
+    }
+}
+

The shrinkwrap command has locked down the dependencies based on what's currently installed in node_modules. When "npm install" installs a package with an npm-shrinkwrap.json file in the package root, the shrinkwrap file (rather than package.json files) completely drives the installation of that package and all of its dependencies (recursively). So now the author publishes A@0.1.0, and subsequent installs of this package will use B@0.0.1 and C@0.1.0, regardless of the dependencies and versions listed in A's, B's, and C's package.json files. If the authors of B and C publish new versions, they won't be used to install A because the shrinkwrap refers to older versions. Even if you generate a new shrinkwrap, it will still reference the older versions, since "npm shrinkwrap" uses what's installed locally rather than what's available in the registry.

+

## Using shrinkwrapped packages

+

Using a shrinkwrapped package is no different than using any other package: you can "npm install" it by hand, or add a dependency to your package.json file and "npm install" it. + +

+

## Building shrinkwrapped packages

+

To shrinkwrap an existing package: + +

+
  1. Run "npm install" in the package root to install the current versions of all dependencies.
  2. Validate that the package works as expected with these versions.
  3. Run "npm shrinkwrap", add npm-shrinkwrap.json to git, and publish your package. (See the command-line sketch after this list.)
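In practice, the three steps above boil down to a short command-line session. This is only a sketch: `npm test` stands in for however you validate your package.

```
npm install                  # 1. install the current versions of all dependencies
npm test                     # 2. validate that the package works with these versions
npm shrinkwrap               # 3. generate npm-shrinkwrap.json from node_modules
git add npm-shrinkwrap.json
git commit -m "Lock down dependency versions"
npm publish
```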

To add or update a dependency in a shrinkwrapped package: + +

+
  1. Run "npm install" in the package root to install the current versions of all dependencies.
  2. Add or update dependencies. "npm install" each new or updated package individually and then update package.json.
  3. Validate that the package works as expected with the new dependencies.
  4. Run "npm shrinkwrap", commit the new npm-shrinkwrap.json, and publish your package. (See the sketch after this list.)
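A similar sketch for the add/update case; the package name `qs` is only an example, and as noted above you still update package.json by hand.

```
npm install                  # reinstall the currently locked versions
npm install qs               # install the new dependency, then add it to package.json
npm test                     # validate against the new dependency
npm shrinkwrap               # regenerate npm-shrinkwrap.json
git add package.json npm-shrinkwrap.json
git commit -m "Add qs dependency"
npm publish
```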

You can still use npm outdated(1) to view which dependencies have newer versions available. + +

+

For more details, check out the full docs on npm shrinkwrap, from which much of the above is taken. + +

+

## Why not just check node_modules into git?

+

One previously proposed solution is to "npm install" your dependencies during development and commit the results into source control. Then you deploy your app from a specific git SHA knowing you've got exactly the same bits that you tested in development. This does address the problem, but it has its own issues: for one, binaries are tricky because you need to "npm install" them to get their sources, but this builds the [system-dependent] binary too. You can avoid checking in the binaries and use "npm rebuild" at build time, but we've had a lot of difficulty trying to do this.[2] At best, this is second-class treatment for binary modules, which are critical for many important types of Node applications.[3] + +

+

Besides the issues with binary modules, this approach just felt wrong to many of us. There's a reason we don't check binaries into source control, and it's not just because they're platform-dependent. (After all, we could build and check in binaries for all supported platforms and operating systems.) It's because that approach is error-prone and redundant: error-prone because it introduces a new human failure mode where someone checks in a source change but doesn't regenerate all the binaries, and redundant because the binaries can always be built from the sources alone. An important principle of software version control is that you don't check in files derived directly from other files by a simple transformation.[4] Instead, you check in the original sources and automate the transformations via the build process. + +

+

Dependencies are just like binaries in this regard: they're files derived from a simple transformation of something else that is (or could easily be) already available: the name and version of the dependency. Checking them in has all the same problems as checking in binaries: people could update package.json without updating the checked-in module (or vice versa). Besides that, adding new dependencies has to be done by hand, introducing more opportunities for error (checking in the wrong files, not checking in certain files, inadvertently changing files, and so on). Our feeling was: why check in this whole dependency tree (and create a mess for binary add-ons) when we could just check in the package name and version and have the build process do the rest? + +

+

Finally, the approach of checking in node_modules doesn't really scale for us. We've got at least a dozen repos that will use restify, and it doesn't make sense to check that in everywhere when we could instead just specify which version each one is using. There's another principle at work here, which is separation of concerns: each repo specifies what it needs, while the build process figures out where to get it. + +

+

## What if an author republishes an existing version of a package?

+

We're not suggesting deploying a shrinkwrapped package directly and running "npm install" to install from shrinkwrap in production. We already have a build process to deal with binary modules and other automateable tasks. That's where we do the "npm install". We tar up the result and distribute the tarball. Since we test each build before shipping, we won't deploy something we didn't test. + +

+

It's still possible to pick up newly published versions of existing packages at build time. We assume force publish is not that common in the first place, let alone force publish that breaks compatibility. If you're worried about this, you can use git SHAs in the shrinkwrap or even consider maintaining a mirror of the part of the npm registry that you use and require human confirmation before mirroring unpublishes. + +

+

## Final thoughts

+

Of course, the details of each use case matter a lot, and the world doesn't have to pick just one solution. If you like checking in node_modules, you should keep doing that. We've chosen the shrinkwrap route because that works better for us. + +

+

It’s not exactly news that Joyent is heavy on Node. Node is the heart of our SmartDataCenter (SDC) product, whose public-facing web portal, public API, Cloud Analytics, provisioning, billing, heartbeating, and other services are all implemented in Node. That's why it's so important to us to have robust components (like logging and REST) and tools for understanding production failures postmortem, profiling Node apps in production, and now managing Node dependencies. Again, we're interested to hear feedback from others using these tools.

+
+Dave Pacheco blogs at dtrace.org. + +

[1] Much of this section is taken directly from the "npm shrinkwrap" documentation. + +

+

[2] We've had a lot of trouble with checking in node_modules with binary dependencies. The first problem is figuring out exactly which files not to check in (.o, .node, .dynlib, .so, *.a, ...). When Mark went to apply this to one of our internal services, the "npm rebuild" step blew away half of the dependency tree because it ran "make clean", which in dependency ldapjs brings the repo to a clean slate by blowing away its dependencies. Later, a new (but highly experienced) engineer on our team was tasked with fixing a bug in our Node-based DHCP server. To fix the bug, we went with a new dependency. He tried checking in node_modules, which added 190,000 lines of code (to this repo that was previously a few hundred LOC). And despite doing everything he could think of to do this correctly and test it properly, the change broke the build because of the binary modules. So having tried this approach a few times now, it appears quite difficult to get right, and as I pointed out above, the lack of actual documentation and real world examples suggests others either aren't using binary modules (which we know isn't true) or haven't had much better luck with this approach. + +

+

[3] Like a good Node-based distributed system, our architecture uses lots of small HTTP servers. Each of these serves a REST API using restify. restify uses the binary module node-dtrace-provider, which gives each of our services deep DTrace-based observability for free. So literally almost all of our components are or will soon be depending on a binary add-on. Additionally, the foundation of Cloud Analytics are a pair of binary modules that extract data from DTrace and kstat. So this isn't a corner case for us, and we don't believe we're exceptional in this regard. The popular hiredis package for interfacing with redis from Node is also a binary module. + +

+

[4] Note that I said this is an important principle for software version control, not using git in general. People use git for lots of things where checking in binaries and other derived files is probably fine. Also, I'm not interested in proselytizing; if you want to do this for software version control too, go ahead. But don't do it out of ignorance of existing successful software engineering practices.

diff --git a/locale/uk/blog/npm/npm-1-0-global-vs-local-installation.md b/locale/uk/blog/npm/npm-1-0-global-vs-local-installation.md new file mode 100755 index 0000000000000..380eb5f486010 --- /dev/null +++ b/locale/uk/blog/npm/npm-1-0-global-vs-local-installation.md @@ -0,0 +1,67 @@ +--- +title: "npm 1.0: Global vs Local installation" +author: Isaac Schlueter +date: 2011-03-24T06:07:13.000Z +status: publish +category: npm +slug: npm-1-0-global-vs-local-installation +layout: blog-post.hbs +--- + +

npm 1.0 is in release candidate mode. Go get it!

+ +

More than anything else, the driving force behind the npm 1.0 rearchitecture was the desire to simplify what a package installation directory structure looks like.

+ +

In npm 0.x, there was a command called bundle that a lot of people liked. bundle let you install your dependencies locally in your project, but even still, it was basically a hack that never really worked very reliably.

+ +

Also, there was that activation/deactivation thing. That’s confusing.

+ +

## Two paths

+ +

In npm 1.0, there are two ways to install things:

+ +
  1. globally -- This drops modules in {prefix}/lib/node_modules, and puts executable files in {prefix}/bin, where {prefix} is usually something like /usr/local. It also installs man pages in {prefix}/share/man, if they’re supplied.
  2. locally -- This installs your package in the current working directory. Node modules go in ./node_modules, executables go in ./node_modules/.bin/, and man pages aren’t installed at all. (See the sketch of both destinations after this list.)
+ +
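As a quick sketch of where things land in each mode (assuming the usual /usr/local prefix):

```
npm install -g coffee-script   # global: {prefix}/lib/node_modules/coffee-script,
                               #         executable linked into {prefix}/bin
npm install express            # local:  ./node_modules/express,
                               #         executables in ./node_modules/.bin/
```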

## Which to choose

+ +

Whether to install a package globally or locally depends on the global config, which is aliased to the -g command line switch.

+ +

Just like how global variables are kind of gross, but also necessary in some cases, global packages are important, but best avoided if not needed.

+ +

In general, the rule of thumb is:

+ +
  1. If you’re installing something that you want to use in your program, using require('whatever'), then install it locally, at the root of your project.
  2. If you’re installing something that you want to use in your shell, on the command line or something, install it globally, so that its binaries end up in your PATH environment variable.
+ +

## When you can't choose

+ +

Of course, there are some cases where you want to do both. Coffee-script and Express both are good examples of apps that have a command line interface, as well as a library. In those cases, you can do one of the following:

+ +
  1. Install it in both places. Seriously, are you that short on disk space? It’s fine, really. They’re tiny JavaScript programs.
  2. Install it globally, and then npm link coffee-script or npm link express (if you’re on a platform that supports symbolic links.) Then you only need to update the global copy to update all the symlinks as well.
+ +

The first option is the best in my opinion. Simple, clear, explicit. The second is really handy if you are going to re-use the same library in a bunch of different projects. (More on npm link in a future installment.)

+ +

You can probably think of other ways to do it by messing with environment variables. But I don’t recommend those ways. Go with the grain.

+ +

## Slight exception: It’s not always the cwd.

+ +

Let’s say you do something like this:

+ +
cd ~/projects/foo     # go into my project
+npm install express   # ./node_modules/express
+cd lib/utils          # move around in there
+vim some-thing.js     # edit some stuff, work work work
+npm install redis     # ./lib/utils/node_modules/redis!? ew.
+ +

In this case, npm will install redis into ~/projects/foo/node_modules/redis. Sort of like how git will work anywhere within a git repository, npm will work anywhere within a package, defined by having a node_modules folder.

+ +

## Test runners and stuff

+ +

If your package's scripts.test command uses a command-line program installed by one of your dependencies, not to worry. npm makes ./node_modules/.bin the first entry in the PATH environment variable when running any lifecycle scripts, so this will work fine, even if your program is not globally installed: + +

{ "name" : "my-program"
+, "version" : "1.2.3"
+, "dependencies": { "express": "*", "coffee-script": "*" }
+, "devDependencies": { "vows": "*" }
+, "scripts":
+  { "test": "vows test/*.js"
+  , "preinstall": "cake build" } }
diff --git a/locale/uk/blog/npm/npm-1-0-link.md b/locale/uk/blog/npm/npm-1-0-link.md new file mode 100755 index 0000000000000..24a16427b21ce --- /dev/null +++ b/locale/uk/blog/npm/npm-1-0-link.md @@ -0,0 +1,117 @@ +--- +title: "npm 1.0: link" +author: Isaac Schlueter +date: 2011-04-07T00:40:33.000Z +status: publish +category: npm +slug: npm-1-0-link +layout: blog-post.hbs +--- + +

npm 1.0 is in release candidate mode. Go get it!

+ +

In npm 0.x, there was a command called link. With it, you could “link-install” a package so that changes would be reflected in real-time. This is especially handy when you’re actually building something. You could make a few changes, run the command again, and voila, your new code would be run without having to re-install every time.

+ +

Of course, compiled modules still have to be rebuilt. That’s not ideal, but it’s a problem that will take more powerful magic to solve.

+ +

In npm 0.x, this was a pretty awful kludge. Back then, every package existed in some folder like:

+ +
prefix/lib/node/.npm/my-package/1.3.6/package
+
+ +

and the package’s version and name could be inferred from the path. Then, symbolic links were set up that looked like:

+ +
prefix/lib/node/my-package@1.3.6 -> ./.npm/my-package/1.3.6/package
+
+ +

It was easy enough to point that symlink to a different location. However, since the package.json file could change, that meant that the connection between the version and the folder was not reliable.

+ +

At first, this was just sort of something that we dealt with by saying, “Relink if you change the version.” However, as more and more edge cases arose, eventually the solution was to give link packages this fakey version of “9999.0.0-LINK-hash” so that npm knew it was an impostor. Sometimes the package was treated as if it had the 9999.0.0 version, and other times it was treated as if it had the version specified in the package.json.

+ +

## A better way

+ +

For npm 1.0, we backed up and looked at what the actual use cases were. Most of the time when you link something you want one of the following:

+ +
  1. globally install this package I’m working on so that I can run the command it creates and test its stuff as I work on it.
  2. locally install my thing into some other thing that depends on it, so that the other thing can require() it.

And, in both cases, changes should be immediately apparent and not require any re-linking.

+ +

Also, there’s a third use case that I didn’t really appreciate until I started writing more programs that had more dependencies:

+ +
  1. Globally install something, and use it in development in a bunch of projects, and then update them all at once so that they all use the latest version.

+ +

Really, the second case above is a special-case of this third case.

+ + + +

The first step is to link your local project into the global install space. (See global vs local installation for more on this global/local business.)

+ +

I do this as I’m developing node projects (including npm itself).

+ +
cd ~/dev/js/node-tap  # go into the project dir
+npm link              # create symlinks into {prefix}
+
+ +

Because of how I have my computer set up, with /usr/local as my install prefix, I end up with a symlink from /usr/local/lib/node_modules/tap pointing to ~/dev/js/node-tap, and the executable linked to /usr/local/bin/tap.

+ +

Of course, if you set your paths differently, then you’ll have different results. (That’s why I tend to talk in terms of prefix rather than /usr/local.)

+ + + +

When you want to link the globally-installed package into your local development folder, you run npm link pkg where pkg is the name of the package that you want to install.

+ +

For example, let’s say that I wanted to write some tap tests for my node-glob package. I’d first do the steps above to link tap into the global install space, and then I’d do this:

+ +
cd ~/dev/js/node-glob  # go to the project that uses the thing.
+npm link tap           # link the global thing into my project.
+
+ +

Now when I make changes in ~/dev/js/node-tap, they’ll be immediately reflected in ~/dev/js/node-glob/node_modules/tap.

+ + + +

Let’s say I have 15 sites that all use express. I want the benefits of local development, but I also want to be able to update all my dev folders at once. You can globally install express, and then link it into your local development folder.

+ +
npm install express -g  # install express globally
+cd ~/dev/js/my-blog     # development folder one
+npm link express        # link the global express into ./node_modules
+cd ~/dev/js/photo-site  # other project folder
+npm link express        # link express into here, as well
+
+                        # time passes
+                        # TJ releases some new stuff.
+                        # you want this new stuff.
+
+npm update express -g   # update the global install.
+                        # this also updates my project folders.
+
+ +

## Caveat: Not For Real Servers

+ +

npm link is a development tool. It’s awesome for managing packages on your local development box. But deploying with npm link is basically asking for problems, since it makes it super easy to update things without realizing it.

+ +

## Caveat 2: Sorry, Windows!

+ +

I highly doubt that a native Windows node will ever have comparable symbolic link support to what Unix systems provide. I know that there are junctions and such, and I've heard legends about symbolic links on Windows 7.

+ +

When there is a native windows port of Node, if that native windows port has `fs.symlink` and `fs.readlink` support that is exactly identical to the way that they work on Unix, then this should work fine.

+ +

But I wouldn't hold my breath. Any bugs about this not working on a native Windows system (ie, not Cygwin) will most likely be closed with wontfix.

+ + +

## Aside: Credit where Credit’s Due

+ +

Back before the Great Package Management Wars of Node 0.1, before npm or kiwi or mode or seed.js could do much of anything, and certainly before any of them had more than 2 users, Mikeal Rogers invited me to the Couch.io offices for lunch to talk about this npm registry thingie I’d mentioned wanting to build. (That is, to convince me to use CouchDB for it.)

+ +

Since he was volunteering to build the first version of it, and since couch is pretty much the ideal candidate for this use-case, it was an easy sell.

+ +

While I was there, he said, “Look. You need to be able to link a project directory as if it was installed as a package, and then have it all Just Work. Can you do that?”

+ +

I was like, “Well, I don’t know… I mean, there’s these edge cases, and it doesn’t really fit with the existing folder structure very well…”

+ +

“Dude. Either you do it, or I’m going to have to do it, and then there’ll be another package manager in node, instead of writing a registry for npm, and it won’t be as good anyway. Don’t be python.”

+ +

The rest is history.

diff --git a/locale/uk/blog/npm/npm-1-0-released.md b/locale/uk/blog/npm/npm-1-0-released.md new file mode 100755 index 0000000000000..abc105708d448 --- /dev/null +++ b/locale/uk/blog/npm/npm-1-0-released.md @@ -0,0 +1,39 @@ +--- +title: "npm 1.0: Released" +author: Isaac Schlueter +date: 2011-05-01T15:09:45.000Z +status: publish +category: npm +slug: npm-1-0-released +layout: blog-post.hbs +--- + +

npm 1.0 has been released. Here are the highlights:

+ + + +

The focus is on npm being a development tool, rather than an apt-wannabe.

+ +

## Installing it

+ +

To get the new version, run this command:

+ +
curl https://npmjs.com/install.sh | sh 
+ +

This will prompt to ask you if it’s ok to remove all the old 0.x cruft. If you want to not be asked, then do this:

+ +
curl https://npmjs.com/install.sh | clean=yes sh 
+ +

Or, if you want to not do the cleanup, and leave the old stuff behind, then do this:

+ +
curl https://npmjs.com/install.sh | clean=no sh 
+ +

A lot of people in the node community were brave testers and helped make this release a lot better (and swifter) than it would have otherwise been. Thanks :)

+ +

## Code Freeze

+ +

npm will not have any major feature enhancements or architectural changes for at least 6 months. There are interesting developments planned that leverage npm in some ways, but it’s time to let the client itself settle. Also, I want to focus attention on some other problems for a little while.

+ +

Of course, bug reports are always welcome.

+ +

See you at NodeConf!

diff --git a/locale/uk/blog/npm/npm-1-0-the-new-ls.md b/locale/uk/blog/npm/npm-1-0-the-new-ls.md new file mode 100755 index 0000000000000..b2b72067e91fa --- /dev/null +++ b/locale/uk/blog/npm/npm-1-0-the-new-ls.md @@ -0,0 +1,147 @@ +--- +title: "npm 1.0: The New 'ls'" +author: Isaac Schlueter +date: 2011-03-18T06:22:17.000Z +status: publish +category: npm +slug: npm-1-0-the-new-ls +layout: blog-post.hbs +--- + +

This is the first in a series of hopefully more than 1 posts, each detailing some aspect of npm 1.0.

+ +

In npm 0.x, the ls command was a combination of searching the registry and reporting on what you have installed.

+ +

As the registry has grown in size, this has gotten unwieldy. Also, since npm 1.0 manages dependencies differently, nesting them in the node_modules folder and installing locally by default, there are different things that you want to view.

+ +

The functionality of the ls command was split into two different parts. search is now the way to find things on the registry (and it only reports one line per package, instead of one line per version), and ls shows a tree view of the packages that are installed locally.

+ +

Here’s an example of the output:

+ +
$ npm ls
+npm@1.0.0 /Users/isaacs/dev-src/js/npm
+├── semver@1.0.1 
+├─┬ ronn@0.3.5 
+│ └── opts@1.2.1 
+└─┬ express@2.0.0rc3 extraneous 
+  ├─┬ connect@1.1.0 
+  │ ├── qs@0.0.7 
+  │ └── mime@1.2.1 
+  ├── mime@1.2.1 
+  └── qs@0.0.7
+
+ +

This is after I’ve done npm install semver ronn express in the npm source directory. Since express isn’t actually a dependency of npm, it shows up with that “extraneous” marker.

+ +

Let’s see what happens when we create a broken situation:

+ +
$ rm -rf ./node_modules/express/node_modules/connect
+$ npm ls
+npm@1.0.0 /Users/isaacs/dev-src/js/npm
+├── semver@1.0.1 
+├─┬ ronn@0.3.5 
+│ └── opts@1.2.1 
+└─┬ express@2.0.0rc3 extraneous 
+  ├── UNMET DEPENDENCY connect >= 1.1.0 < 2.0.0
+  ├── mime@1.2.1 
+  └── qs@0.0.7
+
+ +

Tree views are great for human readability, but sometimes you want to pipe that stuff to another program. For that output, I took the same data structure, but instead of building up a treeview string for each line, it spits out just the folders like this:

+ +
$ npm ls -p
+/Users/isaacs/dev-src/js/npm
+/Users/isaacs/dev-src/js/npm/node_modules/semver
+/Users/isaacs/dev-src/js/npm/node_modules/ronn
+/Users/isaacs/dev-src/js/npm/node_modules/ronn/node_modules/opts
+/Users/isaacs/dev-src/js/npm/node_modules/express
+/Users/isaacs/dev-src/js/npm/node_modules/express/node_modules/connect
+/Users/isaacs/dev-src/js/npm/node_modules/express/node_modules/connect/node_modules/qs
+/Users/isaacs/dev-src/js/npm/node_modules/express/node_modules/connect/node_modules/mime
+/Users/isaacs/dev-src/js/npm/node_modules/express/node_modules/mime
+/Users/isaacs/dev-src/js/npm/node_modules/express/node_modules/qs
+
+ +

Since you sometimes want a bigger view, I added the --long option (shorthand: -l) to spit out more info:

+ +
$ npm ls -l
+npm@1.0.0 
+│ /Users/isaacs/dev-src/js/npm
+│ A package manager for node
+│ git://github.com/isaacs/npm.git
+│ https://npmjs.com/
+├── semver@1.0.1 
+│   ./node_modules/semver
+│   The semantic version parser used by npm.
+│   git://github.com/isaacs/node-semver.git
+├─┬ ronn@0.3.5 
+│ │ ./node_modules/ronn
+│ │ markdown to roff and html converter
+│ └── opts@1.2.1 
+│     ./node_modules/ronn/node_modules/opts
+│     Command line argument parser written in the style of commonjs. To be used with node.js
+└─┬ express@2.0.0rc3 extraneous 
+  │ ./node_modules/express
+  │ Sinatra inspired web development framework
+  ├─┬ connect@1.1.0 
+  │ │ ./node_modules/express/node_modules/connect
+  │ │ High performance middleware framework
+  │ │ git://github.com/senchalabs/connect.git
+  │ ├── qs@0.0.7 
+  │ │   ./node_modules/express/node_modules/connect/node_modules/qs
+  │ │   querystring parser
+  │ └── mime@1.2.1 
+  │     ./node_modules/express/node_modules/connect/node_modules/mime
+  │     A comprehensive library for mime-type mapping
+  ├── mime@1.2.1 
+  │   ./node_modules/express/node_modules/mime
+  │   A comprehensive library for mime-type mapping
+  └── qs@0.0.7 
+      ./node_modules/express/node_modules/qs
+      querystring parser
+
+$ npm ls -lp
+/Users/isaacs/dev-src/js/npm:npm@1.0.0::::
+/Users/isaacs/dev-src/js/npm/node_modules/semver:semver@1.0.1::::
+/Users/isaacs/dev-src/js/npm/node_modules/ronn:ronn@0.3.5::::
+/Users/isaacs/dev-src/js/npm/node_modules/ronn/node_modules/opts:opts@1.2.1::::
+/Users/isaacs/dev-src/js/npm/node_modules/express:express@2.0.0rc3:EXTRANEOUS:::
+/Users/isaacs/dev-src/js/npm/node_modules/express/node_modules/connect:connect@1.1.0::::
+/Users/isaacs/dev-src/js/npm/node_modules/express/node_modules/connect/node_modules/qs:qs@0.0.7::::
+/Users/isaacs/dev-src/js/npm/node_modules/express/node_modules/connect/node_modules/mime:mime@1.2.1::::
+/Users/isaacs/dev-src/js/npm/node_modules/express/node_modules/mime:mime@1.2.1::::
+/Users/isaacs/dev-src/js/npm/node_modules/express/node_modules/qs:qs@0.0.7::::
+
+ +

And, if you want to get at the globally-installed modules, you can use ls with the global flag:

+ +
$ npm ls -g
+/usr/local
+├─┬ A@1.2.3 -> /Users/isaacs/dev-src/js/A
+│ ├── B@1.2.3 -> /Users/isaacs/dev-src/js/B
+│ └─┬ npm@0.3.15 
+│   └── semver@1.0.1 
+├─┬ B@1.2.3 -> /Users/isaacs/dev-src/js/B
+│ └── A@1.2.3 -> /Users/isaacs/dev-src/js/A
+├── glob@2.0.5 
+├─┬ npm@1.0.0 -> /Users/isaacs/dev-src/js/npm
+│ ├── semver@1.0.1 
+│ └─┬ ronn@0.3.5 
+│   └── opts@1.2.1 
+└── supervisor@0.1.2 -> /Users/isaacs/dev-src/js/node-supervisor
+
+$ npm ls -gpl
+/usr/local:::::
+/usr/local/lib/node_modules/A:A@1.2.3::::/Users/isaacs/dev-src/js/A
+/usr/local/lib/node_modules/A/node_modules/npm:npm@0.3.15::::/Users/isaacs/dev-src/js/A/node_modules/npm
+/usr/local/lib/node_modules/A/node_modules/npm/node_modules/semver:semver@1.0.1::::/Users/isaacs/dev-src/js/A/node_modules/npm/node_modules/semver
+/usr/local/lib/node_modules/B:B@1.2.3::::/Users/isaacs/dev-src/js/B
+/usr/local/lib/node_modules/glob:glob@2.0.5::::
+/usr/local/lib/node_modules/npm:npm@1.0.0::::/Users/isaacs/dev-src/js/npm
+/usr/local/lib/node_modules/npm/node_modules/semver:semver@1.0.1::::/Users/isaacs/dev-src/js/npm/node_modules/semver
+/usr/local/lib/node_modules/npm/node_modules/ronn:ronn@0.3.5::::/Users/isaacs/dev-src/js/npm/node_modules/ronn
+/usr/local/lib/node_modules/npm/node_modules/ronn/node_modules/opts:opts@1.2.1::::/Users/isaacs/dev-src/js/npm/node_modules/ronn/node_modules/opts
+/usr/local/lib/node_modules/supervisor:supervisor@0.1.2::::/Users/isaacs/dev-src/js/node-supervisor
+
+ +

Those -> flags are indications that the package is link-installed, which will be covered in the next installment.

diff --git a/locale/uk/blog/npm/peer-dependencies.md b/locale/uk/blog/npm/peer-dependencies.md new file mode 100755 index 0000000000000..3158b7a810abb --- /dev/null +++ b/locale/uk/blog/npm/peer-dependencies.md @@ -0,0 +1,137 @@ +--- +category: npm +title: Peer Dependencies +date: 2013-02-08T00:00:00.000Z +author: Domenic Denicola +slug: peer-dependencies +layout: blog-post.hbs +--- + +Reposted from [Domenic's +blog](http://domenic.me/2013/02/08/peer-dependencies/) with +permission. Thanks! + +npm is awesome as a package manager. In particular, it handles sub-dependencies very well: if my package depends on +`request` version 2 and `some-other-library`, but `some-other-library` depends on `request` version 1, the resulting +dependency graph looks like: + +``` +├── request@2.12.0 +└─┬ some-other-library@1.2.3 + └── request@1.9.9 +``` + +This is, generally, great: now `some-other-library` has its own copy of `request` v1 that it can use, while not +interfering with my package's v2 copy. Everyone's code works! + +## The Problem: Plugins + +There's one use case where this falls down, however: *plugins*. A plugin package is meant to be used with another "host" +package, even though it does not always directly *use* the host package. There are many examples of this pattern in the +Node.js package ecosystem already: + +- Grunt [plugins](http://gruntjs.com/#plugins-all) +- Chai [plugins](http://chaijs.com/plugins) +- LevelUP [plugins](https://github.com/rvagg/node-levelup/wiki/Modules) +- Express [middleware](http://expressjs.com/api.html#middleware) +- Winston [transports](https://github.com/flatiron/winston/blob/master/docs/transports.md) + +Even if you're not familiar with any of those use cases, surely you recall "jQuery plugins" from back when you were a +client-side developer: little `