jsdoc-tokenizer


Tokenizer (Scanner) for JSDoc. This project operates only on Node.js Buffer and Uint8Array (no conversion to String required).

Requirements

Getting Started

This package is available in the Node Package Repository and can be easily installed with npm or yarn.

$ npm i jsdoc-tokenizer
# or
$ yarn add jsdoc-tokenizer

Usage example

const { scan, TOKENS } = require("jsdoc-tokenizer");

const it = scan(Buffer.from("/** @type {String} **/"));
for (const [token, value] of it) {
    if (value instanceof Uint8Array) {
        // The value is a sequence of char codes (Uint8Array).
        console.log(token, String.fromCharCode(...value));
    }
    else {
        // The value is a single char code (number).
        const tValue = typeof value === "number" ? String.fromCharCode(value) : value;
        console.log(token, tValue);
    }
}

API

scan(buf: Buffer): IterableIterator< [Symbol, Uint8Array | number] >

Scans (tokenizes) a JSDoc block. The scanner only takes a single block at a time (it is not built to detect where a block starts and ends). To extract JSDoc blocks as buffers, take a look at jsdoc-extractor.
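For example, a small helper (hypothetical, not part of the package's API) can collect the yielded [token, value] pairs as readable strings, reusing the same Uint8Array/number handling as the usage example above:

const { scan } = require("jsdoc-tokenizer");

// Hypothetical helper: decode a token value, which is either a Uint8Array of
// char codes or a single char code (number), into a plain string.
function valueToString(value) {
    return value instanceof Uint8Array
        ? String.fromCharCode(...value)
        : String.fromCharCode(value);
}

const pairs = [...scan(Buffer.from("/** @type {String} **/"))]
    .map(([token, value]) => [token, valueToString(value)]);
console.log(pairs);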

TOKENS

Available tokens are described by the following interface:

interface Tokens {
    KEYWORD: Symbol,
    IDENTIFIER: Symbol,
    SYMBOL: Symbol
}

These tokens are exported by the module.
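As a sketch (based on the interface above, not on documented examples), the first element of each yielded pair can be compared against these symbols to filter by token kind; whether a tag such as @type is reported as a KEYWORD is an assumption here:

const { scan, TOKENS } = require("jsdoc-tokenizer");

// Collect the textual value of every KEYWORD token from a block.
const keywords = [];
for (const [token, value] of scan(Buffer.from("/** @type {String} **/"))) {
    if (token === TOKENS.KEYWORD && value instanceof Uint8Array) {
        keywords.push(String.fromCharCode(...value));
    }
}
console.log(keywords);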

Caveats

  • There is room for improvement in supporting more characters for identifiers (some are not supported inside tags).
  • Example and description tags are closed by an @ that must be preceded by *\s (an asterisk followed by whitespace), as illustrated below.
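To illustrate the second caveat, in a block like the following (a sketch of a typical JSDoc layout), the @ of @returns is preceded by an asterisk and whitespace, so it closes the @example body:

/**
 * @example
 * const ret = sum(1, 2);
 * @returns {Number}
 */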

License

MIT
