A small generic scanner and parser written in TypeScript for Node and the web.
Note: Not production ready!
tomo is a small generic lexer and parser that started out of curiosity about how lexers and parsers work. Beyond that curiosity, a discussion arose on a Mr. Doc issue about creating a parser to replace Mr. Doc's current core (Dox), and tomo is the result of that discussion. If it succeeds, Mr. Doc will be able to generate documentation for any language*. Of course, tomo can also be used for other things, such as a text editor.
* It will depend on the parser.
```sh
npm i --save tomo
```
tomo contains three main classes and one module that make up the lexer and parser combo: the Source, Scanner, and Parser classes, and the Token module.
The Source class initializes a new source object, which provides the essential methods for the Scanner class
to begin the tokenization process. To tokenize, one passes a callback function to the scanner
that tokenizes the source. Once the tokenization process has finished, the tokens are wrapped
in a token stream; the TokenStream class (in the Token module) provides a few helper methods for accessing the tokens.
The Token class (in the Token module) provides the essentials to describe the scanned characters.
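To make the flow above concrete, here is a minimal, self-contained sketch of how a Source, a callback-driven Scanner, and a TokenStream could fit together. The class and method names mirror the descriptions above but are stand-ins; tomo's actual API may differ.

```typescript
// Hypothetical sketch (not tomo's real API): Source -> Scanner -> TokenStream.
enum TokenType { Word, Number, EOF }

class Token {
  constructor(public type: TokenType, public value: string) {}
}

class Source {
  constructor(readonly text: string) {}
}

class TokenStream {
  private index = 0;
  constructor(private tokens: Token[]) {}
  peek(): Token { return this.tokens[this.index]; }
  next(): Token { return this.tokens[this.index++]; }
  toArray(): Token[] { return this.tokens.slice(); }
}

class Scanner {
  pos = 0;
  constructor(readonly source: Source) {}
  // The caller supplies a callback that turns the next run of characters
  // into a Token (or null to skip it, e.g. whitespace).
  scan(tokenize: (s: Scanner) => Token | null): TokenStream {
    const tokens: Token[] = [];
    while (this.pos < this.source.text.length) {
      const tok = tokenize(this);
      if (tok !== null) tokens.push(tok);
    }
    tokens.push(new Token(TokenType.EOF, ""));
    return new TokenStream(tokens);
  }
}

// Example callback: scan words and numbers, skipping whitespace.
const stream = new Scanner(new Source("foo 42")).scan(s => {
  const text = s.source.text;
  const ch = text[s.pos];
  if (/\s/.test(ch)) { s.pos++; return null; }
  const isNum = /[0-9]/.test(ch);
  const re = isNum ? /[0-9]/ : /[A-Za-z]/;
  let value = "";
  while (s.pos < text.length && re.test(text[s.pos])) value += text[s.pos++];
  return new Token(isNum ? TokenType.Number : TokenType.Word, value);
});

console.log(stream.toArray().map(t => t.value)); // ["foo", "42", ""]
```

The key idea is that the scanner owns the position bookkeeping while the callback owns the language-specific rules, which is what makes the combination generic.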
Token.ts exports two members: the TokenType enum and the Token class.
The Parser class (help needed!) should parse the tokens and return an AST.
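Since the Parser is still open, here is one hypothetical shape it could take: a small recursive-descent parser that turns a flat token list into an AST. The token and node types below are illustrative only, not tomo's.

```typescript
// Hypothetical sketch of a parser stage: tokens in, AST out.
type Tok = { type: "num" | "op"; value: string };
type AstNode =
  | { kind: "num"; value: number }
  | { kind: "binary"; op: string; left: AstNode; right: AstNode };

function parse(tokens: Tok[]): AstNode {
  let pos = 0;
  // expr := term (("+" | "-") term)*
  function expr(): AstNode {
    let left = term();
    while (pos < tokens.length && tokens[pos].type === "op") {
      const op = tokens[pos++].value;
      left = { kind: "binary", op, left, right: term() };
    }
    return left;
  }
  // term := number
  function term(): AstNode {
    return { kind: "num", value: Number(tokens[pos++].value) };
  }
  return expr();
}

// "1 + 2 - 3" as tokens -> a left-associative binary AST
const ast = parse([
  { type: "num", value: "1" },
  { type: "op", value: "+" },
  { type: "num", value: "2" },
  { type: "op", value: "-" },
  { type: "num", value: "3" },
]);
console.log(JSON.stringify(ast));
```

Because the loop folds each operator into the left side, `1 + 2 - 3` parses as `(1 + 2) - 3`, i.e. the root node's operator is `-`.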
As the description says, tomo can be used in the web browser. The library is bundled using browserify, and the classes have no external dependencies (npm modules) other than the other tomo classes. You may simply add the source from
dist/ to your HTML file and use it as you normally would. Note that the source is not minified at the moment.
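For example, loading the bundle would typically look something like this (the bundle filename and global name are assumptions, so check dist/ for the actual ones):

```html
<!-- Hypothetical filename and global; browserify bundles usually expose one -->
<script src="dist/tomo.js"></script>
<script>
  var source = new tomo.Source("foo 42");
</script>
```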
See the example on RawGit.
```sh
# Install modules
npm i

# Run the example file
npm start
```
Documentation can be found at the GitHub Page
Contributions are gladly accepted. tomo uses TypeScript and the source files
are located under the
To build the files, run