diff --git a/.github/workflows/dotnet.yml b/.github/workflows/dotnet.yml index a4a122f..619e56a 100644 --- a/.github/workflows/dotnet.yml +++ b/.github/workflows/dotnet.yml @@ -19,7 +19,7 @@ jobs: - name: Setup .NET uses: actions/setup-dotnet@v4 with: - dotnet-version: 9.x.x + dotnet-version: 10.x.x - name: Restore dependencies run: dotnet restore - name: Build diff --git a/Example/App.razor b/Example/App.razor index 8e338d7..c9c82d4 100644 --- a/Example/App.razor +++ b/Example/App.razor @@ -1,14 +1,8 @@  - + - - Not found - -

Sorry, there's nothing at this address.

-
-
\ No newline at end of file diff --git a/Example/Example.csproj b/Example/Example.csproj index b29c715..2cf3829 100644 --- a/Example/Example.csproj +++ b/Example/Example.csproj @@ -1,26 +1,28 @@  - net9.0 + net10.0 enable enable fd23fba6-03b7-4047-8686-dc3d7ef8e733 + true + - - - - - - - - - - - - + + + + + + + + + + + + diff --git a/Example/Pages/NotFound.razor b/Example/Pages/NotFound.razor new file mode 100644 index 0000000..d9a44a4 --- /dev/null +++ b/Example/Pages/NotFound.razor @@ -0,0 +1,6 @@ +@page "/not-found" +@layout MainLayout + +

Not found

+

Sorry, there's nothing at this address.

+ diff --git a/Example/wwwroot/index.html b/Example/wwwroot/index.html index 8397887..3d9d161 100644 --- a/Example/wwwroot/index.html +++ b/Example/wwwroot/index.html @@ -6,11 +6,14 @@ Example + + + @@ -21,7 +24,7 @@ Reload 🗙 - + diff --git a/PocketBase/CHANGELOG.md b/PocketBase/CHANGELOG.md index 1c6074c..0737fd8 100644 --- a/PocketBase/CHANGELOG.md +++ b/PocketBase/CHANGELOG.md @@ -1,3 +1,49 @@ +## v0.31.0 + +- Visualize presentable multiple `relation` fields ([#7260](https://github.com/pocketbase/pocketbase/issues/7260)). + +- Support Ed25519 in the optional OIDC `id_token` signature validation ([#7252](https://github.com/pocketbase/pocketbase/issues/7252); thanks @shynome). + +- Added `ApiScenario.DisableTestAppCleanup` optional field to skip the auto test app cleanup and leave it up to the developers to do the cleanup manually ([#7267](https://github.com/pocketbase/pocketbase/discussions/7267)). + +- Added `FileDownloadRequestEvent.ThumbError` field that is populated in case of a thumb generation failure (e.g. unsupported format, timing out, etc.), allowing developers to reject the thumb fallback and/or supply their own custom thumb generation ([#7268](https://github.com/pocketbase/pocketbase/discussions/7268)). + +- ⚠️ Disallow client-side filtering and sorting of relations where the collection of the last targeted relation field has superusers-only List/Search API rule to further minimize the risk of eventual side-channel attack. + _This should be a non-breaking change for most users, but if you want the old behavior, please open a new Q&A discussion with details about your use case to evaluate making it configurable._ + _Note also that as mentioned in the "Security and performance" section of [#4417](https://github.com/pocketbase/pocketbase/discussions/4417) and [#5863](https://github.com/pocketbase/pocketbase/discussions/5863), the easiest and recommended solution to protect security sensitive fields (tokens, codes, passwords, etc.) is to mark them as "Hidden" (aka. make them non-API filterable)._ + +- Regenerated JSVM types and updated npm and Go deps. + + +## v0.30.4 + +- Fixed `json` field CSS regression introduced with the overflow workaround in v0.30.3 ([#7259](https://github.com/pocketbase/pocketbase/issues/7259)). + + +## v0.30.3 + +- Fixed legacy identitity field priority check when a username is a valid email address ([#7256](https://github.com/pocketbase/pocketbase/issues/7256)). + +- Workaround autocomplete overflow issue with Firefox 144 ([#7223](https://github.com/pocketbase/pocketbase/issues/7223)). + +- Updated `modernc.org/sqlite` to 1.39.1 (SQLite 3.50.4). + + +## v0.30.2 + +- Bumped min Go GitHub action version to 1.24.8 since it comes with some [minor security fixes](https://github.com/golang/go/issues?q=milestone%3AGo1.24.8+label%3ACherryPickApproved). + + +## v0.30.1 + +- ⚠️ Excluded the `lost+found` directory from the backups ([#7208](https://github.com/pocketbase/pocketbase/pull/7208); thanks @lbndev). + _If for some reason you want to keep it, you can restore it by editing the `e.Exclude` list of the `OnBackupCreate` and `OnBackupRestore` hooks._ + +- Minor tests improvements (disabled initial superuser creation for the test app to avoid cluttering the std output, added more tests for the `s3.Uploader.MaxConcurrency`, etc.). + +- Updated `modernc.org/sqlite` and other Go dependencies. + + ## v0.30.0 - Eagerly escape the S3 request path following the same rules as in the S3 signing header ([#7153](https://github.com/pocketbase/pocketbase/issues/7153)). 
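The v0.31.0 `FileDownloadRequestEvent.ThumbError` addition is also usable from `pb_hooks`. A minimal JSVM sketch (the `onFileDownloadRequest` hook and the `BadRequestError` class are the standard JSVM bindings; the lowerCamelCase `thumbError` property name and the file name are assumptions for illustration):

```js
// pb_hooks/thumb_errors.pb.js (hypothetical file name)
onFileDownloadRequest((e) => {
    // thumbError is populated only when a requested thumb failed to generate
    // (unsupported format, timeout, etc.)
    if (e.thumbError) {
        // reject the default fallback to the original file
        throw new BadRequestError("Thumb generation failed.");
    }

    e.next();
});
```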
diff --git a/PocketBase/pb_data/auxiliary.db b/PocketBase/pb_data/auxiliary.db index 9cb795c..7368bef 100644 Binary files a/PocketBase/pb_data/auxiliary.db and b/PocketBase/pb_data/auxiliary.db differ diff --git a/PocketBase/pb_data/auxiliary.db-shm b/PocketBase/pb_data/auxiliary.db-shm new file mode 100644 index 0000000..f8f89c2 Binary files /dev/null and b/PocketBase/pb_data/auxiliary.db-shm differ diff --git a/PocketBase/pb_data/auxiliary.db-wal b/PocketBase/pb_data/auxiliary.db-wal new file mode 100644 index 0000000..f33834a Binary files /dev/null and b/PocketBase/pb_data/auxiliary.db-wal differ diff --git a/PocketBase/pb_data/data.db b/PocketBase/pb_data/data.db index 2db2d01..7794700 100644 Binary files a/PocketBase/pb_data/data.db and b/PocketBase/pb_data/data.db differ diff --git a/PocketBase/pb_data/data.db-shm b/PocketBase/pb_data/data.db-shm new file mode 100644 index 0000000..fe9ac28 Binary files /dev/null and b/PocketBase/pb_data/data.db-shm differ diff --git a/PocketBase/pb_data/data.db-wal b/PocketBase/pb_data/data.db-wal new file mode 100644 index 0000000..e69de29 diff --git a/PocketBase/pb_data/types.d.ts b/PocketBase/pb_data/types.d.ts index 3032b9b..f49ceab 100644 --- a/PocketBase/pb_data/types.d.ts +++ b/PocketBase/pb_data/types.d.ts @@ -1,4 +1,4 @@ -// 1750492181 +// 1757184118 // GENERATED CODE - DO NOT MODIFY BY HAND // ------------------------------------------------------------------- @@ -141,8 +141,6 @@ declare var $app: PocketBase * ).render({"name": "John"}) * ``` * - * _Note that this method is available only in pb_hooks context._ - * * @namespace * @group PocketBase */ @@ -243,7 +241,9 @@ declare function arrayOf(model: T): Array; /** * DynamicModel creates a new dynamic model with fields from the provided data shape. * - * Note that in order to use 0 as double/float initialization number you have to use negative zero (`-0`). + * Caveats: + * - In order to use 0 as double/float initialization number you have to negate it (`-0`). + * - You need to use lowerCamelCase when accessing the model fields (e.g. `model.roles` and not `model.Roles`). 
* * Example: * @@ -253,7 +253,7 @@ declare function arrayOf(model: T): Array; * age: 0, // int64 * totalSpent: -0, // float64 * active: false, - * roles: [], + * Roles: [], // maps to "Roles" in the DB/JSON but the prop would be accessible via "model.roles" * meta: {} * }) * ``` @@ -911,21 +911,23 @@ declare namespace $os { */ export let args: Array - export let exit: os.exit - export let getenv: os.getenv - export let dirFS: os.dirFS - export let readFile: os.readFile - export let writeFile: os.writeFile - export let stat: os.stat - export let readDir: os.readDir - export let tempDir: os.tempDir - export let truncate: os.truncate - export let getwd: os.getwd - export let mkdir: os.mkdir - export let mkdirAll: os.mkdirAll - export let rename: os.rename - export let remove: os.remove - export let removeAll: os.removeAll + export let exit: os.exit + export let getenv: os.getenv + export let dirFS: os.dirFS + export let readFile: os.readFile + export let writeFile: os.writeFile + export let stat: os.stat + export let readDir: os.readDir + export let tempDir: os.tempDir + export let truncate: os.truncate + export let getwd: os.getwd + export let mkdir: os.mkdir + export let mkdirAll: os.mkdirAll + export let rename: os.rename + export let remove: os.remove + export let removeAll: os.removeAll + export let openRoot: os.openRoot + export let openInRoot: os.openInRoot } // ------------------------------------------------------------------- @@ -1375,6 +1377,9 @@ namespace os { * * Symbolic links in dir are followed. * + * New files added to fsys (including if dir is a subdirectory of fsys) + * while CopyFS is running are not guaranteed to be copied. + * * Copying stops at and returns the first error encountered. */ (dir: string, fsys: fs.FS): void @@ -1802,8 +1807,8 @@ namespace os { * than ReadFrom. This is used to permit ReadFrom to call io.Copy * without leading to a recursive call to ReadFrom. */ - type _slvUKNx = noReadFrom&File - interface fileWithoutReadFrom extends _slvUKNx { + type _sMfsKhK = noReadFrom&File + interface fileWithoutReadFrom extends _sMfsKhK { } interface File { /** @@ -1847,8 +1852,8 @@ namespace os { * than WriteTo. This is used to permit WriteTo to call io.Copy * without leading to a recursive call to WriteTo. */ - type _sGqrSOp = noWriteTo&File - interface fileWithoutWriteTo extends _sGqrSOp { + type _swCDegm = noWriteTo&File + interface fileWithoutWriteTo extends _swCDegm { } interface File { /** @@ -1897,6 +1902,7 @@ namespace os { * it is truncated. If the file does not exist, it is created with mode 0o666 * (before umask). If successful, methods on the returned File can * be used for I/O; the associated file descriptor has mode O_RDWR. + * The directory containing the file must already exist. * If there is an error, it will be of type *PathError. */ (name: string): (File) @@ -1906,7 +1912,8 @@ namespace os { * OpenFile is the generalized open call; most users will use Open * or Create instead. It opens the named file with specified flag * (O_RDONLY etc.). If the file does not exist, and the O_CREATE flag - * is passed, it is created with mode perm (before umask). If successful, + * is passed, it is created with mode perm (before umask); + * the containing directory must exist. If successful, * methods on the returned File can be used for I/O. * If there is an error, it will be of type *PathError. */ @@ -1916,6 +1923,7 @@ namespace os { /** * Rename renames (moves) oldpath to newpath. * If newpath already exists and is not a directory, Rename replaces it. 
+ * If newpath already exists and is a directory, Rename returns an error. * OS-specific restrictions may apply when oldpath and newpath are in different directories. * Even within the same directory, on non-Unix platforms Rename is not an atomic operation. * If there is an error, it will be of type *LinkError. @@ -1959,8 +1967,8 @@ namespace os { * On Windows, it returns %LocalAppData%. * On Plan 9, it returns $home/lib/cache. * - * If the location cannot be determined (for example, $HOME is not defined), - * then it will return an error. + * If the location cannot be determined (for example, $HOME is not defined) or + * the path in $XDG_CACHE_HOME is relative, then it will return an error. */ (): string } @@ -1977,8 +1985,8 @@ namespace os { * On Windows, it returns %AppData%. * On Plan 9, it returns $home/lib. * - * If the location cannot be determined (for example, $HOME is not defined), - * then it will return an error. + * If the location cannot be determined (for example, $HOME is not defined) or + * the path in $XDG_CONFIG_HOME is relative, then it will return an error. */ (): string } @@ -2094,6 +2102,8 @@ namespace os { * a general substitute for a chroot-style security mechanism when the directory tree * contains arbitrary content. * + * Use [Root.FS] to obtain a fs.FS that prevents escapes from the tree via symbolic links. + * * The directory dir must not be "". * * The result implements [io/fs.StatFS], [io/fs.ReadFileFS] and @@ -2314,10 +2324,14 @@ namespace os { } interface getwd { /** - * Getwd returns a rooted path name corresponding to the + * Getwd returns an absolute path name corresponding to the * current directory. If the current directory can be * reached via multiple paths (due to symbolic links), * Getwd may return any one of them. + * + * On Unix platforms, if the environment variable PWD + * provides an absolute name, and it is a name of the + * current directory, it is returned. */ (): string } @@ -2421,6 +2435,183 @@ namespace os { interface rawConn { write(f: (_arg0: number) => boolean): void } + interface openInRoot { + /** + * OpenInRoot opens the file name in the directory dir. + * It is equivalent to OpenRoot(dir) followed by opening the file in the root. + * + * OpenInRoot returns an error if any component of the name + * references a location outside of dir. + * + * See [Root] for details and limitations. + */ + (dir: string, name: string): (File) + } + /** + * Root may be used to only access files within a single directory tree. + * + * Methods on Root can only access files and directories beneath a root directory. + * If any component of a file name passed to a method of Root references a location + * outside the root, the method returns an error. + * File names may reference the directory itself (.). + * + * Methods on Root will follow symbolic links, but symbolic links may not + * reference a location outside the root. + * Symbolic links must not be absolute. + * + * Methods on Root do not prohibit traversal of filesystem boundaries, + * Linux bind mounts, /proc special files, or access to Unix device files. + * + * Methods on Root are safe to be used from multiple goroutines simultaneously. + * + * On most platforms, creating a Root opens a file descriptor or handle referencing + * the directory. If the directory is moved, methods on Root reference the original + * directory in its new location. 
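+ * 
+ * A minimal usage sketch through the JSVM `$os` binding (method names taken
+ * from the `openRoot`/`Root` declarations in this file; illustrative only):
+ * 
+ * ```js
+ * const root = $os.openRoot("pb_data") // os.Root confined to the pb_data dir
+ * const info = root.stat("data.db")    // paths may not escape the root
+ * root.close()
+ * ```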
+ * + * Root's behavior differs on some platforms: + * + * ``` + * - When GOOS=windows, file names may not reference Windows reserved device names + * such as NUL and COM1. + * - When GOOS=js, Root is vulnerable to TOCTOU (time-of-check-time-of-use) + * attacks in symlink validation, and cannot ensure that operations will not + * escape the root. + * - When GOOS=plan9 or GOOS=js, Root does not track directories across renames. + * On these platforms, a Root references a directory name, not a file descriptor. + * ``` + */ + interface Root { + } + interface openRoot { + /** + * OpenRoot opens the named directory. + * If there is an error, it will be of type *PathError. + */ + (name: string): (Root) + } + interface Root { + /** + * Name returns the name of the directory presented to OpenRoot. + * + * It is safe to call Name after [Close]. + */ + name(): string + } + interface Root { + /** + * Close closes the Root. + * After Close is called, methods on Root return errors. + */ + close(): void + } + interface Root { + /** + * Open opens the named file in the root for reading. + * See [Open] for more details. + */ + open(name: string): (File) + } + interface Root { + /** + * Create creates or truncates the named file in the root. + * See [Create] for more details. + */ + create(name: string): (File) + } + interface Root { + /** + * OpenFile opens the named file in the root. + * See [OpenFile] for more details. + * + * If perm contains bits other than the nine least-significant bits (0o777), + * OpenFile returns an error. + */ + openFile(name: string, flag: number, perm: FileMode): (File) + } + interface Root { + /** + * OpenRoot opens the named directory in the root. + * If there is an error, it will be of type *PathError. + */ + openRoot(name: string): (Root) + } + interface Root { + /** + * Mkdir creates a new directory in the root + * with the specified name and permission bits (before umask). + * See [Mkdir] for more details. + * + * If perm contains bits other than the nine least-significant bits (0o777), + * OpenFile returns an error. + */ + mkdir(name: string, perm: FileMode): void + } + interface Root { + /** + * Remove removes the named file or (empty) directory in the root. + * See [Remove] for more details. + */ + remove(name: string): void + } + interface Root { + /** + * Stat returns a [FileInfo] describing the named file in the root. + * See [Stat] for more details. + */ + stat(name: string): FileInfo + } + interface Root { + /** + * Lstat returns a [FileInfo] describing the named file in the root. + * If the file is a symbolic link, the returned FileInfo + * describes the symbolic link. + * See [Lstat] for more details. + */ + lstat(name: string): FileInfo + } + interface Root { + /** + * FS returns a file system (an fs.FS) for the tree of files in the root. + * + * The result implements [io/fs.StatFS], [io/fs.ReadFileFS] and + * [io/fs.ReadDirFS]. + */ + fs(): fs.FS + } + interface rootFS extends Root{} + interface rootFS { + open(name: string): fs.File + } + interface rootFS { + readDir(name: string): Array + } + interface rootFS { + readFile(name: string): string|Array + } + interface rootFS { + stat(name: string): FileInfo + } + /** + * root implementation for platforms with a function to open a file + * relative to a directory. + */ + interface root { + } + interface root { + close(): void + } + interface root { + name(): string + } + /** + * errSymlink reports that a file being operated on is actually a symlink, + * and the target of that symlink. 
+ */ + interface errSymlink extends String{} + interface errSymlink { + error(): string + } + interface sysfdType extends Number{} interface stat { /** * Stat returns a [FileInfo] describing the named file. @@ -2492,8 +2683,8 @@ namespace os { * * The methods of File are safe for concurrent use. */ - type _sgluwGO = file - interface File extends _sgluwGO { + type _sPrVXpP = file + interface File extends _sPrVXpP { } /** * A FileInfo describes a file and is returned by [Stat] and [Lstat]. @@ -2885,118 +3076,366 @@ namespace filepath { } } -/** - * Package validation provides configurable and extensible rules for validating data of various types. - */ -namespace ozzo_validation { - /** - * Error interface represents an validation error - */ - interface Error { - [key:string]: any; - error(): string - code(): string - message(): string - setMessage(_arg0: string): Error - params(): _TygojaDict - setParams(_arg0: _TygojaDict): Error - } -} - -/** - * Package dbx provides a set of DB-agnostic and easy-to-use query building methods for relational databases. - */ -namespace dbx { - /** - * Builder supports building SQL statements in a DB-agnostic way. - * Builder mainly provides two sets of query building methods: those building SELECT statements - * and those manipulating DB data or schema (e.g. INSERT statements, CREATE TABLE statements). - */ - interface Builder { - [key:string]: any; +namespace security { + interface s256Challenge { /** - * NewQuery creates a new Query object with the given SQL statement. - * The SQL statement may contain parameter placeholders which can be bound with actual parameter - * values before the statement is executed. + * S256Challenge creates base64 encoded sha256 challenge string derived from code. + * The padding of the result base64 string is stripped per [RFC 7636]. + * + * [RFC 7636]: https://datatracker.ietf.org/doc/html/rfc7636#section-4.2 */ - newQuery(_arg0: string): (Query) + (code: string): string + } + interface md5 { /** - * Select returns a new SelectQuery object that can be used to build a SELECT statement. - * The parameters to this method should be the list column names to be selected. - * A column name may have an optional alias name. For example, Select("id", "my_name AS name"). + * MD5 creates md5 hash from the provided plain text. */ - select(..._arg0: string[]): (SelectQuery) + (text: string): string + } + interface sha256 { /** - * ModelQuery returns a new ModelQuery object that can be used to perform model insertion, update, and deletion. - * The parameter to this method should be a pointer to the model struct that needs to be inserted, updated, or deleted. + * SHA256 creates sha256 hash as defined in FIPS 180-4 from the provided text. */ - model(_arg0: { - }): (ModelQuery) + (text: string): string + } + interface sha512 { /** - * GeneratePlaceholder generates an anonymous parameter placeholder with the given parameter ID. + * SHA512 creates sha512 hash as defined in FIPS 180-4 from the provided text. */ - generatePlaceholder(_arg0: number): string + (text: string): string + } + interface hs256 { /** - * Quote quotes a string so that it can be embedded in a SQL statement as a string value. + * HS256 creates a HMAC hash with sha256 digest algorithm. */ - quote(_arg0: string): string + (text: string, secret: string): string + } + interface hs512 { /** - * QuoteSimpleTableName quotes a simple table name. - * A simple table name does not contain any schema prefix. + * HS512 creates a HMAC hash with sha512 digest algorithm. 
*/ - quoteSimpleTableName(_arg0: string): string + (text: string, secret: string): string + } + interface equal { /** - * QuoteSimpleColumnName quotes a simple column name. - * A simple column name does not contain any table prefix. + * Equal compares two hash strings for equality without leaking timing information. */ - quoteSimpleColumnName(_arg0: string): string + (hash1: string, hash2: string): boolean + } + // @ts-ignore + import crand = rand + interface encrypt { /** - * QueryBuilder returns the query builder supporting the current DB. + * Encrypt encrypts "data" with the specified "key" (must be valid 32 char AES key). + * + * This method uses AES-256-GCM block cypher mode. */ - queryBuilder(): QueryBuilder + (data: string|Array, key: string): string + } + interface decrypt { /** - * Insert creates a Query that represents an INSERT SQL statement. - * The keys of cols are the column names, while the values of cols are the corresponding column - * values to be inserted. + * Decrypt decrypts encrypted text with key (must be valid 32 chars AES key). + * + * This method uses AES-256-GCM block cypher mode. */ - insert(table: string, cols: Params): (Query) + (cipherText: string, key: string): string|Array + } + interface parseUnverifiedJWT { /** - * Upsert creates a Query that represents an UPSERT SQL statement. - * Upsert inserts a row into the table if the primary key or unique index is not found. - * Otherwise it will update the row with the new values. - * The keys of cols are the column names, while the values of cols are the corresponding column - * values to be inserted. + * ParseUnverifiedJWT parses JWT and returns its claims + * but DOES NOT verify the signature. + * + * It verifies only the exp, iat and nbf claims. */ - upsert(table: string, cols: Params, ...constraints: string[]): (Query) + (token: string): jwt.MapClaims + } + interface parseJWT { /** - * Update creates a Query that represents an UPDATE SQL statement. - * The keys of cols are the column names, while the values of cols are the corresponding new column - * values. If the "where" expression is nil, the UPDATE SQL statement will have no WHERE clause - * (be careful in this case as the SQL statement will update ALL rows in the table). + * ParseJWT verifies and parses JWT and returns its claims. */ - update(table: string, cols: Params, where: Expression): (Query) + (token: string, verificationKey: string): jwt.MapClaims + } + interface newJWT { /** - * Delete creates a Query that represents a DELETE SQL statement. - * If the "where" expression is nil, the DELETE SQL statement will have no WHERE clause - * (be careful in this case as the SQL statement will delete ALL rows in the table). + * NewJWT generates and returns new HS256 signed JWT. */ - delete(table: string, where: Expression): (Query) + (payload: jwt.MapClaims, signingKey: string, duration: time.Duration): string + } + // @ts-ignore + import cryptoRand = rand + // @ts-ignore + import mathRand = rand + interface randomString { /** - * CreateTable creates a Query that represents a CREATE TABLE SQL statement. - * The keys of cols are the column names, while the values of cols are the corresponding column types. - * The optional "options" parameters will be appended to the generated SQL statement. + * RandomString generates a cryptographically random string with the specified length. + * + * The generated string matches [A-Za-z0-9]+ and it's transparent to URL-encoding. 
*/ - createTable(table: string, cols: _TygojaDict, ...options: string[]): (Query) + (length: number): string + } + interface randomStringWithAlphabet { /** - * RenameTable creates a Query that can be used to rename a table. + * RandomStringWithAlphabet generates a cryptographically random string + * with the specified length and characters set. + * + * It panics if for some reason rand.Int returns a non-nil error. */ - renameTable(oldName: string, newName: string): (Query) + (length: number, alphabet: string): string + } + interface pseudorandomString { /** - * DropTable creates a Query that can be used to drop a table. + * PseudorandomString generates a pseudorandom string with the specified length. + * + * The generated string matches [A-Za-z0-9]+ and it's transparent to URL-encoding. + * + * For a cryptographically random string (but a little bit slower) use RandomString instead. */ - dropTable(table: string): (Query) + (length: number): string + } + interface pseudorandomStringWithAlphabet { /** - * TruncateTable creates a Query that can be used to truncate a table. + * PseudorandomStringWithAlphabet generates a pseudorandom string + * with the specified length and characters set. + * + * For a cryptographically random (but a little bit slower) use RandomStringWithAlphabet instead. + */ + (length: number, alphabet: string): string + } + interface randomStringByRegex { + /** + * RandomStringByRegex generates a random string matching the regex pattern. + * If optFlags is not set, fallbacks to [syntax.Perl]. + * + * NB! While the source of the randomness comes from [crypto/rand] this method + * is not recommended to be used on its own in critical secure contexts because + * the generated length could vary too much on the used pattern and may not be + * as secure as simply calling [security.RandomString]. + * If you still insist on using it for such purposes, consider at least + * a large enough minimum length for the generated string, e.g. `[a-z0-9]{30}`. + * + * This function is inspired by github.com/pipe01/revregexp, github.com/lucasjones/reggen and other similar packages. + */ + (pattern: string, ...optFlags: syntax.Flags[]): string + } +} + +/** + * Package template is a thin wrapper around the standard html/template + * and text/template packages that implements a convenient registry to + * load and cache templates on the fly concurrently. + * + * It was created to assist the JSVM plugin HTML rendering, but could be used in other Go code. + * + * Example: + * + * ``` + * registry := template.NewRegistry() + * + * html1, err := registry.LoadFiles( + * // the files set wil be parsed only once and then cached + * "layout.html", + * "content.html", + * ).Render(map[string]any{"name": "John"}) + * + * html2, err := registry.LoadFiles( + * // reuse the already parsed and cached files set + * "layout.html", + * "content.html", + * ).Render(map[string]any{"name": "Jane"}) + * ``` + */ +namespace template { + interface newRegistry { + /** + * NewRegistry creates and initializes a new templates registry with + * some defaults (eg. global "raw" template function for unescaped HTML). + * + * Use the Registry.Load* methods to load templates into the registry. + */ + (): (Registry) + } + /** + * Registry defines a templates registry that is safe to be used by multiple goroutines. + * + * Use the Registry.Load* methods to load templates into the registry. + */ + interface Registry { + } + interface Registry { + /** + * AddFuncs registers new global template functions. 
+ * + * The key of each map entry is the function name that will be used in the templates. + * If a function with the map entry name already exists it will be replaced with the new one. + * + * The value of each map entry is a function that must have either a + * single return value, or two return values of which the second has type error. + * + * Example: + * + * ``` + * r.AddFuncs(map[string]any{ + * "toUpper": func(str string) string { + * return strings.ToUppser(str) + * }, + * ... + * }) + * ``` + */ + addFuncs(funcs: _TygojaDict): (Registry) + } + interface Registry { + /** + * LoadFiles caches (if not already) the specified filenames set as a + * single template and returns a ready to use Renderer instance. + * + * There must be at least 1 filename specified. + */ + loadFiles(...filenames: string[]): (Renderer) + } + interface Registry { + /** + * LoadString caches (if not already) the specified inline string as a + * single template and returns a ready to use Renderer instance. + */ + loadString(text: string): (Renderer) + } + interface Registry { + /** + * LoadFS caches (if not already) the specified fs and globPatterns + * pair as single template and returns a ready to use Renderer instance. + * + * There must be at least 1 file matching the provided globPattern(s) + * (note that most file names serves as glob patterns matching themselves). + */ + loadFS(fsys: fs.FS, ...globPatterns: string[]): (Renderer) + } + /** + * Renderer defines a single parsed template. + */ + interface Renderer { + } + interface Renderer { + /** + * Render executes the template with the specified data as the dot object + * and returns the result as plain string. + */ + render(data: any): string + } +} + +/** + * Package validation provides configurable and extensible rules for validating data of various types. + */ +namespace ozzo_validation { + /** + * Error interface represents an validation error + */ + interface Error { + [key:string]: any; + error(): string + code(): string + message(): string + setMessage(_arg0: string): Error + params(): _TygojaDict + setParams(_arg0: _TygojaDict): Error + } +} + +/** + * Package dbx provides a set of DB-agnostic and easy-to-use query building methods for relational databases. + */ +namespace dbx { + /** + * Builder supports building SQL statements in a DB-agnostic way. + * Builder mainly provides two sets of query building methods: those building SELECT statements + * and those manipulating DB data or schema (e.g. INSERT statements, CREATE TABLE statements). + */ + interface Builder { + [key:string]: any; + /** + * NewQuery creates a new Query object with the given SQL statement. + * The SQL statement may contain parameter placeholders which can be bound with actual parameter + * values before the statement is executed. + */ + newQuery(_arg0: string): (Query) + /** + * Select returns a new SelectQuery object that can be used to build a SELECT statement. + * The parameters to this method should be the list column names to be selected. + * A column name may have an optional alias name. For example, Select("id", "my_name AS name"). + */ + select(..._arg0: string[]): (SelectQuery) + /** + * ModelQuery returns a new ModelQuery object that can be used to perform model insertion, update, and deletion. + * The parameter to this method should be a pointer to the model struct that needs to be inserted, updated, or deleted. + */ + model(_arg0: { + }): (ModelQuery) + /** + * GeneratePlaceholder generates an anonymous parameter placeholder with the given parameter ID. 
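+ * 
+ * In the JSVM the Builder is reachable through `$app.db()`; a sketch binding a
+ * named placeholder (illustrative only — the table, columns and bound value
+ * are made up):
+ * 
+ * ```js
+ * const rows = arrayOf(new DynamicModel({ id: "", email: "" }))
+ * $app.db()
+ *     .newQuery("SELECT id, email FROM users WHERE email != {:email}")
+ *     .bind({ email: "" })
+ *     .all(rows)
+ * ```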
+ */ + generatePlaceholder(_arg0: number): string + /** + * Quote quotes a string so that it can be embedded in a SQL statement as a string value. + */ + quote(_arg0: string): string + /** + * QuoteSimpleTableName quotes a simple table name. + * A simple table name does not contain any schema prefix. + */ + quoteSimpleTableName(_arg0: string): string + /** + * QuoteSimpleColumnName quotes a simple column name. + * A simple column name does not contain any table prefix. + */ + quoteSimpleColumnName(_arg0: string): string + /** + * QueryBuilder returns the query builder supporting the current DB. + */ + queryBuilder(): QueryBuilder + /** + * Insert creates a Query that represents an INSERT SQL statement. + * The keys of cols are the column names, while the values of cols are the corresponding column + * values to be inserted. + */ + insert(table: string, cols: Params): (Query) + /** + * Upsert creates a Query that represents an UPSERT SQL statement. + * Upsert inserts a row into the table if the primary key or unique index is not found. + * Otherwise it will update the row with the new values. + * The keys of cols are the column names, while the values of cols are the corresponding column + * values to be inserted. + */ + upsert(table: string, cols: Params, ...constraints: string[]): (Query) + /** + * Update creates a Query that represents an UPDATE SQL statement. + * The keys of cols are the column names, while the values of cols are the corresponding new column + * values. If the "where" expression is nil, the UPDATE SQL statement will have no WHERE clause + * (be careful in this case as the SQL statement will update ALL rows in the table). + */ + update(table: string, cols: Params, where: Expression): (Query) + /** + * Delete creates a Query that represents a DELETE SQL statement. + * If the "where" expression is nil, the DELETE SQL statement will have no WHERE clause + * (be careful in this case as the SQL statement will delete ALL rows in the table). + */ + delete(table: string, where: Expression): (Query) + /** + * CreateTable creates a Query that represents a CREATE TABLE SQL statement. + * The keys of cols are the column names, while the values of cols are the corresponding column types. + * The optional "options" parameters will be appended to the generated SQL statement. + */ + createTable(table: string, cols: _TygojaDict, ...options: string[]): (Query) + /** + * RenameTable creates a Query that can be used to rename a table. + */ + renameTable(oldName: string, newName: string): (Query) + /** + * DropTable creates a Query that can be used to drop a table. + */ + dropTable(table: string): (Query) + /** + * TruncateTable creates a Query that can be used to truncate a table. */ truncateTable(table: string): (Query) /** @@ -3239,14 +3678,14 @@ namespace dbx { /** * MssqlBuilder is the builder for SQL Server databases. */ - type _sMQOVvy = BaseBuilder - interface MssqlBuilder extends _sMQOVvy { + type _sOgBiBI = BaseBuilder + interface MssqlBuilder extends _sOgBiBI { } /** * MssqlQueryBuilder is the query builder for SQL Server databases. */ - type _sdzZflg = BaseQueryBuilder - interface MssqlQueryBuilder extends _sdzZflg { + type _sRHlmCE = BaseQueryBuilder + interface MssqlQueryBuilder extends _sRHlmCE { } interface newMssqlBuilder { /** @@ -3317,8 +3756,8 @@ namespace dbx { /** * MysqlBuilder is the builder for MySQL databases. 
*/ - type _siQJCaD = BaseBuilder - interface MysqlBuilder extends _siQJCaD { + type _syGNUZb = BaseBuilder + interface MysqlBuilder extends _syGNUZb { } interface newMysqlBuilder { /** @@ -3393,14 +3832,14 @@ namespace dbx { /** * OciBuilder is the builder for Oracle databases. */ - type _sQGrzVB = BaseBuilder - interface OciBuilder extends _sQGrzVB { + type _ssjJHcd = BaseBuilder + interface OciBuilder extends _ssjJHcd { } /** * OciQueryBuilder is the query builder for Oracle databases. */ - type _sINWAUj = BaseQueryBuilder - interface OciQueryBuilder extends _sINWAUj { + type _spYCyyo = BaseQueryBuilder + interface OciQueryBuilder extends _spYCyyo { } interface newOciBuilder { /** @@ -3463,8 +3902,8 @@ namespace dbx { /** * PgsqlBuilder is the builder for PostgreSQL databases. */ - type _sCvSmEV = BaseBuilder - interface PgsqlBuilder extends _sCvSmEV { + type _sECoPYn = BaseBuilder + interface PgsqlBuilder extends _sECoPYn { } interface newPgsqlBuilder { /** @@ -3531,8 +3970,8 @@ namespace dbx { /** * SqliteBuilder is the builder for SQLite databases. */ - type _sJSOFqA = BaseBuilder - interface SqliteBuilder extends _sJSOFqA { + type _smFOrtL = BaseBuilder + interface SqliteBuilder extends _smFOrtL { } interface newSqliteBuilder { /** @@ -3631,8 +4070,8 @@ namespace dbx { /** * StandardBuilder is the builder that is used by DB for an unknown driver. */ - type _sOzLvzF = BaseBuilder - interface StandardBuilder extends _sOzLvzF { + type _sJbPsvb = BaseBuilder + interface StandardBuilder extends _sJbPsvb { } interface newStandardBuilder { /** @@ -3698,8 +4137,8 @@ namespace dbx { * DB enhances sql.DB by providing a set of DB-agnostic query building methods. * DB allows easier query building and population of data into Go variables. */ - type _sElkCGm = Builder - interface DB extends _sElkCGm { + type _shaNpOq = Builder + interface DB extends _shaNpOq { /** * FieldMapper maps struct fields to DB columns. Defaults to DefaultFieldMapFunc. */ @@ -4503,8 +4942,8 @@ namespace dbx { * Rows enhances sql.Rows by providing additional data query methods. * Rows can be obtained by calling Query.Rows(). It is mainly used to populate data row by row. */ - type _suxYfvT = sql.Rows - interface Rows extends _suxYfvT { + type _sKlUpIl = sql.Rows + interface Rows extends _sKlUpIl { } interface Rows { /** @@ -4876,8 +5315,8 @@ namespace dbx { }): string } interface structInfo { } - type _sWCgpKn = structInfo - interface structValue extends _sWCgpKn { + type _sjTuviq = structInfo + interface structValue extends _sjTuviq { } interface fieldInfo { } @@ -4916,8 +5355,8 @@ namespace dbx { /** * Tx enhances sql.Tx with additional querying methods. */ - type _sEknhqZ = Builder - interface Tx extends _sEknhqZ { + type _sxeFDlS = Builder + interface Tx extends _sxeFDlS { } interface Tx { /** @@ -4933,149 +5372,6 @@ namespace dbx { } } -namespace security { - interface s256Challenge { - /** - * S256Challenge creates base64 encoded sha256 challenge string derived from code. - * The padding of the result base64 string is stripped per [RFC 7636]. - * - * [RFC 7636]: https://datatracker.ietf.org/doc/html/rfc7636#section-4.2 - */ - (code: string): string - } - interface md5 { - /** - * MD5 creates md5 hash from the provided plain text. - */ - (text: string): string - } - interface sha256 { - /** - * SHA256 creates sha256 hash as defined in FIPS 180-4 from the provided text. - */ - (text: string): string - } - interface sha512 { - /** - * SHA512 creates sha512 hash as defined in FIPS 180-4 from the provided text. 
- */ - (text: string): string - } - interface hs256 { - /** - * HS256 creates a HMAC hash with sha256 digest algorithm. - */ - (text: string, secret: string): string - } - interface hs512 { - /** - * HS512 creates a HMAC hash with sha512 digest algorithm. - */ - (text: string, secret: string): string - } - interface equal { - /** - * Equal compares two hash strings for equality without leaking timing information. - */ - (hash1: string, hash2: string): boolean - } - // @ts-ignore - import crand = rand - interface encrypt { - /** - * Encrypt encrypts "data" with the specified "key" (must be valid 32 char AES key). - * - * This method uses AES-256-GCM block cypher mode. - */ - (data: string|Array, key: string): string - } - interface decrypt { - /** - * Decrypt decrypts encrypted text with key (must be valid 32 chars AES key). - * - * This method uses AES-256-GCM block cypher mode. - */ - (cipherText: string, key: string): string|Array - } - interface parseUnverifiedJWT { - /** - * ParseUnverifiedJWT parses JWT and returns its claims - * but DOES NOT verify the signature. - * - * It verifies only the exp, iat and nbf claims. - */ - (token: string): jwt.MapClaims - } - interface parseJWT { - /** - * ParseJWT verifies and parses JWT and returns its claims. - */ - (token: string, verificationKey: string): jwt.MapClaims - } - interface newJWT { - /** - * NewJWT generates and returns new HS256 signed JWT. - */ - (payload: jwt.MapClaims, signingKey: string, duration: time.Duration): string - } - // @ts-ignore - import cryptoRand = rand - // @ts-ignore - import mathRand = rand - interface randomString { - /** - * RandomString generates a cryptographically random string with the specified length. - * - * The generated string matches [A-Za-z0-9]+ and it's transparent to URL-encoding. - */ - (length: number): string - } - interface randomStringWithAlphabet { - /** - * RandomStringWithAlphabet generates a cryptographically random string - * with the specified length and characters set. - * - * It panics if for some reason rand.Int returns a non-nil error. - */ - (length: number, alphabet: string): string - } - interface pseudorandomString { - /** - * PseudorandomString generates a pseudorandom string with the specified length. - * - * The generated string matches [A-Za-z0-9]+ and it's transparent to URL-encoding. - * - * For a cryptographically random string (but a little bit slower) use RandomString instead. - */ - (length: number): string - } - interface pseudorandomStringWithAlphabet { - /** - * PseudorandomStringWithAlphabet generates a pseudorandom string - * with the specified length and characters set. - * - * For a cryptographically random (but a little bit slower) use RandomStringWithAlphabet instead. - */ - (length: number, alphabet: string): string - } - interface randomStringByRegex { - /** - * RandomStringByRegex generates a random string matching the regex pattern. - * If optFlags is not set, fallbacks to [syntax.Perl]. - * - * NB! While the source of the randomness comes from [crypto/rand] this method - * is not recommended to be used on its own in critical secure contexts because - * the generated length could vary too much on the used pattern and may not be - * as secure as simply calling [security.RandomString]. - * If you still insist on using it for such purposes, consider at least - * a large enough minimum length for the generated string, e.g. `[a-z0-9]{30}`. - * - * This function is inspired by github.com/pipe01/revregexp, github.com/lucasjones/reggen and other similar packages. 
- */ - (pattern: string, ...optFlags: syntax.Flags[]): string - } -} - namespace filesystem { /** * FileReader defines an interface for a file resource reader. @@ -5172,8 +5468,8 @@ namespace filesystem { */ open(): io.ReadSeekCloser } - type _sGXtNgV = bytes.Reader - interface bytesReadSeekCloser extends _sGXtNgV { + type _ssuzizV = bytes.Reader + interface bytesReadSeekCloser extends _ssuzizV { } interface bytesReadSeekCloser { /** @@ -7138,8 +7434,8 @@ namespace core { /** * AuthOrigin defines a Record proxy for working with the authOrigins collection. */ - type _szBYLhp = Record - interface AuthOrigin extends _szBYLhp { + type _skGzCCS = Record + interface AuthOrigin extends _skGzCCS { } interface newAuthOrigin { /** @@ -7884,8 +8180,8 @@ namespace core { /** * @todo experiment eventually replacing the rules *string with a struct? */ - type _sWkvAgL = BaseModel - interface baseCollection extends _sWkvAgL { + type _soRWXRx = BaseModel + interface baseCollection extends _soRWXRx { listRule?: string viewRule?: string createRule?: string @@ -7912,8 +8208,8 @@ namespace core { /** * Collection defines the table, fields and various options related to a set of records. */ - type _sOHZZKe = baseCollection&collectionAuthOptions&collectionViewOptions - interface Collection extends _sOHZZKe { + type _szIYIwT = baseCollection&collectionAuthOptions&collectionViewOptions + interface Collection extends _szIYIwT { } interface newCollection { /** @@ -8923,8 +9219,8 @@ namespace core { /** * RequestEvent defines the PocketBase router handler event. */ - type _stvQwFb = router.Event - interface RequestEvent extends _stvQwFb { + type _sxBKJkw = router.Event + interface RequestEvent extends _sxBKJkw { app: App auth?: Record } @@ -8984,8 +9280,8 @@ namespace core { */ clone(): (RequestInfo) } - type _sIpKrPW = hook.Event&RequestEvent - interface BatchRequestEvent extends _sIpKrPW { + type _sDvVrNz = hook.Event&RequestEvent + interface BatchRequestEvent extends _sDvVrNz { batch: Array<(InternalRequest | undefined)> } interface InternalRequest { @@ -9022,28 +9318,34 @@ namespace core { interface baseCollectionEventData { tags(): Array } - type _sKMBsNz = hook.Event - interface BootstrapEvent extends _sKMBsNz { + type _sebjDqh = hook.Event + interface BootstrapEvent extends _sebjDqh { app: App } - type _syPQtoZ = hook.Event - interface TerminateEvent extends _syPQtoZ { + type _spHrdyF = hook.Event + interface TerminateEvent extends _spHrdyF { app: App isRestart: boolean } - type _sfiueBt = hook.Event - interface BackupEvent extends _sfiueBt { + type _sfAKDhg = hook.Event + interface BackupEvent extends _sfAKDhg { app: App context: context.Context name: string // the name of the backup to create/restore. exclude: Array // list of dir entries to exclude from the backup create/restore. } - type _sEQOjgi = hook.Event - interface ServeEvent extends _sEQOjgi { + type _sXlglRy = hook.Event + interface ServeEvent extends _sXlglRy { app: App router?: router.Router server?: http.Server certManager?: any + /** + * Listener allow specifying a custom network listener. + * + * Leave it nil to use the default net.Listen("tcp", e.Server.Addr). 
+ */ + listener: net.Listener /** * InstallerFunc is the "installer" function that is called after * successful server tcp bind but only if there is no explicit @@ -9062,31 +9364,31 @@ namespace core { */ installerFunc: (app: App, systemSuperuser: Record, baseURL: string) => void } - type _seHJyPG = hook.Event&RequestEvent - interface SettingsListRequestEvent extends _seHJyPG { + type _svvpSLk = hook.Event&RequestEvent + interface SettingsListRequestEvent extends _svvpSLk { settings?: Settings } - type _siGVRhO = hook.Event&RequestEvent - interface SettingsUpdateRequestEvent extends _siGVRhO { + type _shmzVsJ = hook.Event&RequestEvent + interface SettingsUpdateRequestEvent extends _shmzVsJ { oldSettings?: Settings newSettings?: Settings } - type _srYLoxE = hook.Event - interface SettingsReloadEvent extends _srYLoxE { + type _siyoQOX = hook.Event + interface SettingsReloadEvent extends _siyoQOX { app: App } - type _sILwIuS = hook.Event - interface MailerEvent extends _sILwIuS { + type _sINhZRm = hook.Event + interface MailerEvent extends _sINhZRm { app: App mailer: mailer.Mailer message?: mailer.Message } - type _sXnxBfv = MailerEvent&baseRecordEventData - interface MailerRecordEvent extends _sXnxBfv { + type _szjQUKp = MailerEvent&baseRecordEventData + interface MailerRecordEvent extends _szjQUKp { meta: _TygojaDict } - type _sBozUTh = hook.Event&baseModelEventData - interface ModelEvent extends _sBozUTh { + type _ssQOsrC = hook.Event&baseModelEventData + interface ModelEvent extends _ssQOsrC { app: App context: context.Context /** @@ -9098,12 +9400,12 @@ namespace core { */ type: string } - type _slFqYSy = ModelEvent - interface ModelErrorEvent extends _slFqYSy { + type _sdivWQJ = ModelEvent + interface ModelErrorEvent extends _sdivWQJ { error: Error } - type _sEpesWL = hook.Event&baseRecordEventData - interface RecordEvent extends _sEpesWL { + type _sRNQCIp = hook.Event&baseRecordEventData + interface RecordEvent extends _sRNQCIp { app: App context: context.Context /** @@ -9115,12 +9417,12 @@ namespace core { */ type: string } - type _syOyPxo = RecordEvent - interface RecordErrorEvent extends _syOyPxo { + type _sfQSdjF = RecordEvent + interface RecordErrorEvent extends _sfQSdjF { error: Error } - type _sPJLzXM = hook.Event&baseCollectionEventData - interface CollectionEvent extends _sPJLzXM { + type _sosFrnQ = hook.Event&baseCollectionEventData + interface CollectionEvent extends _sosFrnQ { app: App context: context.Context /** @@ -9132,95 +9434,95 @@ namespace core { */ type: string } - type _sjinXTe = CollectionEvent - interface CollectionErrorEvent extends _sjinXTe { + type _soiVBES = CollectionEvent + interface CollectionErrorEvent extends _soiVBES { error: Error } - type _sljETVy = hook.Event&RequestEvent&baseRecordEventData - interface FileTokenRequestEvent extends _sljETVy { + type _sBVWKgM = hook.Event&RequestEvent&baseRecordEventData + interface FileTokenRequestEvent extends _sBVWKgM { token: string } - type _sIgbmQi = hook.Event&RequestEvent&baseCollectionEventData - interface FileDownloadRequestEvent extends _sIgbmQi { + type _sJQnQve = hook.Event&RequestEvent&baseCollectionEventData + interface FileDownloadRequestEvent extends _sJQnQve { record?: Record fileField?: FileField servedPath: string servedName: string } - type _sMeXLpW = hook.Event&RequestEvent - interface CollectionsListRequestEvent extends _sMeXLpW { + type _sFVlyME = hook.Event&RequestEvent + interface CollectionsListRequestEvent extends _sFVlyME { collections: Array<(Collection | undefined)> result?: 
search.Result } - type _sAAmsFE = hook.Event&RequestEvent - interface CollectionsImportRequestEvent extends _sAAmsFE { + type _sMJUwaD = hook.Event&RequestEvent + interface CollectionsImportRequestEvent extends _sMJUwaD { collectionsData: Array<_TygojaDict> deleteMissing: boolean } - type _sRisMVM = hook.Event&RequestEvent&baseCollectionEventData - interface CollectionRequestEvent extends _sRisMVM { + type _sUwTXie = hook.Event&RequestEvent&baseCollectionEventData + interface CollectionRequestEvent extends _sUwTXie { } - type _sFHfdtB = hook.Event&RequestEvent - interface RealtimeConnectRequestEvent extends _sFHfdtB { + type _suKazhh = hook.Event&RequestEvent + interface RealtimeConnectRequestEvent extends _suKazhh { client: subscriptions.Client /** * note: modifying it after the connect has no effect */ idleTimeout: time.Duration } - type _stuWeTu = hook.Event&RequestEvent - interface RealtimeMessageEvent extends _stuWeTu { + type _szFqbPJ = hook.Event&RequestEvent + interface RealtimeMessageEvent extends _szFqbPJ { client: subscriptions.Client message?: subscriptions.Message } - type _srlvooI = hook.Event&RequestEvent - interface RealtimeSubscribeRequestEvent extends _srlvooI { + type _styhHrE = hook.Event&RequestEvent + interface RealtimeSubscribeRequestEvent extends _styhHrE { client: subscriptions.Client subscriptions: Array } - type _sZdLzoS = hook.Event&RequestEvent&baseCollectionEventData - interface RecordsListRequestEvent extends _sZdLzoS { + type _snkWcHx = hook.Event&RequestEvent&baseCollectionEventData + interface RecordsListRequestEvent extends _snkWcHx { /** * @todo consider removing and maybe add as generic to the search.Result? */ records: Array<(Record | undefined)> result?: search.Result } - type _sEHnNNa = hook.Event&RequestEvent&baseCollectionEventData - interface RecordRequestEvent extends _sEHnNNa { + type _spbiGpn = hook.Event&RequestEvent&baseCollectionEventData + interface RecordRequestEvent extends _spbiGpn { record?: Record } - type _sDEhebY = hook.Event&baseRecordEventData - interface RecordEnrichEvent extends _sDEhebY { + type _sCyZxxg = hook.Event&baseRecordEventData + interface RecordEnrichEvent extends _sCyZxxg { app: App requestInfo?: RequestInfo } - type _siqByrU = hook.Event&RequestEvent&baseCollectionEventData - interface RecordCreateOTPRequestEvent extends _siqByrU { + type _sSbXKdM = hook.Event&RequestEvent&baseCollectionEventData + interface RecordCreateOTPRequestEvent extends _sSbXKdM { record?: Record password: string } - type _sxVhMUx = hook.Event&RequestEvent&baseCollectionEventData - interface RecordAuthWithOTPRequestEvent extends _sxVhMUx { + type _sjolkES = hook.Event&RequestEvent&baseCollectionEventData + interface RecordAuthWithOTPRequestEvent extends _sjolkES { record?: Record otp?: OTP } - type _sUHttoj = hook.Event&RequestEvent&baseCollectionEventData - interface RecordAuthRequestEvent extends _sUHttoj { + type _sZnSSMV = hook.Event&RequestEvent&baseCollectionEventData + interface RecordAuthRequestEvent extends _sZnSSMV { record?: Record token: string meta: any authMethod: string } - type _suFSrRh = hook.Event&RequestEvent&baseCollectionEventData - interface RecordAuthWithPasswordRequestEvent extends _suFSrRh { + type _svgSYQF = hook.Event&RequestEvent&baseCollectionEventData + interface RecordAuthWithPasswordRequestEvent extends _svgSYQF { record?: Record identity: string identityField: string password: string } - type _szJuivH = hook.Event&RequestEvent&baseCollectionEventData - interface RecordAuthWithOAuth2RequestEvent extends _szJuivH 
{ + type _sPVfuRF = hook.Event&RequestEvent&baseCollectionEventData + interface RecordAuthWithOAuth2RequestEvent extends _sPVfuRF { providerName: string providerClient: auth.Provider record?: Record @@ -9228,41 +9530,41 @@ namespace core { createData: _TygojaDict isNewRecord: boolean } - type _sWJzjaz = hook.Event&RequestEvent&baseCollectionEventData - interface RecordAuthRefreshRequestEvent extends _sWJzjaz { + type _sLHBduJ = hook.Event&RequestEvent&baseCollectionEventData + interface RecordAuthRefreshRequestEvent extends _sLHBduJ { record?: Record } - type _sAVRUyz = hook.Event&RequestEvent&baseCollectionEventData - interface RecordRequestPasswordResetRequestEvent extends _sAVRUyz { + type _sWQGSWt = hook.Event&RequestEvent&baseCollectionEventData + interface RecordRequestPasswordResetRequestEvent extends _sWQGSWt { record?: Record } - type _siJpYAW = hook.Event&RequestEvent&baseCollectionEventData - interface RecordConfirmPasswordResetRequestEvent extends _siJpYAW { + type _sjIBOBr = hook.Event&RequestEvent&baseCollectionEventData + interface RecordConfirmPasswordResetRequestEvent extends _sjIBOBr { record?: Record } - type _slAGCiC = hook.Event&RequestEvent&baseCollectionEventData - interface RecordRequestVerificationRequestEvent extends _slAGCiC { + type _sydgvRd = hook.Event&RequestEvent&baseCollectionEventData + interface RecordRequestVerificationRequestEvent extends _sydgvRd { record?: Record } - type _sHMgtVP = hook.Event&RequestEvent&baseCollectionEventData - interface RecordConfirmVerificationRequestEvent extends _sHMgtVP { + type _sxJnCzr = hook.Event&RequestEvent&baseCollectionEventData + interface RecordConfirmVerificationRequestEvent extends _sxJnCzr { record?: Record } - type _sCqCegw = hook.Event&RequestEvent&baseCollectionEventData - interface RecordRequestEmailChangeRequestEvent extends _sCqCegw { + type _sqwbhFM = hook.Event&RequestEvent&baseCollectionEventData + interface RecordRequestEmailChangeRequestEvent extends _sqwbhFM { record?: Record newEmail: string } - type _shuDwcr = hook.Event&RequestEvent&baseCollectionEventData - interface RecordConfirmEmailChangeRequestEvent extends _shuDwcr { + type _sSqaOTT = hook.Event&RequestEvent&baseCollectionEventData + interface RecordConfirmEmailChangeRequestEvent extends _sSqaOTT { record?: Record newEmail: string } /** * ExternalAuth defines a Record proxy for working with the externalAuths collection. */ - type _sexJOWK = Record - interface ExternalAuth extends _sexJOWK { + type _skqBDtt = Record + interface ExternalAuth extends _skqBDtt { } interface newExternalAuth { /** @@ -11724,8 +12026,8 @@ namespace core { interface onlyFieldType { type: string } - type _sSoSBlS = Field - interface fieldWithType extends _sSoSBlS { + type _sJhtthb = Field + interface fieldWithType extends _sJhtthb { type: string } interface fieldWithType { @@ -11757,8 +12059,8 @@ namespace core { */ scan(value: any): void } - type _sgYUQgr = BaseModel - interface Log extends _sgYUQgr { + type _sjbEVsC = BaseModel + interface Log extends _sjbEVsC { created: types.DateTime data: types.JSONMap message: string @@ -11804,8 +12106,8 @@ namespace core { /** * MFA defines a Record proxy for working with the mfas collection. */ - type _smLjswC = Record - interface MFA extends _smLjswC { + type _sVwDiCI = Record + interface MFA extends _sVwDiCI { } interface newMFA { /** @@ -12027,8 +12329,8 @@ namespace core { /** * OTP defines a Record proxy for working with the otps collection. 
*/ - type _srXvShH = Record - interface OTP extends _srXvShH { + type _sDKuVtM = Record + interface OTP extends _sDKuVtM { } interface newOTP { /** @@ -12264,8 +12566,8 @@ namespace core { } interface runner { } - type _smSJmDP = BaseModel - interface Record extends _smSJmDP { + type _sNOqeFa = BaseModel + interface Record extends _sNOqeFa { } interface newRecord { /** @@ -12740,8 +13042,8 @@ namespace core { * BaseRecordProxy implements the [RecordProxy] interface and it is intended * to be used as embed to custom user provided Record proxy structs. */ - type _sAFoKTi = Record - interface BaseRecordProxy extends _sAFoKTi { + type _sDkTXmu = Record + interface BaseRecordProxy extends _sDkTXmu { } interface BaseRecordProxy { /** @@ -12990,8 +13292,8 @@ namespace core { /** * Settings defines the PocketBase app settings. */ - type _sIZwcSk = settings - interface Settings extends _sIZwcSk { + type _smVATJM = settings + interface Settings extends _smVATJM { } interface Settings { /** @@ -13265,8 +13567,8 @@ namespace core { * Audience specifies the auth group the rule should apply for: * ``` * - "" - both guests and authenticated users (default) - * - "guest" - only for guests - * - "auth" - only for authenticated users + * - "@guest" - only for guests + * - "@auth" - only for authenticated users * ``` */ audience: string @@ -13292,8 +13594,14 @@ namespace core { */ durationTime(): time.Duration } - type _sGILwVN = BaseModel - interface Param extends _sGILwVN { + interface RateLimitRule { + /** + * String returns a string representation of the rule. + */ + string(): string + } + type _sGnnYbN = BaseModel + interface Param extends _sGnnYbN { created: types.DateTime updated: types.DateTime value: types.JSONRaw @@ -13807,8 +14115,8 @@ namespace apis { */ (limitBytes: number): (hook.Handler) } - type _sIjOXdA = io.ReadCloser - interface limitedReader extends _sIjOXdA { + type _sSxgEvR = io.ReadCloser + interface limitedReader extends _sSxgEvR { } interface limitedReader { read(b: string|Array): number @@ -13959,8 +14267,8 @@ namespace apis { */ (config: GzipConfig): (hook.Handler) } - type _sprtZbI = http.ResponseWriter&io.Writer - interface gzipResponseWriter extends _sprtZbI { + type _sxWSoOP = http.ResponseWriter&io.Writer + interface gzipResponseWriter extends _sxWSoOP { } interface gzipResponseWriter { writeHeader(code: number): void @@ -13980,11 +14288,11 @@ namespace apis { interface gzipResponseWriter { unwrap(): http.ResponseWriter } - type _siaohKb = sync.RWMutex - interface rateLimiter extends _siaohKb { + type _sOCYJpf = sync.RWMutex + interface rateLimiter extends _sOCYJpf { } - type _srtOPhz = sync.Mutex - interface fixedWindow extends _srtOPhz { + type _sCaARGs = sync.Mutex + interface fixedWindow extends _sCaARGs { } interface realtimeSubscribeForm { clientId: string @@ -14108,6 +14416,10 @@ namespace apis { state: string code: string error: string + /** + * returned by Apple only + */ + appleUser: string } interface authWithOTPForm { otpId: string @@ -14225,8 +14537,8 @@ namespace pocketbase { * It implements [CoreApp] via embedding and all of the app interface methods * could be accessed directly through the instance (eg. PocketBase.DataDir()). 
*/ - type _sGDsUzu = CoreApp - interface PocketBase extends _sGDsUzu { + type _sMiDWff = CoreApp + interface PocketBase extends _sMiDWff { /** * RootCmd is the main console command */ @@ -14311,111 +14623,6 @@ namespace pocketbase { } } -/** - * Package template is a thin wrapper around the standard html/template - * and text/template packages that implements a convenient registry to - * load and cache templates on the fly concurrently. - * - * It was created to assist the JSVM plugin HTML rendering, but could be used in other Go code. - * - * Example: - * - * ``` - * registry := template.NewRegistry() - * - * html1, err := registry.LoadFiles( - * // the files set wil be parsed only once and then cached - * "layout.html", - * "content.html", - * ).Render(map[string]any{"name": "John"}) - * - * html2, err := registry.LoadFiles( - * // reuse the already parsed and cached files set - * "layout.html", - * "content.html", - * ).Render(map[string]any{"name": "Jane"}) - * ``` - */ -namespace template { - interface newRegistry { - /** - * NewRegistry creates and initializes a new templates registry with - * some defaults (eg. global "raw" template function for unescaped HTML). - * - * Use the Registry.Load* methods to load templates into the registry. - */ - (): (Registry) - } - /** - * Registry defines a templates registry that is safe to be used by multiple goroutines. - * - * Use the Registry.Load* methods to load templates into the registry. - */ - interface Registry { - } - interface Registry { - /** - * AddFuncs registers new global template functions. - * - * The key of each map entry is the function name that will be used in the templates. - * If a function with the map entry name already exists it will be replaced with the new one. - * - * The value of each map entry is a function that must have either a - * single return value, or two return values of which the second has type error. - * - * Example: - * - * ``` - * r.AddFuncs(map[string]any{ - * "toUpper": func(str string) string { - * return strings.ToUppser(str) - * }, - * ... - * }) - * ``` - */ - addFuncs(funcs: _TygojaDict): (Registry) - } - interface Registry { - /** - * LoadFiles caches (if not already) the specified filenames set as a - * single template and returns a ready to use Renderer instance. - * - * There must be at least 1 filename specified. - */ - loadFiles(...filenames: string[]): (Renderer) - } - interface Registry { - /** - * LoadString caches (if not already) the specified inline string as a - * single template and returns a ready to use Renderer instance. - */ - loadString(text: string): (Renderer) - } - interface Registry { - /** - * LoadFS caches (if not already) the specified fs and globPatterns - * pair as single template and returns a ready to use Renderer instance. - * - * There must be at least 1 file matching the provided globPattern(s) - * (note that most file names serves as glob patterns matching themselves). - */ - loadFS(fsys: fs.FS, ...globPatterns: string[]): (Renderer) - } - /** - * Renderer defines a single parsed template. - */ - interface Renderer { - } - interface Renderer { - /** - * Render executes the template with the specified data as the dot object - * and returns the result as plain string. - */ - render(data: any): string - } -} - /** * Package sync provides basic synchronization primitives such as mutual * exclusion locks. 
Other than the [Once] and [WaitGroup] types, most are intended @@ -14425,6 +14632,8 @@ namespace template { * Values containing the types defined in this package should not be copied. */ namespace sync { + // @ts-ignore + import isync = sync /** * A Mutex is a mutual exclusion lock. * The zero value for a Mutex is an unlocked mutex. @@ -14483,6 +14692,8 @@ namespace sync { * the writer has acquired (and released) the lock, to ensure that * the lock eventually becomes available to the writer. * Note that this prohibits recursive read-locking. + * A [RWMutex.RLock] cannot be upgraded into a [RWMutex.Lock], + * nor can a [RWMutex.Lock] be downgraded into a [RWMutex.RLock]. * * In the terminology of [the Go memory model], * the n'th call to [RWMutex.Unlock] “synchronizes before” the m'th call to Lock @@ -14563,184 +14774,6 @@ namespace sync { } } -/** - * Package io provides basic interfaces to I/O primitives. - * Its primary job is to wrap existing implementations of such primitives, - * such as those in package os, into shared public interfaces that - * abstract the functionality, plus some other related primitives. - * - * Because these interfaces and primitives wrap lower-level operations with - * various implementations, unless otherwise informed clients should not - * assume they are safe for parallel execution. - */ -namespace io { - /** - * Reader is the interface that wraps the basic Read method. - * - * Read reads up to len(p) bytes into p. It returns the number of bytes - * read (0 <= n <= len(p)) and any error encountered. Even if Read - * returns n < len(p), it may use all of p as scratch space during the call. - * If some data is available but not len(p) bytes, Read conventionally - * returns what is available instead of waiting for more. - * - * When Read encounters an error or end-of-file condition after - * successfully reading n > 0 bytes, it returns the number of - * bytes read. It may return the (non-nil) error from the same call - * or return the error (and n == 0) from a subsequent call. - * An instance of this general case is that a Reader returning - * a non-zero number of bytes at the end of the input stream may - * return either err == EOF or err == nil. The next Read should - * return 0, EOF. - * - * Callers should always process the n > 0 bytes returned before - * considering the error err. Doing so correctly handles I/O errors - * that happen after reading some bytes and also both of the - * allowed EOF behaviors. - * - * If len(p) == 0, Read should always return n == 0. It may return a - * non-nil error if some error condition is known, such as EOF. - * - * Implementations of Read are discouraged from returning a - * zero byte count with a nil error, except when len(p) == 0. - * Callers should treat a return of 0 and nil as indicating that - * nothing happened; in particular it does not indicate EOF. - * - * Implementations must not retain p. - */ - interface Reader { - [key:string]: any; - read(p: string|Array): number - } - /** - * Writer is the interface that wraps the basic Write method. - * - * Write writes len(p) bytes from p to the underlying data stream. - * It returns the number of bytes written from p (0 <= n <= len(p)) - * and any error encountered that caused the write to stop early. - * Write must return a non-nil error if it returns n < len(p). - * Write must not modify the slice data, even temporarily. - * - * Implementations must not retain p. 
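The RWMutex note added above (an RLock cannot be upgraded into a Lock, nor a Lock downgraded into an RLock) is easy to get wrong in practice. A minimal, self-contained Go sketch of the documented read/write locking pattern, keeping read and write sections strictly separate:

```
package main

import (
	"fmt"
	"sync"
)

type counter struct {
	mu sync.RWMutex
	n  int
}

// get takes only the read lock; many readers may hold it concurrently.
func (c *counter) get() int {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.n
}

// inc takes the write lock on its own; an RLock held elsewhere
// cannot be upgraded into this Lock.
func (c *counter) inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
}

func main() {
	var c counter
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.inc()
		}()
	}
	wg.Wait()
	fmt.Println(c.get()) // 10
}
```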
- */ - interface Writer { - [key:string]: any; - write(p: string|Array): number - } - /** - * ReadCloser is the interface that groups the basic Read and Close methods. - */ - interface ReadCloser { - [key:string]: any; - } - /** - * ReadSeekCloser is the interface that groups the basic Read, Seek and Close - * methods. - */ - interface ReadSeekCloser { - [key:string]: any; - } -} - -/** - * Package bytes implements functions for the manipulation of byte slices. - * It is analogous to the facilities of the [strings] package. - */ -namespace bytes { - /** - * A Reader implements the [io.Reader], [io.ReaderAt], [io.WriterTo], [io.Seeker], - * [io.ByteScanner], and [io.RuneScanner] interfaces by reading from - * a byte slice. - * Unlike a [Buffer], a Reader is read-only and supports seeking. - * The zero value for Reader operates like a Reader of an empty slice. - */ - interface Reader { - } - interface Reader { - /** - * Len returns the number of bytes of the unread portion of the - * slice. - */ - len(): number - } - interface Reader { - /** - * Size returns the original length of the underlying byte slice. - * Size is the number of bytes available for reading via [Reader.ReadAt]. - * The result is unaffected by any method calls except [Reader.Reset]. - */ - size(): number - } - interface Reader { - /** - * Read implements the [io.Reader] interface. - */ - read(b: string|Array): number - } - interface Reader { - /** - * ReadAt implements the [io.ReaderAt] interface. - */ - readAt(b: string|Array, off: number): number - } - interface Reader { - /** - * ReadByte implements the [io.ByteReader] interface. - */ - readByte(): number - } - interface Reader { - /** - * UnreadByte complements [Reader.ReadByte] in implementing the [io.ByteScanner] interface. - */ - unreadByte(): void - } - interface Reader { - /** - * ReadRune implements the [io.RuneReader] interface. - */ - readRune(): [number, number] - } - interface Reader { - /** - * UnreadRune complements [Reader.ReadRune] in implementing the [io.RuneScanner] interface. - */ - unreadRune(): void - } - interface Reader { - /** - * Seek implements the [io.Seeker] interface. - */ - seek(offset: number, whence: number): number - } - interface Reader { - /** - * WriteTo implements the [io.WriterTo] interface. - */ - writeTo(w: io.Writer): number - } - interface Reader { - /** - * Reset resets the [Reader] to be reading from b. - */ - reset(b: string|Array): void - } -} - -/** - * Package bufio implements buffered I/O. It wraps an io.Reader or io.Writer - * object, creating another object (Reader or Writer) that also implements - * the interface but provides buffering and some help for textual I/O. - */ -namespace bufio { - /** - * ReadWriter stores pointers to a [Reader] and a [Writer]. - * It implements [io.ReadWriter]. - */ - type _szuRqeY = Reader&Writer - interface ReadWriter extends _szuRqeY { - } -} - /** * Package syscall contains an interface to the low-level operating system * primitives. The details vary depending on the underlying system, and @@ -14765,6 +14798,8 @@ namespace bufio { * See https://golang.org/s/go1.4-syscall for more information. */ namespace syscall { + // @ts-ignore + import errpkg = errors interface SysProcAttr { chroot: string // Chroot. credential?: Credential // Credential. @@ -14948,8 +14983,8 @@ namespace syscall { * On some systems the monotonic clock will stop if the computer goes to sleep. * On such a system, t.Sub(u) may not accurately reflect the actual * time that passed between t and u. 
The same applies to other functions and - * methods that subtract times, such as [Since], [Until], [Before], [After], - * [Add], [Sub], [Equal] and [Compare]. In some cases, you may need to strip + * methods that subtract times, such as [Since], [Until], [Time.Before], [Time.After], + * [Time.Add], [Time.Equal] and [Time.Compare]. In some cases, you may need to strip * the monotonic clock to get accurate results. * * Because the monotonic clock reading has no meaning outside @@ -15051,9 +15086,9 @@ namespace time { * these methods does not change the actual instant it represents, only the time * zone in which to interpret it. * - * Representations of a Time value saved by the [Time.GobEncode], [Time.MarshalBinary], - * [Time.MarshalJSON], and [Time.MarshalText] methods store the [Time.Location]'s offset, but not - * the location name. They therefore lose information about Daylight Saving Time. + * Representations of a Time value saved by the [Time.GobEncode], [Time.MarshalBinary], [Time.AppendBinary], + * [Time.MarshalJSON], [Time.MarshalText] and [Time.AppendText] methods store the [Time.Location]'s offset, + * but not the location name. They therefore lose information about Daylight Saving Time. * * In addition to the required “wall clock” reading, a Time may contain an optional * reading of the current process's monotonic clock, to provide additional precision @@ -15072,6 +15107,13 @@ namespace time { */ interface Time { } + interface Time { + /** + * IsZero reports whether t represents the zero time instant, + * January 1, year 1, 00:00:00 UTC. + */ + isZero(): boolean + } interface Time { /** * After reports whether the time instant t is after u. @@ -15101,13 +15143,6 @@ namespace time { */ equal(u: Time): boolean } - interface Time { - /** - * IsZero reports whether t represents the zero time instant, - * January 1, year 1, 00:00:00 UTC. - */ - isZero(): boolean - } interface Time { /** * Date returns the year, month, and day in which t occurs. @@ -15257,7 +15292,8 @@ namespace time { interface Duration { /** * Abs returns the absolute value of d. - * As a special case, [math.MinInt64] is converted to [math.MaxInt64]. + * As a special case, Duration([math.MinInt64]) is converted to Duration([math.MaxInt64]), + * reducing its magnitude by 1 nanosecond. */ abs(): Duration } @@ -15387,13 +15423,19 @@ namespace time { } interface Time { /** - * MarshalBinary implements the encoding.BinaryMarshaler interface. + * AppendBinary implements the [encoding.BinaryAppender] interface. + */ + appendBinary(b: string|Array): string|Array + } + interface Time { + /** + * MarshalBinary implements the [encoding.BinaryMarshaler] interface. */ marshalBinary(): string|Array } interface Time { /** - * UnmarshalBinary implements the encoding.BinaryUnmarshaler interface. + * UnmarshalBinary implements the [encoding.BinaryUnmarshaler] interface. */ unmarshalBinary(data: string|Array): void } @@ -15411,7 +15453,7 @@ namespace time { } interface Time { /** - * MarshalJSON implements the [json.Marshaler] interface. + * MarshalJSON implements the [encoding/json.Marshaler] interface. * The time is a quoted string in the RFC 3339 format with sub-second precision. * If the timestamp cannot be represented as valid RFC 3339 * (e.g., the year is out of range), then an error is reported. @@ -15420,17 +15462,26 @@ namespace time { } interface Time { /** - * UnmarshalJSON implements the [json.Unmarshaler] interface. + * UnmarshalJSON implements the [encoding/json.Unmarshaler] interface. 
* The time must be a quoted string in the RFC 3339 format. */ unmarshalJSON(data: string|Array): void } interface Time { /** - * MarshalText implements the [encoding.TextMarshaler] interface. + * AppendText implements the [encoding.TextAppender] interface. * The time is formatted in RFC 3339 format with sub-second precision. * If the timestamp cannot be represented as valid RFC 3339 - * (e.g., the year is out of range), then an error is reported. + * (e.g., the year is out of range), then an error is returned. + */ + appendText(b: string|Array): string|Array + } + interface Time { + /** + * MarshalText implements the [encoding.TextMarshaler] interface. The output + * matches that of calling the [Time.AppendText] method. + * + * See [Time.AppendText] for more information. */ marshalText(): string|Array } @@ -15483,30 +15534,33 @@ namespace time { * calls to servers should accept a Context. The chain of function * calls between them must propagate the Context, optionally replacing * it with a derived Context created using [WithCancel], [WithDeadline], - * [WithTimeout], or [WithValue]. When a Context is canceled, all - * Contexts derived from it are also canceled. + * [WithTimeout], or [WithValue]. + * + * A Context may be canceled to indicate that work done on its behalf should stop. + * A Context with a deadline is canceled after the deadline passes. + * When a Context is canceled, all Contexts derived from it are also canceled. * * The [WithCancel], [WithDeadline], and [WithTimeout] functions take a * Context (the parent) and return a derived Context (the child) and a - * [CancelFunc]. Calling the CancelFunc cancels the child and its + * [CancelFunc]. Calling the CancelFunc directly cancels the child and its * children, removes the parent's reference to the child, and stops * any associated timers. Failing to call the CancelFunc leaks the - * child and its children until the parent is canceled or the timer - * fires. The go vet tool checks that CancelFuncs are used on all - * control-flow paths. + * child and its children until the parent is canceled. The go vet tool + * checks that CancelFuncs are used on all control-flow paths. * - * The [WithCancelCause] function returns a [CancelCauseFunc], which - * takes an error and records it as the cancellation cause. Calling - * [Cause] on the canceled context or any of its children retrieves - * the cause. If no cause is specified, Cause(ctx) returns the same - * value as ctx.Err(). + * The [WithCancelCause], [WithDeadlineCause], and [WithTimeoutCause] functions + * return a [CancelCauseFunc], which takes an error and records it as + * the cancellation cause. Calling [Cause] on the canceled context + * or any of its children retrieves the cause. If no cause is specified, + * Cause(ctx) returns the same value as ctx.Err(). * * Programs that use Contexts should follow these rules to keep interfaces * consistent across packages and enable static analysis tools to check context * propagation: * * Do not store Contexts inside a struct type; instead, pass a Context - * explicitly to each function that needs it. The Context should be the first + * explicitly to each function that needs it. This is discussed further in + * https://go.dev/blog/context-and-structs. The Context should be the first * parameter, typically named ctx: * * ``` @@ -15524,7 +15578,7 @@ namespace time { * The same Context may be passed to functions running in different goroutines; * Contexts are safe for simultaneous use by multiple goroutines. 
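The updated context rules above (pass ctx as the first parameter, always call the returned CancelFunc, inspect the cancellation cause) condense into a short Go sketch; the function name work is just an illustration:

```
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

func work(ctx context.Context) error {
	select {
	case <-time.After(200 * time.Millisecond):
		return nil
	case <-ctx.Done():
		// Err reports DeadlineExceeded or Canceled; Cause may carry more detail.
		return context.Cause(ctx)
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel() // always release the context's resources

	if err := work(ctx); err != nil {
		fmt.Println("canceled:", err, errors.Is(err, context.DeadlineExceeded)) // true
	}
}
```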
* - * See https://blog.golang.org/context for example code for a server that uses + * See https://go.dev/blog/context for example code for a server that uses * Contexts. */ namespace context { @@ -15579,8 +15633,8 @@ namespace context { /** * If Done is not yet closed, Err returns nil. * If Done is closed, Err returns a non-nil error explaining why: - * Canceled if the context was canceled - * or DeadlineExceeded if the context's deadline passed. + * DeadlineExceeded if the context's deadline passed, + * or Canceled if the context was canceled for some other reason. * After Err returns a non-nil error, successive calls to Err return the same error. */ err(): void @@ -15638,38 +15692,202 @@ namespace context { } /** - * Package fs defines basic interfaces to a file system. - * A file system can be provided by the host operating system - * but also by other packages. + * Package io provides basic interfaces to I/O primitives. + * Its primary job is to wrap existing implementations of such primitives, + * such as those in package os, into shared public interfaces that + * abstract the functionality, plus some other related primitives. * - * See the [testing/fstest] package for support with testing - * implementations of file systems. + * Because these interfaces and primitives wrap lower-level operations with + * various implementations, unless otherwise informed clients should not + * assume they are safe for parallel execution. */ -namespace fs { +namespace io { /** - * An FS provides access to a hierarchical file system. + * Reader is the interface that wraps the basic Read method. * - * The FS interface is the minimum implementation required of the file system. - * A file system may implement additional interfaces, - * such as [ReadFileFS], to provide additional or optimized functionality. + * Read reads up to len(p) bytes into p. It returns the number of bytes + * read (0 <= n <= len(p)) and any error encountered. Even if Read + * returns n < len(p), it may use all of p as scratch space during the call. + * If some data is available but not len(p) bytes, Read conventionally + * returns what is available instead of waiting for more. * - * [testing/fstest.TestFS] may be used to test implementations of an FS for - * correctness. + * When Read encounters an error or end-of-file condition after + * successfully reading n > 0 bytes, it returns the number of + * bytes read. It may return the (non-nil) error from the same call + * or return the error (and n == 0) from a subsequent call. + * An instance of this general case is that a Reader returning + * a non-zero number of bytes at the end of the input stream may + * return either err == EOF or err == nil. The next Read should + * return 0, EOF. + * + * Callers should always process the n > 0 bytes returned before + * considering the error err. Doing so correctly handles I/O errors + * that happen after reading some bytes and also both of the + * allowed EOF behaviors. + * + * If len(p) == 0, Read should always return n == 0. It may return a + * non-nil error if some error condition is known, such as EOF. + * + * Implementations of Read are discouraged from returning a + * zero byte count with a nil error, except when len(p) == 0. + * Callers should treat a return of 0 and nil as indicating that + * nothing happened; in particular it does not indicate EOF. + * + * Implementations must not retain p. */ - interface FS { + interface Reader { [key:string]: any; - /** - * Open opens the named file. 
- * - * When Open returns an error, it should be of type *PathError - * with the Op field set to "open", the Path field set to name, - * and the Err field describing the problem. - * - * Open should reject attempts to open names that do not satisfy - * ValidPath(name), returning a *PathError with Err set to - * ErrInvalid or ErrNotExist. - */ - open(name: string): File + read(p: string|Array): number + } + /** + * Writer is the interface that wraps the basic Write method. + * + * Write writes len(p) bytes from p to the underlying data stream. + * It returns the number of bytes written from p (0 <= n <= len(p)) + * and any error encountered that caused the write to stop early. + * Write must return a non-nil error if it returns n < len(p). + * Write must not modify the slice data, even temporarily. + * + * Implementations must not retain p. + */ + interface Writer { + [key:string]: any; + write(p: string|Array): number + } + /** + * ReadCloser is the interface that groups the basic Read and Close methods. + */ + interface ReadCloser { + [key:string]: any; + } + /** + * ReadSeekCloser is the interface that groups the basic Read, Seek and Close + * methods. + */ + interface ReadSeekCloser { + [key:string]: any; + } +} + +/** + * Package bytes implements functions for the manipulation of byte slices. + * It is analogous to the facilities of the [strings] package. + */ +namespace bytes { + /** + * A Reader implements the [io.Reader], [io.ReaderAt], [io.WriterTo], [io.Seeker], + * [io.ByteScanner], and [io.RuneScanner] interfaces by reading from + * a byte slice. + * Unlike a [Buffer], a Reader is read-only and supports seeking. + * The zero value for Reader operates like a Reader of an empty slice. + */ + interface Reader { + } + interface Reader { + /** + * Len returns the number of bytes of the unread portion of the + * slice. + */ + len(): number + } + interface Reader { + /** + * Size returns the original length of the underlying byte slice. + * Size is the number of bytes available for reading via [Reader.ReadAt]. + * The result is unaffected by any method calls except [Reader.Reset]. + */ + size(): number + } + interface Reader { + /** + * Read implements the [io.Reader] interface. + */ + read(b: string|Array): number + } + interface Reader { + /** + * ReadAt implements the [io.ReaderAt] interface. + */ + readAt(b: string|Array, off: number): number + } + interface Reader { + /** + * ReadByte implements the [io.ByteReader] interface. + */ + readByte(): number + } + interface Reader { + /** + * UnreadByte complements [Reader.ReadByte] in implementing the [io.ByteScanner] interface. + */ + unreadByte(): void + } + interface Reader { + /** + * ReadRune implements the [io.RuneReader] interface. + */ + readRune(): [number, number] + } + interface Reader { + /** + * UnreadRune complements [Reader.ReadRune] in implementing the [io.RuneScanner] interface. + */ + unreadRune(): void + } + interface Reader { + /** + * Seek implements the [io.Seeker] interface. + */ + seek(offset: number, whence: number): number + } + interface Reader { + /** + * WriteTo implements the [io.WriterTo] interface. + */ + writeTo(w: io.Writer): number + } + interface Reader { + /** + * Reset resets the [Reader] to be reading from b. + */ + reset(b: string|Array): void + } +} + +/** + * Package fs defines basic interfaces to a file system. + * A file system can be provided by the host operating system + * but also by other packages. 
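The Reader contract spelled out above (process any n > 0 bytes before looking at err, and treat io.EOF as the normal end of input) is worth showing as a loop. A small Go sketch using bytes.Reader, which implements the interfaces listed in the bytes docs:

```
package main

import (
	"bytes"
	"fmt"
	"io"
)

func main() {
	r := bytes.NewReader([]byte("hello, reader"))

	buf := make([]byte, 4)
	for {
		n, err := r.Read(buf)
		if n > 0 {
			// Always consume the n returned bytes before inspecting err.
			fmt.Printf("read %q\n", buf[:n])
		}
		if err == io.EOF {
			break // normal end of input
		}
		if err != nil {
			fmt.Println("read error:", err)
			return
		}
	}
}
```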
+ * + * See the [testing/fstest] package for support with testing + * implementations of file systems. + */ +namespace fs { + /** + * An FS provides access to a hierarchical file system. + * + * The FS interface is the minimum implementation required of the file system. + * A file system may implement additional interfaces, + * such as [ReadFileFS], to provide additional or optimized functionality. + * + * [testing/fstest.TestFS] may be used to test implementations of an FS for + * correctness. + */ + interface FS { + [key:string]: any; + /** + * Open opens the named file. + * [File.Close] must be called to release any associated resources. + * + * When Open returns an error, it should be of type *PathError + * with the Op field set to "open", the Path field set to name, + * and the Err field describing the problem. + * + * Open should reject attempts to open names that do not satisfy + * ValidPath(name), returning a *PathError with Err set to + * ErrInvalid or ErrNotExist. + */ + open(name: string): File } /** * A File provides access to a single file. @@ -15838,3253 +16056,2985 @@ namespace fs { } /** - * Package sql provides a generic interface around SQL (or SQL-like) - * databases. - * - * The sql package must be used in conjunction with a database driver. - * See https://golang.org/s/sqldrivers for a list of drivers. - * - * Drivers that do not support context cancellation will not return until - * after the query is completed. - * - * For usage examples, see the wiki page at - * https://golang.org/s/sqlwiki. + * Package bufio implements buffered I/O. It wraps an io.Reader or io.Writer + * object, creating another object (Reader or Writer) that also implements + * the interface but provides buffering and some help for textual I/O. */ -namespace sql { - /** - * TxOptions holds the transaction options to be used in [DB.BeginTx]. - */ - interface TxOptions { - /** - * Isolation is the transaction isolation level. - * If zero, the driver or database's default level is used. - */ - isolation: IsolationLevel - readOnly: boolean - } - /** - * NullString represents a string that may be null. - * NullString implements the [Scanner] interface so - * it can be used as a scan destination: - * - * ``` - * var s NullString - * err := db.QueryRow("SELECT name FROM foo WHERE id=?", id).Scan(&s) - * ... - * if s.Valid { - * // use s.String - * } else { - * // NULL value - * } - * ``` - */ - interface NullString { - string: string - valid: boolean // Valid is true if String is not NULL - } - interface NullString { - /** - * Scan implements the [Scanner] interface. - */ - scan(value: any): void - } - interface NullString { - /** - * Value implements the [driver.Valuer] interface. - */ - value(): any - } +namespace bufio { /** - * DB is a database handle representing a pool of zero or more - * underlying connections. It's safe for concurrent use by multiple - * goroutines. - * - * The sql package creates and frees connections automatically; it - * also maintains a free pool of idle connections. If the database has - * a concept of per-connection state, such state can be reliably observed - * within a transaction ([Tx]) or connection ([Conn]). Once [DB.Begin] is called, the - * returned [Tx] is bound to a single connection. Once [Tx.Commit] or - * [Tx.Rollback] is called on the transaction, that transaction's - * connection is returned to [DB]'s idle connection pool. The pool size - * can be controlled with [DB.SetMaxIdleConns]. + * ReadWriter stores pointers to a [Reader] and a [Writer]. 
+ * It implements [io.ReadWriter]. */ - interface DB { + type _sxPXlDK = Reader&Writer + interface ReadWriter extends _sxPXlDK { } - interface DB { - /** - * PingContext verifies a connection to the database is still alive, - * establishing a connection if necessary. - */ - pingContext(ctx: context.Context): void - } - interface DB { - /** - * Ping verifies a connection to the database is still alive, - * establishing a connection if necessary. - * - * Ping uses [context.Background] internally; to specify the context, use - * [DB.PingContext]. - */ - ping(): void - } - interface DB { - /** - * Close closes the database and prevents new queries from starting. - * Close then waits for all queries that have started processing on the server - * to finish. - * - * It is rare to Close a [DB], as the [DB] handle is meant to be - * long-lived and shared between many goroutines. - */ - close(): void - } - interface DB { - /** - * SetMaxIdleConns sets the maximum number of connections in the idle - * connection pool. - * - * If MaxOpenConns is greater than 0 but less than the new MaxIdleConns, - * then the new MaxIdleConns will be reduced to match the MaxOpenConns limit. - * - * If n <= 0, no idle connections are retained. - * - * The default max idle connections is currently 2. This may change in - * a future release. +} + +/** + * Package net provides a portable interface for network I/O, including + * TCP/IP, UDP, domain name resolution, and Unix domain sockets. + * + * Although the package provides access to low-level networking + * primitives, most clients will need only the basic interface provided + * by the [Dial], [Listen], and Accept functions and the associated + * [Conn] and [Listener] interfaces. The crypto/tls package uses + * the same interfaces and similar Dial and Listen functions. + * + * The Dial function connects to a server: + * + * ``` + * conn, err := net.Dial("tcp", "golang.org:80") + * if err != nil { + * // handle error + * } + * fmt.Fprintf(conn, "GET / HTTP/1.0\r\n\r\n") + * status, err := bufio.NewReader(conn).ReadString('\n') + * // ... + * ``` + * + * The Listen function creates servers: + * + * ``` + * ln, err := net.Listen("tcp", ":8080") + * if err != nil { + * // handle error + * } + * for { + * conn, err := ln.Accept() + * if err != nil { + * // handle error + * } + * go handleConnection(conn) + * } + * ``` + * + * # Name Resolution + * + * The method for resolving domain names, whether indirectly with functions like Dial + * or directly with functions like [LookupHost] and [LookupAddr], varies by operating system. + * + * On Unix systems, the resolver has two options for resolving names. + * It can use a pure Go resolver that sends DNS requests directly to the servers + * listed in /etc/resolv.conf, or it can use a cgo-based resolver that calls C + * library routines such as getaddrinfo and getnameinfo. + * + * On Unix the pure Go resolver is preferred over the cgo resolver, because a blocked DNS + * request consumes only a goroutine, while a blocked C call consumes an operating system thread. 
+ * When cgo is available, the cgo-based resolver is used instead under a variety of + * conditions: on systems that do not let programs make direct DNS requests (OS X), + * when the LOCALDOMAIN environment variable is present (even if empty), + * when the RES_OPTIONS or HOSTALIASES environment variable is non-empty, + * when the ASR_CONFIG environment variable is non-empty (OpenBSD only), + * when /etc/resolv.conf or /etc/nsswitch.conf specify the use of features that the + * Go resolver does not implement. + * + * On all systems (except Plan 9), when the cgo resolver is being used + * this package applies a concurrent cgo lookup limit to prevent the system + * from running out of system threads. Currently, it is limited to 500 concurrent lookups. + * + * The resolver decision can be overridden by setting the netdns value of the + * GODEBUG environment variable (see package runtime) to go or cgo, as in: + * + * ``` + * export GODEBUG=netdns=go # force pure Go resolver + * export GODEBUG=netdns=cgo # force native resolver (cgo, win32) + * ``` + * + * The decision can also be forced while building the Go source tree + * by setting the netgo or netcgo build tag. + * The netgo build tag disables entirely the use of the native (CGO) resolver, + * meaning the Go resolver is the only one that can be used. + * With the netcgo build tag the native and the pure Go resolver are compiled into the binary, + * but the native (CGO) resolver is preferred over the Go resolver. + * With netcgo, the Go resolver can still be forced at runtime with GODEBUG=netdns=go. + * + * A numeric netdns setting, as in GODEBUG=netdns=1, causes the resolver + * to print debugging information about its decisions. + * To force a particular resolver while also printing debugging information, + * join the two settings by a plus sign, as in GODEBUG=netdns=go+1. + * + * The Go resolver will send an EDNS0 additional header with a DNS request, + * to signal a willingness to accept a larger DNS packet size. + * This can reportedly cause sporadic failures with the DNS server run + * by some modems and routers. Setting GODEBUG=netedns0=0 will disable + * sending the additional header. + * + * On macOS, if Go code that uses the net package is built with + * -buildmode=c-archive, linking the resulting archive into a C program + * requires passing -lresolv when linking the C code. + * + * On Plan 9, the resolver always accesses /net/cs and /net/dns. + * + * On Windows, in Go 1.18.x and earlier, the resolver always used C + * library functions, such as GetAddrInfo and DnsQuery. + */ +namespace net { + /** + * Conn is a generic stream-oriented network connection. + * + * Multiple goroutines may invoke methods on a Conn simultaneously. + */ + interface Conn { + [key:string]: any; + /** + * Read reads data from the connection. + * Read can be made to time out and return an error after a fixed + * time limit; see SetDeadline and SetReadDeadline. */ - setMaxIdleConns(n: number): void - } - interface DB { + read(b: string|Array): number /** - * SetMaxOpenConns sets the maximum number of open connections to the database. - * - * If MaxIdleConns is greater than 0 and the new MaxOpenConns is less than - * MaxIdleConns, then MaxIdleConns will be reduced to match the new - * MaxOpenConns limit. - * - * If n <= 0, then there is no limit on the number of open connections. - * The default is 0 (unlimited). + * Write writes data to the connection. 
+ * Write can be made to time out and return an error after a fixed + * time limit; see SetDeadline and SetWriteDeadline. */ - setMaxOpenConns(n: number): void - } - interface DB { + write(b: string|Array): number /** - * SetConnMaxLifetime sets the maximum amount of time a connection may be reused. - * - * Expired connections may be closed lazily before reuse. - * - * If d <= 0, connections are not closed due to a connection's age. + * Close closes the connection. + * Any blocked Read or Write operations will be unblocked and return errors. */ - setConnMaxLifetime(d: time.Duration): void - } - interface DB { + close(): void /** - * SetConnMaxIdleTime sets the maximum amount of time a connection may be idle. - * - * Expired connections may be closed lazily before reuse. - * - * If d <= 0, connections are not closed due to a connection's idle time. + * LocalAddr returns the local network address, if known. */ - setConnMaxIdleTime(d: time.Duration): void - } - interface DB { + localAddr(): Addr /** - * Stats returns database statistics. + * RemoteAddr returns the remote network address, if known. */ - stats(): DBStats - } - interface DB { + remoteAddr(): Addr /** - * PrepareContext creates a prepared statement for later queries or executions. - * Multiple queries or executions may be run concurrently from the - * returned statement. - * The caller must call the statement's [*Stmt.Close] method - * when the statement is no longer needed. + * SetDeadline sets the read and write deadlines associated + * with the connection. It is equivalent to calling both + * SetReadDeadline and SetWriteDeadline. * - * The provided context is used for the preparation of the statement, not for the - * execution of the statement. + * A deadline is an absolute time after which I/O operations + * fail instead of blocking. The deadline applies to all future + * and pending I/O, not just the immediately following call to + * Read or Write. After a deadline has been exceeded, the + * connection can be refreshed by setting a deadline in the future. + * + * If the deadline is exceeded a call to Read or Write or to other + * I/O methods will return an error that wraps os.ErrDeadlineExceeded. + * This can be tested using errors.Is(err, os.ErrDeadlineExceeded). + * The error's Timeout method will return true, but note that there + * are other possible errors for which the Timeout method will + * return true even if the deadline has not been exceeded. + * + * An idle timeout can be implemented by repeatedly extending + * the deadline after successful Read or Write calls. + * + * A zero value for t means I/O operations will not time out. */ - prepareContext(ctx: context.Context, query: string): (Stmt) - } - interface DB { + setDeadline(t: time.Time): void /** - * Prepare creates a prepared statement for later queries or executions. - * Multiple queries or executions may be run concurrently from the - * returned statement. - * The caller must call the statement's [*Stmt.Close] method - * when the statement is no longer needed. - * - * Prepare uses [context.Background] internally; to specify the context, use - * [DB.PrepareContext]. + * SetReadDeadline sets the deadline for future Read calls + * and any currently-blocked Read call. + * A zero value for t means Read will not time out. */ - prepare(query: string): (Stmt) - } - interface DB { + setReadDeadline(t: time.Time): void /** - * ExecContext executes a query without returning any rows. - * The args are for any placeholder parameters in the query. 
+ * SetWriteDeadline sets the deadline for future Write calls + * and any currently-blocked Write call. + * Even if write times out, it may return n > 0, indicating that + * some of the data was successfully written. + * A zero value for t means Write will not time out. */ - execContext(ctx: context.Context, query: string, ...args: any[]): Result + setWriteDeadline(t: time.Time): void } - interface DB { + /** + * A Listener is a generic network listener for stream-oriented protocols. + * + * Multiple goroutines may invoke methods on a Listener simultaneously. + */ + interface Listener { + [key:string]: any; /** - * Exec executes a query without returning any rows. - * The args are for any placeholder parameters in the query. - * - * Exec uses [context.Background] internally; to specify the context, use - * [DB.ExecContext]. + * Accept waits for and returns the next connection to the listener. */ - exec(query: string, ...args: any[]): Result - } - interface DB { + accept(): Conn /** - * QueryContext executes a query that returns rows, typically a SELECT. - * The args are for any placeholder parameters in the query. + * Close closes the listener. + * Any blocked Accept operations will be unblocked and return errors. */ - queryContext(ctx: context.Context, query: string, ...args: any[]): (Rows) - } - interface DB { + close(): void /** - * Query executes a query that returns rows, typically a SELECT. - * The args are for any placeholder parameters in the query. - * - * Query uses [context.Background] internally; to specify the context, use - * [DB.QueryContext]. + * Addr returns the listener's network address. */ - query(query: string, ...args: any[]): (Rows) + addr(): Addr } - interface DB { - /** - * QueryRowContext executes a query that is expected to return at most one row. - * QueryRowContext always returns a non-nil value. Errors are deferred until - * [Row]'s Scan method is called. - * If the query selects no rows, the [*Row.Scan] will return [ErrNoRows]. - * Otherwise, [*Row.Scan] scans the first selected row and discards - * the rest. - */ - queryRowContext(ctx: context.Context, query: string, ...args: any[]): (Row) - } - interface DB { - /** - * QueryRow executes a query that is expected to return at most one row. - * QueryRow always returns a non-nil value. Errors are deferred until - * [Row]'s Scan method is called. - * If the query selects no rows, the [*Row.Scan] will return [ErrNoRows]. - * Otherwise, [*Row.Scan] scans the first selected row and discards - * the rest. - * - * QueryRow uses [context.Background] internally; to specify the context, use - * [DB.QueryRowContext]. - */ - queryRow(query: string, ...args: any[]): (Row) - } - interface DB { - /** - * BeginTx starts a transaction. - * - * The provided context is used until the transaction is committed or rolled back. - * If the context is canceled, the sql package will roll back - * the transaction. [Tx.Commit] will return an error if the context provided to - * BeginTx is canceled. - * - * The provided [TxOptions] is optional and may be nil if defaults should be used. - * If a non-default isolation level is used that the driver doesn't support, - * an error will be returned. - */ - beginTx(ctx: context.Context, opts: TxOptions): (Tx) +} + +/** + * Package multipart implements MIME multipart parsing, as defined in RFC + * 2046. + * + * The implementation is sufficient for HTTP (RFC 2388) and the multipart + * bodies generated by popular browsers. 
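The Conn and Listener docs above describe deadlines as absolute times after which pending and future I/O fails with an error wrapping os.ErrDeadlineExceeded. A minimal Go echo-style sketch of the documented Accept loop with a per-connection read deadline (the address and timeout are arbitrary):

```
package main

import (
	"errors"
	"io"
	"log"
	"net"
	"os"
	"time"
)

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	defer ln.Close()

	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		go func(c net.Conn) {
			defer c.Close()
			// Absolute deadline: reads fail once this instant passes.
			c.SetReadDeadline(time.Now().Add(5 * time.Second))
			if _, err := io.Copy(c, c); err != nil {
				if errors.Is(err, os.ErrDeadlineExceeded) {
					log.Print("client idle too long")
					return
				}
				log.Print(err)
			}
		}(conn)
	}
}
```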
+ * + * # Limits + * + * To protect against malicious inputs, this package sets limits on the size + * of the MIME data it processes. + * + * [Reader.NextPart] and [Reader.NextRawPart] limit the number of headers in a + * part to 10000 and [Reader.ReadForm] limits the total number of headers in all + * FileHeaders to 10000. + * These limits may be adjusted with the GODEBUG=multipartmaxheaders= + * setting. + * + * Reader.ReadForm further limits the number of parts in a form to 1000. + * This limit may be adjusted with the GODEBUG=multipartmaxparts= + * setting. + */ +namespace multipart { + /** + * A FileHeader describes a file part of a multipart request. + */ + interface FileHeader { + filename: string + header: textproto.MIMEHeader + size: number } - interface DB { + interface FileHeader { /** - * Begin starts a transaction. The default isolation level is dependent on - * the driver. - * - * Begin uses [context.Background] internally; to specify the context, use - * [DB.BeginTx]. + * Open opens and returns the [FileHeader]'s associated File. */ - begin(): (Tx) + open(): File } - interface DB { +} + +/** + * Package http provides HTTP client and server implementations. + * + * [Get], [Head], [Post], and [PostForm] make HTTP (or HTTPS) requests: + * + * ``` + * resp, err := http.Get("http://example.com/") + * ... + * resp, err := http.Post("http://example.com/upload", "image/jpeg", &buf) + * ... + * resp, err := http.PostForm("http://example.com/form", + * url.Values{"key": {"Value"}, "id": {"123"}}) + * ``` + * + * The caller must close the response body when finished with it: + * + * ``` + * resp, err := http.Get("http://example.com/") + * if err != nil { + * // handle error + * } + * defer resp.Body.Close() + * body, err := io.ReadAll(resp.Body) + * // ... + * ``` + * + * # Clients and Transports + * + * For control over HTTP client headers, redirect policy, and other + * settings, create a [Client]: + * + * ``` + * client := &http.Client{ + * CheckRedirect: redirectPolicyFunc, + * } + * + * resp, err := client.Get("http://example.com") + * // ... + * + * req, err := http.NewRequest("GET", "http://example.com", nil) + * // ... + * req.Header.Add("If-None-Match", `W/"wyzzy"`) + * resp, err := client.Do(req) + * // ... + * ``` + * + * For control over proxies, TLS configuration, keep-alives, + * compression, and other settings, create a [Transport]: + * + * ``` + * tr := &http.Transport{ + * MaxIdleConns: 10, + * IdleConnTimeout: 30 * time.Second, + * DisableCompression: true, + * } + * client := &http.Client{Transport: tr} + * resp, err := client.Get("https://example.com") + * ``` + * + * Clients and Transports are safe for concurrent use by multiple + * goroutines and for efficiency should only be created once and re-used. + * + * # Servers + * + * ListenAndServe starts an HTTP server with a given address and handler. + * The handler is usually nil, which means to use [DefaultServeMux]. 
+ * [Handle] and [HandleFunc] add handlers to [DefaultServeMux]: + * + * ``` + * http.Handle("/foo", fooHandler) + * + * http.HandleFunc("/bar", func(w http.ResponseWriter, r *http.Request) { + * fmt.Fprintf(w, "Hello, %q", html.EscapeString(r.URL.Path)) + * }) + * + * log.Fatal(http.ListenAndServe(":8080", nil)) + * ``` + * + * More control over the server's behavior is available by creating a + * custom Server: + * + * ``` + * s := &http.Server{ + * Addr: ":8080", + * Handler: myHandler, + * ReadTimeout: 10 * time.Second, + * WriteTimeout: 10 * time.Second, + * MaxHeaderBytes: 1 << 20, + * } + * log.Fatal(s.ListenAndServe()) + * ``` + * + * # HTTP/2 + * + * Starting with Go 1.6, the http package has transparent support for the + * HTTP/2 protocol when using HTTPS. Programs that must disable HTTP/2 + * can do so by setting [Transport.TLSNextProto] (for clients) or + * [Server.TLSNextProto] (for servers) to a non-nil, empty + * map. Alternatively, the following GODEBUG settings are + * currently supported: + * + * ``` + * GODEBUG=http2client=0 # disable HTTP/2 client support + * GODEBUG=http2server=0 # disable HTTP/2 server support + * GODEBUG=http2debug=1 # enable verbose HTTP/2 debug logs + * GODEBUG=http2debug=2 # ... even more verbose, with frame dumps + * ``` + * + * Please report any issues before disabling HTTP/2 support: https://golang.org/s/http2bug + * + * The http package's [Transport] and [Server] both automatically enable + * HTTP/2 support for simple configurations. To enable HTTP/2 for more + * complex configurations, to use lower-level HTTP/2 features, or to use + * a newer version of Go's http2 package, import "golang.org/x/net/http2" + * directly and use its ConfigureTransport and/or ConfigureServer + * functions. Manually configuring HTTP/2 via the golang.org/x/net/http2 + * package takes precedence over the net/http package's built-in HTTP/2 + * support. + */ +namespace http { + // @ts-ignore + import mathrand = rand + /** + * PushOptions describes options for [Pusher.Push]. + */ + interface PushOptions { /** - * Driver returns the database's underlying driver. + * Method specifies the HTTP method for the promised request. + * If set, it must be "GET" or "HEAD". Empty means "GET". */ - driver(): any - } - interface DB { + method: string /** - * Conn returns a single connection by either opening a new connection - * or returning an existing connection from the connection pool. Conn will - * block until either a connection is returned or ctx is canceled. - * Queries run on the same Conn will be run in the same database session. - * - * Every Conn must be returned to the database pool after use by - * calling [Conn.Close]. + * Header specifies additional promised request headers. This cannot + * include HTTP/2 pseudo header fields like ":path" and ":scheme", + * which will be added automatically. */ - conn(ctx: context.Context): (Conn) + header: Header } + // @ts-ignore + import urlpkg = url /** - * Tx is an in-progress database transaction. - * - * A transaction must end with a call to [Tx.Commit] or [Tx.Rollback]. - * - * After a call to [Tx.Commit] or [Tx.Rollback], all operations on the - * transaction fail with [ErrTxDone]. + * A Request represents an HTTP request received by a server + * or to be sent by a client. * - * The statements prepared for a transaction by calling - * the transaction's [Tx.Prepare] or [Tx.Stmt] methods are closed - * by the call to [Tx.Commit] or [Tx.Rollback]. + * The field semantics differ slightly between client and server + * usage. 
In addition to the notes on the fields below, see the + * documentation for [Request.Write] and [RoundTripper]. */ - interface Tx { - } - interface Tx { - /** - * Commit commits the transaction. - */ - commit(): void - } - interface Tx { + interface Request { /** - * Rollback aborts the transaction. + * Method specifies the HTTP method (GET, POST, PUT, etc.). + * For client requests, an empty string means GET. */ - rollback(): void - } - interface Tx { + method: string /** - * PrepareContext creates a prepared statement for use within a transaction. - * - * The returned statement operates within the transaction and will be closed - * when the transaction has been committed or rolled back. + * URL specifies either the URI being requested (for server + * requests) or the URL to access (for client requests). * - * To use an existing prepared statement on this transaction, see [Tx.Stmt]. + * For server requests, the URL is parsed from the URI + * supplied on the Request-Line as stored in RequestURI. For + * most requests, fields other than Path and RawQuery will be + * empty. (See RFC 7230, Section 5.3) * - * The provided context will be used for the preparation of the context, not - * for the execution of the returned statement. The returned statement - * will run in the transaction context. + * For client requests, the URL's Host specifies the server to + * connect to, while the Request's Host field optionally + * specifies the Host header value to send in the HTTP + * request. */ - prepareContext(ctx: context.Context, query: string): (Stmt) - } - interface Tx { + url?: url.URL /** - * Prepare creates a prepared statement for use within a transaction. - * - * The returned statement operates within the transaction and will be closed - * when the transaction has been committed or rolled back. - * - * To use an existing prepared statement on this transaction, see [Tx.Stmt]. + * The protocol version for incoming server requests. * - * Prepare uses [context.Background] internally; to specify the context, use - * [Tx.PrepareContext]. + * For client requests, these fields are ignored. The HTTP + * client code always uses either HTTP/1.1 or HTTP/2. + * See the docs on Transport for details. */ - prepare(query: string): (Stmt) - } - interface Tx { + proto: string // "HTTP/1.0" + protoMajor: number // 1 + protoMinor: number // 0 /** - * StmtContext returns a transaction-specific prepared statement from - * an existing statement. + * Header contains the request header fields either received + * by the server or to be sent by the client. * - * Example: + * If a server received a request with header lines, * * ``` - * updateMoney, err := db.Prepare("UPDATE balance SET money=money+? WHERE id=?") - * ... - * tx, err := db.Begin() - * ... - * res, err := tx.StmtContext(ctx, updateMoney).Exec(123.45, 98293203) + * Host: example.com + * accept-encoding: gzip, deflate + * Accept-Language: en-us + * fOO: Bar + * foo: two * ``` * - * The provided context is used for the preparation of the statement, not for the - * execution of the statement. - * - * The returned statement operates within the transaction and will be closed - * when the transaction has been committed or rolled back. - */ - stmtContext(ctx: context.Context, stmt: Stmt): (Stmt) - } - interface Tx { - /** - * Stmt returns a transaction-specific prepared statement from - * an existing statement. - * - * Example: + * then * * ``` - * updateMoney, err := db.Prepare("UPDATE balance SET money=money+? WHERE id=?") - * ... - * tx, err := db.Begin() - * ... 
- * res, err := tx.Stmt(updateMoney).Exec(123.45, 98293203) + * Header = map[string][]string{ + * "Accept-Encoding": {"gzip, deflate"}, + * "Accept-Language": {"en-us"}, + * "Foo": {"Bar", "two"}, + * } * ``` * - * The returned statement operates within the transaction and will be closed - * when the transaction has been committed or rolled back. + * For incoming requests, the Host header is promoted to the + * Request.Host field and removed from the Header map. * - * Stmt uses [context.Background] internally; to specify the context, use - * [Tx.StmtContext]. + * HTTP defines that header names are case-insensitive. The + * request parser implements this by using CanonicalHeaderKey, + * making the first character and any characters following a + * hyphen uppercase and the rest lowercase. + * + * For client requests, certain headers such as Content-Length + * and Connection are automatically written when needed and + * values in Header may be ignored. See the documentation + * for the Request.Write method. */ - stmt(stmt: Stmt): (Stmt) - } - interface Tx { + header: Header /** - * ExecContext executes a query that doesn't return rows. - * For example: an INSERT and UPDATE. + * Body is the request's body. + * + * For client requests, a nil body means the request has no + * body, such as a GET request. The HTTP Client's Transport + * is responsible for calling the Close method. + * + * For server requests, the Request Body is always non-nil + * but will return EOF immediately when no body is present. + * The Server will close the request body. The ServeHTTP + * Handler does not need to. + * + * Body must allow Read to be called concurrently with Close. + * In particular, calling Close should unblock a Read waiting + * for input. */ - execContext(ctx: context.Context, query: string, ...args: any[]): Result - } - interface Tx { + body: io.ReadCloser /** - * Exec executes a query that doesn't return rows. - * For example: an INSERT and UPDATE. + * GetBody defines an optional func to return a new copy of + * Body. It is used for client requests when a redirect requires + * reading the body more than once. Use of GetBody still + * requires setting Body. * - * Exec uses [context.Background] internally; to specify the context, use - * [Tx.ExecContext]. + * For server requests, it is unused. */ - exec(query: string, ...args: any[]): Result - } - interface Tx { + getBody: () => io.ReadCloser /** - * QueryContext executes a query that returns rows, typically a SELECT. + * ContentLength records the length of the associated content. + * The value -1 indicates that the length is unknown. + * Values >= 0 indicate that the given number of bytes may + * be read from Body. + * + * For client requests, a value of 0 with a non-nil Body is + * also treated as unknown. */ - queryContext(ctx: context.Context, query: string, ...args: any[]): (Rows) - } - interface Tx { + contentLength: number /** - * Query executes a query that returns rows, typically a SELECT. - * - * Query uses [context.Background] internally; to specify the context, use - * [Tx.QueryContext]. + * TransferEncoding lists the transfer encodings from outermost to + * innermost. An empty list denotes the "identity" encoding. + * TransferEncoding can usually be ignored; chunked encoding is + * automatically added and removed as necessary when sending and + * receiving requests. 
*/ - query(query: string, ...args: any[]): (Rows) - } - interface Tx { + transferEncoding: Array /** - * QueryRowContext executes a query that is expected to return at most one row. - * QueryRowContext always returns a non-nil value. Errors are deferred until - * [Row]'s Scan method is called. - * If the query selects no rows, the [*Row.Scan] will return [ErrNoRows]. - * Otherwise, the [*Row.Scan] scans the first selected row and discards - * the rest. + * Close indicates whether to close the connection after + * replying to this request (for servers) or after sending this + * request and reading its response (for clients). + * + * For server requests, the HTTP server handles this automatically + * and this field is not needed by Handlers. + * + * For client requests, setting this field prevents re-use of + * TCP connections between requests to the same hosts, as if + * Transport.DisableKeepAlives were set. */ - queryRowContext(ctx: context.Context, query: string, ...args: any[]): (Row) - } - interface Tx { + close: boolean /** - * QueryRow executes a query that is expected to return at most one row. - * QueryRow always returns a non-nil value. Errors are deferred until - * [Row]'s Scan method is called. - * If the query selects no rows, the [*Row.Scan] will return [ErrNoRows]. - * Otherwise, the [*Row.Scan] scans the first selected row and discards - * the rest. + * For server requests, Host specifies the host on which the + * URL is sought. For HTTP/1 (per RFC 7230, section 5.4), this + * is either the value of the "Host" header or the host name + * given in the URL itself. For HTTP/2, it is the value of the + * ":authority" pseudo-header field. + * It may be of the form "host:port". For international domain + * names, Host may be in Punycode or Unicode form. Use + * golang.org/x/net/idna to convert it to either format if + * needed. + * To prevent DNS rebinding attacks, server Handlers should + * validate that the Host header has a value for which the + * Handler considers itself authoritative. The included + * ServeMux supports patterns registered to particular host + * names and thus protects its registered Handlers. * - * QueryRow uses [context.Background] internally; to specify the context, use - * [Tx.QueryRowContext]. + * For client requests, Host optionally overrides the Host + * header to send. If empty, the Request.Write method uses + * the value of URL.Host. Host may contain an international + * domain name. */ - queryRow(query: string, ...args: any[]): (Row) - } - /** - * Stmt is a prepared statement. - * A Stmt is safe for concurrent use by multiple goroutines. - * - * If a Stmt is prepared on a [Tx] or [Conn], it will be bound to a single - * underlying connection forever. If the [Tx] or [Conn] closes, the Stmt will - * become unusable and all operations will return an error. - * If a Stmt is prepared on a [DB], it will remain usable for the lifetime of the - * [DB]. When the Stmt needs to execute on a new underlying connection, it will - * prepare itself on the new connection automatically. - */ - interface Stmt { - } - interface Stmt { + host: string /** - * ExecContext executes a prepared statement with the given arguments and - * returns a [Result] summarizing the effect of the statement. + * Form contains the parsed form data, including both the URL + * field's query parameters and the PATCH, POST, or PUT form data. + * This field is only available after ParseForm is called. + * The HTTP client ignores Form and uses Body instead. 
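The Request field docs above note that Form and PostForm are populated only after ParseForm is called, and that header lookups use canonicalized keys. A short Go handler sketch of that server-side flow (route and form field names are placeholders):

```
package main

import (
	"fmt"
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// Populate r.Form (URL query + body params) and r.PostForm (body only).
	if err := r.ParseForm(); err != nil {
		http.Error(w, "bad form", http.StatusBadRequest)
		return
	}
	// Header lookups use the canonicalized key, e.g. "Accept-Language".
	lang := r.Header.Get("Accept-Language")
	fmt.Fprintf(w, "host=%s name=%s lang=%s\n", r.Host, r.FormValue("name"), lang)
}

func main() {
	http.HandleFunc("/greet", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```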
*/ - execContext(ctx: context.Context, ...args: any[]): Result - } - interface Stmt { + form: url.Values /** - * Exec executes a prepared statement with the given arguments and - * returns a [Result] summarizing the effect of the statement. + * PostForm contains the parsed form data from PATCH, POST + * or PUT body parameters. * - * Exec uses [context.Background] internally; to specify the context, use - * [Stmt.ExecContext]. + * This field is only available after ParseForm is called. + * The HTTP client ignores PostForm and uses Body instead. */ - exec(...args: any[]): Result - } - interface Stmt { + postForm: url.Values /** - * QueryContext executes a prepared query statement with the given arguments - * and returns the query results as a [*Rows]. + * MultipartForm is the parsed multipart form, including file uploads. + * This field is only available after ParseMultipartForm is called. + * The HTTP client ignores MultipartForm and uses Body instead. */ - queryContext(ctx: context.Context, ...args: any[]): (Rows) - } - interface Stmt { + multipartForm?: multipart.Form /** - * Query executes a prepared query statement with the given arguments - * and returns the query results as a *Rows. + * Trailer specifies additional headers that are sent after the request + * body. * - * Query uses [context.Background] internally; to specify the context, use - * [Stmt.QueryContext]. + * For server requests, the Trailer map initially contains only the + * trailer keys, with nil values. (The client declares which trailers it + * will later send.) While the handler is reading from Body, it must + * not reference Trailer. After reading from Body returns EOF, Trailer + * can be read again and will contain non-nil values, if they were sent + * by the client. + * + * For client requests, Trailer must be initialized to a map containing + * the trailer keys to later send. The values may be nil or their final + * values. The ContentLength must be 0 or -1, to send a chunked request. + * After the HTTP request is sent the map values can be updated while + * the request body is read. Once the body returns EOF, the caller must + * not mutate Trailer. + * + * Few HTTP clients, servers, or proxies support HTTP trailers. */ - query(...args: any[]): (Rows) - } - interface Stmt { + trailer: Header /** - * QueryRowContext executes a prepared query statement with the given arguments. - * If an error occurs during the execution of the statement, that error will - * be returned by a call to Scan on the returned [*Row], which is always non-nil. - * If the query selects no rows, the [*Row.Scan] will return [ErrNoRows]. - * Otherwise, the [*Row.Scan] scans the first selected row and discards - * the rest. + * RemoteAddr allows HTTP servers and other software to record + * the network address that sent the request, usually for + * logging. This field is not filled in by ReadRequest and + * has no defined format. The HTTP server in this package + * sets RemoteAddr to an "IP:port" address before invoking a + * handler. + * This field is ignored by the HTTP client. */ - queryRowContext(ctx: context.Context, ...args: any[]): (Row) - } - interface Stmt { + remoteAddr: string /** - * QueryRow executes a prepared query statement with the given arguments. - * If an error occurs during the execution of the statement, that error will - * be returned by a call to Scan on the returned [*Row], which is always non-nil. - * If the query selects no rows, the [*Row.Scan] will return [ErrNoRows]. 
- * Otherwise, the [*Row.Scan] scans the first selected row and discards - * the rest. - * - * Example usage: + * RequestURI is the unmodified request-target of the + * Request-Line (RFC 7230, Section 3.1.1) as sent by the client + * to a server. Usually the URL field should be used instead. + * It is an error to set this field in an HTTP client request. + */ + requestURI: string + /** + * TLS allows HTTP servers and other software to record + * information about the TLS connection on which the request + * was received. This field is not filled in by ReadRequest. + * The HTTP server in this package sets the field for + * TLS-enabled connections before invoking a handler; + * otherwise it leaves the field nil. + * This field is ignored by the HTTP client. + */ + tls?: any + /** + * Cancel is an optional channel whose closure indicates that the client + * request should be regarded as canceled. Not all implementations of + * RoundTripper may support Cancel. * - * ``` - * var name string - * err := nameByUseridStmt.QueryRow(id).Scan(&name) - * ``` + * For server requests, this field is not applicable. * - * QueryRow uses [context.Background] internally; to specify the context, use - * [Stmt.QueryRowContext]. + * Deprecated: Set the Request's context with NewRequestWithContext + * instead. If a Request's Cancel field and context are both + * set, it is undefined whether Cancel is respected. */ - queryRow(...args: any[]): (Row) - } - interface Stmt { + cancel: undefined /** - * Close closes the statement. + * Response is the redirect response which caused this request + * to be created. This field is only populated during client + * redirects. */ - close(): void - } - /** - * Rows is the result of a query. Its cursor starts before the first row - * of the result set. Use [Rows.Next] to advance from row to row. - */ - interface Rows { + response?: Response + /** + * Pattern is the [ServeMux] pattern that matched the request. + * It is empty if the request was not matched against a pattern. + */ + pattern: string } - interface Rows { + interface Request { /** - * Next prepares the next result row for reading with the [Rows.Scan] method. It - * returns true on success, or false if there is no next result row or an error - * happened while preparing it. [Rows.Err] should be consulted to distinguish between - * the two cases. + * Context returns the request's context. To change the context, use + * [Request.Clone] or [Request.WithContext]. * - * Every call to [Rows.Scan], even the first one, must be preceded by a call to [Rows.Next]. + * The returned context is always non-nil; it defaults to the + * background context. + * + * For outgoing client requests, the context controls cancellation. + * + * For incoming server requests, the context is canceled when the + * client's connection closes, the request is canceled (with HTTP/2), + * or when the ServeHTTP method returns. */ - next(): boolean + context(): context.Context } - interface Rows { + interface Request { /** - * NextResultSet prepares the next result set for reading. It reports whether - * there is further result sets, or false if there is no further result set - * or if there is an error advancing to it. The [Rows.Err] method should be consulted - * to distinguish between the two cases. + * WithContext returns a shallow copy of r with its context changed + * to ctx. The provided ctx must be non-nil. * - * After calling NextResultSet, the [Rows.Next] method should always be called before - * scanning. 
If there are further result sets they may not have rows in the result - * set. + * For outgoing client request, the context controls the entire + * lifetime of a request and its response: obtaining a connection, + * sending the request, and reading the response headers and body. + * + * To create a new request with a context, use [NewRequestWithContext]. + * To make a deep copy of a request with a new context, use [Request.Clone]. */ - nextResultSet(): boolean + withContext(ctx: context.Context): (Request) } - interface Rows { + interface Request { /** - * Err returns the error, if any, that was encountered during iteration. - * Err may be called after an explicit or implicit [Rows.Close]. + * Clone returns a deep copy of r with its context changed to ctx. + * The provided ctx must be non-nil. + * + * Clone only makes a shallow copy of the Body field. + * + * For an outgoing client request, the context controls the entire + * lifetime of a request and its response: obtaining a connection, + * sending the request, and reading the response headers and body. */ - err(): void + clone(ctx: context.Context): (Request) } - interface Rows { + interface Request { /** - * Columns returns the column names. - * Columns returns an error if the rows are closed. + * ProtoAtLeast reports whether the HTTP protocol used + * in the request is at least major.minor. */ - columns(): Array + protoAtLeast(major: number, minor: number): boolean } - interface Rows { + interface Request { /** - * ColumnTypes returns column information such as column type, length, - * and nullable. Some information may not be available from some drivers. + * UserAgent returns the client's User-Agent, if sent in the request. */ - columnTypes(): Array<(ColumnType | undefined)> + userAgent(): string } - interface Rows { + interface Request { /** - * Scan copies the columns in the current row into the values pointed - * at by dest. The number of values in dest must be the same as the - * number of columns in [Rows]. - * - * Scan converts columns read from the database into the following - * common Go types and special types provided by the sql package: - * - * ``` - * *string - * *[]byte - * *int, *int8, *int16, *int32, *int64 - * *uint, *uint8, *uint16, *uint32, *uint64 - * *bool - * *float32, *float64 - * *interface{} - * *RawBytes - * *Rows (cursor value) - * any type implementing Scanner (see Scanner docs) - * ``` - * - * In the most simple case, if the type of the value from the source - * column is an integer, bool or string type T and dest is of type *T, - * Scan simply assigns the value through the pointer. - * - * Scan also converts between string and numeric types, as long as no - * information would be lost. While Scan stringifies all numbers - * scanned from numeric database columns into *string, scans into - * numeric types are checked for overflow. For example, a float64 with - * value 300 or a string with value "300" can scan into a uint16, but - * not into a uint8, though float64(255) or "255" can scan into a - * uint8. One exception is that scans of some float64 numbers to - * strings may lose information when stringifying. In general, scan - * floating point columns into *float64. - * - * If a dest argument has type *[]byte, Scan saves in that argument a - * copy of the corresponding data. The copy is owned by the caller and - * can be modified and held indefinitely. The copy can be avoided by - * using an argument of type [*RawBytes] instead; see the documentation - * for [RawBytes] for restrictions on its use. 
- * - * If an argument has type *interface{}, Scan copies the value - * provided by the underlying driver without conversion. When scanning - * from a source value of type []byte to *interface{}, a copy of the - * slice is made and the caller owns the result. - * - * Source values of type [time.Time] may be scanned into values of type - * *time.Time, *interface{}, *string, or *[]byte. When converting to - * the latter two, [time.RFC3339Nano] is used. - * - * Source values of type bool may be scanned into types *bool, - * *interface{}, *string, *[]byte, or [*RawBytes]. - * - * For scanning into *bool, the source may be true, false, 1, 0, or - * string inputs parseable by [strconv.ParseBool]. - * - * Scan can also convert a cursor returned from a query, such as - * "select cursor(select * from my_table) from dual", into a - * [*Rows] value that can itself be scanned from. The parent - * select query will close any cursor [*Rows] if the parent [*Rows] is closed. - * - * If any of the first arguments implementing [Scanner] returns an error, - * that error will be wrapped in the returned error. + * Cookies parses and returns the HTTP cookies sent with the request. */ - scan(...dest: any[]): void + cookies(): Array<(Cookie | undefined)> } - interface Rows { + interface Request { /** - * Close closes the [Rows], preventing further enumeration. If [Rows.Next] is called - * and returns false and there are no further result sets, - * the [Rows] are closed automatically and it will suffice to check the - * result of [Rows.Err]. Close is idempotent and does not affect the result of [Rows.Err]. + * CookiesNamed parses and returns the named HTTP cookies sent with the request + * or an empty slice if none matched. */ - close(): void + cookiesNamed(name: string): Array<(Cookie | undefined)> } - /** - * A Result summarizes an executed SQL command. - */ - interface Result { - [key:string]: any; + interface Request { /** - * LastInsertId returns the integer generated by the database - * in response to a command. Typically this will be from an - * "auto increment" column when inserting a new row. Not all - * databases support this feature, and the syntax of such - * statements varies. + * Cookie returns the named cookie provided in the request or + * [ErrNoCookie] if not found. + * If multiple cookies match the given name, only one cookie will + * be returned. */ - lastInsertId(): number + cookie(name: string): (Cookie) + } + interface Request { /** - * RowsAffected returns the number of rows affected by an - * update, insert, or delete. Not every database or database - * driver may support this. + * AddCookie adds a cookie to the request. Per RFC 6265 section 5.4, + * AddCookie does not attach more than one [Cookie] header field. That + * means all cookies, if any, are written into the same line, + * separated by semicolon. + * AddCookie only sanitizes c's name and value, and does not sanitize + * a Cookie header already present in the request. */ - rowsAffected(): number + addCookie(c: Cookie): void } -} - -/** - * Package syntax parses regular expressions into parse trees and compiles - * parse trees into programs. Most clients of regular expressions will use the - * facilities of package [regexp] (such as [regexp.Compile] and [regexp.Match]) instead of this package. - * - * # Syntax - * - * The regular expression syntax understood by this package when parsing with the [Perl] flag is as follows. - * Parts of the syntax can be disabled by passing alternate flags to [Parse]. 
- * - * Single characters: - * - * ``` - * . any character, possibly including newline (flag s=true) - * [xyz] character class - * [^xyz] negated character class - * \d Perl character class - * \D negated Perl character class - * [[:alpha:]] ASCII character class - * [[:^alpha:]] negated ASCII character class - * \pN Unicode character class (one-letter name) - * \p{Greek} Unicode character class - * \PN negated Unicode character class (one-letter name) - * \P{Greek} negated Unicode character class - * ``` - * - * Composites: - * - * ``` - * xy x followed by y - * x|y x or y (prefer x) - * ``` - * - * Repetitions: - * - * ``` - * x* zero or more x, prefer more - * x+ one or more x, prefer more - * x? zero or one x, prefer one - * x{n,m} n or n+1 or ... or m x, prefer more - * x{n,} n or more x, prefer more - * x{n} exactly n x - * x*? zero or more x, prefer fewer - * x+? one or more x, prefer fewer - * x?? zero or one x, prefer zero - * x{n,m}? n or n+1 or ... or m x, prefer fewer - * x{n,}? n or more x, prefer fewer - * x{n}? exactly n x - * ``` - * - * Implementation restriction: The counting forms x{n,m}, x{n,}, and x{n} - * reject forms that create a minimum or maximum repetition count above 1000. - * Unlimited repetitions are not subject to this restriction. - * - * Grouping: - * - * ``` - * (re) numbered capturing group (submatch) - * (?Pre) named & numbered capturing group (submatch) - * (?re) named & numbered capturing group (submatch) - * (?:re) non-capturing group - * (?flags) set flags within current group; non-capturing - * (?flags:re) set flags during re; non-capturing - * - * Flag syntax is xyz (set) or -xyz (clear) or xy-z (set xy, clear z). The flags are: - * - * i case-insensitive (default false) - * m multi-line mode: ^ and $ match begin/end line in addition to begin/end text (default false) - * s let . match \n (default false) - * U ungreedy: swap meaning of x* and x*?, x+ and x+?, etc (default false) - * ``` - * - * Empty strings: - * - * ``` - * ^ at beginning of text or line (flag m=true) - * $ at end of text (like \z not \Z) or line (flag m=true) - * \A at beginning of text - * \b at ASCII word boundary (\w on one side and \W, \A, or \z on the other) - * \B not at ASCII word boundary - * \z at end of text - * ``` - * - * Escape sequences: - * - * ``` - * \a bell (== \007) - * \f form feed (== \014) - * \t horizontal tab (== \011) - * \n newline (== \012) - * \r carriage return (== \015) - * \v vertical tab character (== \013) - * \* literal *, for any punctuation character * - * \123 octal character code (up to three digits) - * \x7F hex character code (exactly two digits) - * \x{10FFFF} hex character code - * \Q...\E literal text ... even if ... 
has punctuation - * ``` - * - * Character class elements: - * - * ``` - * x single character - * A-Z character range (inclusive) - * \d Perl character class - * [:foo:] ASCII character class foo - * \p{Foo} Unicode character class Foo - * \pF Unicode character class F (one-letter name) - * ``` - * - * Named character classes as character class elements: - * - * ``` - * [\d] digits (== \d) - * [^\d] not digits (== \D) - * [\D] not digits (== \D) - * [^\D] not not digits (== \d) - * [[:name:]] named ASCII class inside character class (== [:name:]) - * [^[:name:]] named ASCII class inside negated character class (== [:^name:]) - * [\p{Name}] named Unicode property inside character class (== \p{Name}) - * [^\p{Name}] named Unicode property inside negated character class (== \P{Name}) - * ``` - * - * Perl character classes (all ASCII-only): - * - * ``` - * \d digits (== [0-9]) - * \D not digits (== [^0-9]) - * \s whitespace (== [\t\n\f\r ]) - * \S not whitespace (== [^\t\n\f\r ]) - * \w word characters (== [0-9A-Za-z_]) - * \W not word characters (== [^0-9A-Za-z_]) - * ``` - * - * ASCII character classes: - * - * ``` - * [[:alnum:]] alphanumeric (== [0-9A-Za-z]) - * [[:alpha:]] alphabetic (== [A-Za-z]) - * [[:ascii:]] ASCII (== [\x00-\x7F]) - * [[:blank:]] blank (== [\t ]) - * [[:cntrl:]] control (== [\x00-\x1F\x7F]) - * [[:digit:]] digits (== [0-9]) - * [[:graph:]] graphical (== [!-~] == [A-Za-z0-9!"#$%&'()*+,\-./:;<=>?@[\\\]^_`{|}~]) - * [[:lower:]] lower case (== [a-z]) - * [[:print:]] printable (== [ -~] == [ [:graph:]]) - * [[:punct:]] punctuation (== [!-/:-@[-`{-~]) - * [[:space:]] whitespace (== [\t\n\v\f\r ]) - * [[:upper:]] upper case (== [A-Z]) - * [[:word:]] word characters (== [0-9A-Za-z_]) - * [[:xdigit:]] hex digit (== [0-9A-Fa-f]) - * ``` - * - * Unicode character classes are those in [unicode.Categories] and [unicode.Scripts]. - */ -namespace syntax { - /** - * Flags control the behavior of the parser and record information about regexp context. - */ - interface Flags extends Number{} -} - -/** - * Package net provides a portable interface for network I/O, including - * TCP/IP, UDP, domain name resolution, and Unix domain sockets. - * - * Although the package provides access to low-level networking - * primitives, most clients will need only the basic interface provided - * by the [Dial], [Listen], and Accept functions and the associated - * [Conn] and [Listener] interfaces. The crypto/tls package uses - * the same interfaces and similar Dial and Listen functions. - * - * The Dial function connects to a server: - * - * ``` - * conn, err := net.Dial("tcp", "golang.org:80") - * if err != nil { - * // handle error - * } - * fmt.Fprintf(conn, "GET / HTTP/1.0\r\n\r\n") - * status, err := bufio.NewReader(conn).ReadString('\n') - * // ... - * ``` - * - * The Listen function creates servers: - * - * ``` - * ln, err := net.Listen("tcp", ":8080") - * if err != nil { - * // handle error - * } - * for { - * conn, err := ln.Accept() - * if err != nil { - * // handle error - * } - * go handleConnection(conn) - * } - * ``` - * - * # Name Resolution - * - * The method for resolving domain names, whether indirectly with functions like Dial - * or directly with functions like [LookupHost] and [LookupAddr], varies by operating system. - * - * On Unix systems, the resolver has two options for resolving names. 
- * It can use a pure Go resolver that sends DNS requests directly to the servers - * listed in /etc/resolv.conf, or it can use a cgo-based resolver that calls C - * library routines such as getaddrinfo and getnameinfo. - * - * On Unix the pure Go resolver is preferred over the cgo resolver, because a blocked DNS - * request consumes only a goroutine, while a blocked C call consumes an operating system thread. - * When cgo is available, the cgo-based resolver is used instead under a variety of - * conditions: on systems that do not let programs make direct DNS requests (OS X), - * when the LOCALDOMAIN environment variable is present (even if empty), - * when the RES_OPTIONS or HOSTALIASES environment variable is non-empty, - * when the ASR_CONFIG environment variable is non-empty (OpenBSD only), - * when /etc/resolv.conf or /etc/nsswitch.conf specify the use of features that the - * Go resolver does not implement. - * - * On all systems (except Plan 9), when the cgo resolver is being used - * this package applies a concurrent cgo lookup limit to prevent the system - * from running out of system threads. Currently, it is limited to 500 concurrent lookups. - * - * The resolver decision can be overridden by setting the netdns value of the - * GODEBUG environment variable (see package runtime) to go or cgo, as in: - * - * ``` - * export GODEBUG=netdns=go # force pure Go resolver - * export GODEBUG=netdns=cgo # force native resolver (cgo, win32) - * ``` - * - * The decision can also be forced while building the Go source tree - * by setting the netgo or netcgo build tag. - * - * A numeric netdns setting, as in GODEBUG=netdns=1, causes the resolver - * to print debugging information about its decisions. - * To force a particular resolver while also printing debugging information, - * join the two settings by a plus sign, as in GODEBUG=netdns=go+1. - * - * The Go resolver will send an EDNS0 additional header with a DNS request, - * to signal a willingness to accept a larger DNS packet size. - * This can reportedly cause sporadic failures with the DNS server run - * by some modems and routers. Setting GODEBUG=netedns0=0 will disable - * sending the additional header. - * - * On macOS, if Go code that uses the net package is built with - * -buildmode=c-archive, linking the resulting archive into a C program - * requires passing -lresolv when linking the C code. - * - * On Plan 9, the resolver always accesses /net/cs and /net/dns. - * - * On Windows, in Go 1.18.x and earlier, the resolver always used C - * library functions, such as GetAddrInfo and DnsQuery. - */ -namespace net { - /** - * Conn is a generic stream-oriented network connection. - * - * Multiple goroutines may invoke methods on a Conn simultaneously. - */ - interface Conn { - [key:string]: any; + interface Request { /** - * Read reads data from the connection. - * Read can be made to time out and return an error after a fixed - * time limit; see SetDeadline and SetReadDeadline. + * Referer returns the referring URL, if sent in the request. + * + * Referer is misspelled as in the request itself, a mistake from the + * earliest days of HTTP. This value can also be fetched from the + * [Header] map as Header["Referer"]; the benefit of making it available + * as a method is that the compiler can diagnose programs that use the + * alternate (correct English) spelling req.Referrer() but cannot + * diagnose programs that use Header["Referrer"]. 
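The cookie and User-Agent helpers re-declared above are thin wrappers over the corresponding Go methods, so a hedged sketch of typical usage from a JSVM script could look like the following. The `sessionHintFrom` helper and the "session" cookie name are hypothetical, and it is assumed that Go error returns (such as `ErrNoCookie`) surface as thrown exceptions in the JSVM bindings.

```ts
// Illustrative use of the cookie()/userAgent() helpers declared above.
function sessionHintFrom(req: http.Request): string {
    let session = "anonymous";
    try {
        // cookie() is assumed to throw when the named cookie is missing (ErrNoCookie).
        session = req.cookie("session").value;
    } catch (err) {
        // keep the "anonymous" fallback
    }
    return session + " via " + req.userAgent();
}
```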
*/ - read(b: string|Array): number + referer(): string + } + interface Request { /** - * Write writes data to the connection. - * Write can be made to time out and return an error after a fixed - * time limit; see SetDeadline and SetWriteDeadline. + * MultipartReader returns a MIME multipart reader if this is a + * multipart/form-data or a multipart/mixed POST request, else returns nil and an error. + * Use this function instead of [Request.ParseMultipartForm] to + * process the request body as a stream. */ - write(b: string|Array): number + multipartReader(): (multipart.Reader) + } + interface Request { /** - * Close closes the connection. - * Any blocked Read or Write operations will be unblocked and return errors. + * Write writes an HTTP/1.1 request, which is the header and body, in wire format. + * This method consults the following fields of the request: + * + * ``` + * Host + * URL + * Method (defaults to "GET") + * Header + * ContentLength + * TransferEncoding + * Body + * ``` + * + * If Body is present, Content-Length is <= 0 and [Request.TransferEncoding] + * hasn't been set to "identity", Write adds "Transfer-Encoding: + * chunked" to the header. Body is closed after it is sent. */ - close(): void + write(w: io.Writer): void + } + interface Request { /** - * LocalAddr returns the local network address, if known. + * WriteProxy is like [Request.Write] but writes the request in the form + * expected by an HTTP proxy. In particular, [Request.WriteProxy] writes the + * initial Request-URI line of the request with an absolute URI, per + * section 5.3 of RFC 7230, including the scheme and host. + * In either case, WriteProxy also writes a Host header, using + * either r.Host or r.URL.Host. */ - localAddr(): Addr + writeProxy(w: io.Writer): void + } + interface Request { /** - * RemoteAddr returns the remote network address, if known. + * BasicAuth returns the username and password provided in the request's + * Authorization header, if the request uses HTTP Basic Authentication. + * See RFC 2617, Section 2. */ - remoteAddr(): Addr + basicAuth(): [string, string, boolean] + } + interface Request { /** - * SetDeadline sets the read and write deadlines associated - * with the connection. It is equivalent to calling both - * SetReadDeadline and SetWriteDeadline. + * SetBasicAuth sets the request's Authorization header to use HTTP + * Basic Authentication with the provided username and password. * - * A deadline is an absolute time after which I/O operations - * fail instead of blocking. The deadline applies to all future - * and pending I/O, not just the immediately following call to - * Read or Write. After a deadline has been exceeded, the - * connection can be refreshed by setting a deadline in the future. + * With HTTP Basic Authentication the provided username and password + * are not encrypted. It should generally only be used in an HTTPS + * request. * - * If the deadline is exceeded a call to Read or Write or to other - * I/O methods will return an error that wraps os.ErrDeadlineExceeded. - * This can be tested using errors.Is(err, os.ErrDeadlineExceeded). - * The error's Timeout method will return true, but note that there - * are other possible errors for which the Timeout method will - * return true even if the deadline has not been exceeded. + * The username may not contain a colon. Some protocols may impose + * additional requirements on pre-escaping the username and + * password. 
For instance, when used with OAuth2, both arguments must + * be URL encoded first with [url.QueryEscape]. + */ + setBasicAuth(username: string, password: string): void + } + interface Request { + /** + * ParseForm populates r.Form and r.PostForm. * - * An idle timeout can be implemented by repeatedly extending - * the deadline after successful Read or Write calls. + * For all requests, ParseForm parses the raw query from the URL and updates + * r.Form. * - * A zero value for t means I/O operations will not time out. + * For POST, PUT, and PATCH requests, it also reads the request body, parses it + * as a form and puts the results into both r.PostForm and r.Form. Request body + * parameters take precedence over URL query string values in r.Form. + * + * If the request Body's size has not already been limited by [MaxBytesReader], + * the size is capped at 10MB. + * + * For other HTTP methods, or when the Content-Type is not + * application/x-www-form-urlencoded, the request Body is not read, and + * r.PostForm is initialized to a non-nil, empty value. + * + * [Request.ParseMultipartForm] calls ParseForm automatically. + * ParseForm is idempotent. */ - setDeadline(t: time.Time): void + parseForm(): void + } + interface Request { /** - * SetReadDeadline sets the deadline for future Read calls - * and any currently-blocked Read call. - * A zero value for t means Read will not time out. + * ParseMultipartForm parses a request body as multipart/form-data. + * The whole request body is parsed and up to a total of maxMemory bytes of + * its file parts are stored in memory, with the remainder stored on + * disk in temporary files. + * ParseMultipartForm calls [Request.ParseForm] if necessary. + * If ParseForm returns an error, ParseMultipartForm returns it but also + * continues parsing the request body. + * After one call to ParseMultipartForm, subsequent calls have no effect. */ - setReadDeadline(t: time.Time): void + parseMultipartForm(maxMemory: number): void + } + interface Request { /** - * SetWriteDeadline sets the deadline for future Write calls - * and any currently-blocked Write call. - * Even if write times out, it may return n > 0, indicating that - * some of the data was successfully written. - * A zero value for t means Write will not time out. + * FormValue returns the first value for the named component of the query. + * The precedence order: + * 1. application/x-www-form-urlencoded form body (POST, PUT, PATCH only) + * 2. query parameters (always) + * 3. multipart/form-data form body (always) + * + * FormValue calls [Request.ParseMultipartForm] and [Request.ParseForm] + * if necessary and ignores any errors returned by these functions. + * If key is not present, FormValue returns the empty string. + * To access multiple values of the same key, call ParseForm and + * then inspect [Request.Form] directly. */ - setWriteDeadline(t: time.Time): void + formValue(key: string): string } -} - -/** - * Package multipart implements MIME multipart parsing, as defined in RFC - * 2046. - * - * The implementation is sufficient for HTTP (RFC 2388) and the multipart - * bodies generated by popular browsers. - * - * # Limits - * - * To protect against malicious inputs, this package sets limits on the size - * of the MIME data it processes. - * - * [Reader.NextPart] and [Reader.NextRawPart] limit the number of headers in a - * part to 10000 and [Reader.ReadForm] limits the total number of headers in all - * FileHeaders to 10000. 
- * These limits may be adjusted with the GODEBUG=multipartmaxheaders= - * setting. - * - * Reader.ReadForm further limits the number of parts in a form to 1000. - * This limit may be adjusted with the GODEBUG=multipartmaxparts= - * setting. - */ -namespace multipart { - /** - * A FileHeader describes a file part of a multipart request. - */ - interface FileHeader { - filename: string - header: textproto.MIMEHeader - size: number + interface Request { + /** + * PostFormValue returns the first value for the named component of the POST, + * PUT, or PATCH request body. URL query parameters are ignored. + * PostFormValue calls [Request.ParseMultipartForm] and [Request.ParseForm] if necessary and ignores + * any errors returned by these functions. + * If key is not present, PostFormValue returns the empty string. + */ + postFormValue(key: string): string } - interface FileHeader { + interface Request { /** - * Open opens and returns the [FileHeader]'s associated File. + * FormFile returns the first file for the provided form key. + * FormFile calls [Request.ParseMultipartForm] and [Request.ParseForm] if necessary. */ - open(): File + formFile(key: string): [multipart.File, (multipart.FileHeader)] } -} - -/** - * Package http provides HTTP client and server implementations. - * - * [Get], [Head], [Post], and [PostForm] make HTTP (or HTTPS) requests: - * - * ``` - * resp, err := http.Get("http://example.com/") - * ... - * resp, err := http.Post("http://example.com/upload", "image/jpeg", &buf) - * ... - * resp, err := http.PostForm("http://example.com/form", - * url.Values{"key": {"Value"}, "id": {"123"}}) - * ``` - * - * The caller must close the response body when finished with it: - * - * ``` - * resp, err := http.Get("http://example.com/") - * if err != nil { - * // handle error - * } - * defer resp.Body.Close() - * body, err := io.ReadAll(resp.Body) - * // ... - * ``` - * - * # Clients and Transports - * - * For control over HTTP client headers, redirect policy, and other - * settings, create a [Client]: - * - * ``` - * client := &http.Client{ - * CheckRedirect: redirectPolicyFunc, - * } - * - * resp, err := client.Get("http://example.com") - * // ... - * - * req, err := http.NewRequest("GET", "http://example.com", nil) - * // ... - * req.Header.Add("If-None-Match", `W/"wyzzy"`) - * resp, err := client.Do(req) - * // ... - * ``` - * - * For control over proxies, TLS configuration, keep-alives, - * compression, and other settings, create a [Transport]: - * - * ``` - * tr := &http.Transport{ - * MaxIdleConns: 10, - * IdleConnTimeout: 30 * time.Second, - * DisableCompression: true, - * } - * client := &http.Client{Transport: tr} - * resp, err := client.Get("https://example.com") - * ``` - * - * Clients and Transports are safe for concurrent use by multiple - * goroutines and for efficiency should only be created once and re-used. - * - * # Servers - * - * ListenAndServe starts an HTTP server with a given address and handler. - * The handler is usually nil, which means to use [DefaultServeMux]. 
- * [Handle] and [HandleFunc] add handlers to [DefaultServeMux]: - * - * ``` - * http.Handle("/foo", fooHandler) - * - * http.HandleFunc("/bar", func(w http.ResponseWriter, r *http.Request) { - * fmt.Fprintf(w, "Hello, %q", html.EscapeString(r.URL.Path)) - * }) - * - * log.Fatal(http.ListenAndServe(":8080", nil)) - * ``` - * - * More control over the server's behavior is available by creating a - * custom Server: - * - * ``` - * s := &http.Server{ - * Addr: ":8080", - * Handler: myHandler, - * ReadTimeout: 10 * time.Second, - * WriteTimeout: 10 * time.Second, - * MaxHeaderBytes: 1 << 20, - * } - * log.Fatal(s.ListenAndServe()) - * ``` - * - * # HTTP/2 - * - * Starting with Go 1.6, the http package has transparent support for the - * HTTP/2 protocol when using HTTPS. Programs that must disable HTTP/2 - * can do so by setting [Transport.TLSNextProto] (for clients) or - * [Server.TLSNextProto] (for servers) to a non-nil, empty - * map. Alternatively, the following GODEBUG settings are - * currently supported: - * - * ``` - * GODEBUG=http2client=0 # disable HTTP/2 client support - * GODEBUG=http2server=0 # disable HTTP/2 server support - * GODEBUG=http2debug=1 # enable verbose HTTP/2 debug logs - * GODEBUG=http2debug=2 # ... even more verbose, with frame dumps - * ``` - * - * Please report any issues before disabling HTTP/2 support: https://golang.org/s/http2bug - * - * The http package's [Transport] and [Server] both automatically enable - * HTTP/2 support for simple configurations. To enable HTTP/2 for more - * complex configurations, to use lower-level HTTP/2 features, or to use - * a newer version of Go's http2 package, import "golang.org/x/net/http2" - * directly and use its ConfigureTransport and/or ConfigureServer - * functions. Manually configuring HTTP/2 via the golang.org/x/net/http2 - * package takes precedence over the net/http package's built-in HTTP/2 - * support. - */ -namespace http { - // @ts-ignore - import mathrand = rand - /** - * PushOptions describes options for [Pusher.Push]. - */ - interface PushOptions { + interface Request { /** - * Method specifies the HTTP method for the promised request. - * If set, it must be "GET" or "HEAD". Empty means "GET". + * PathValue returns the value for the named path wildcard in the [ServeMux] pattern + * that matched the request. + * It returns the empty string if the request was not matched against a pattern + * or there is no such wildcard in the pattern. */ - method: string + pathValue(name: string): string + } + interface Request { /** - * Header specifies additional promised request headers. This cannot - * include HTTP/2 pseudo header fields like ":path" and ":scheme", - * which will be added automatically. + * SetPathValue sets name to value, so that subsequent calls to r.PathValue(name) + * return value. */ - header: Header + setPathValue(name: string, value: string): void } - // @ts-ignore - import urlpkg = url /** - * A Request represents an HTTP request received by a server - * or to be sent by a client. + * A Handler responds to an HTTP request. * - * The field semantics differ slightly between client and server - * usage. In addition to the notes on the fields below, see the - * documentation for [Request.Write] and [RoundTripper]. + * [Handler.ServeHTTP] should write reply headers and data to the [ResponseWriter] + * and then return. 
Returning signals that the request is finished; it + * is not valid to use the [ResponseWriter] or read from the + * [Request.Body] after or concurrently with the completion of the + * ServeHTTP call. + * + * Depending on the HTTP client software, HTTP protocol version, and + * any intermediaries between the client and the Go server, it may not + * be possible to read from the [Request.Body] after writing to the + * [ResponseWriter]. Cautious handlers should read the [Request.Body] + * first, and then reply. + * + * Except for reading the body, handlers should not modify the + * provided Request. + * + * If ServeHTTP panics, the server (the caller of ServeHTTP) assumes + * that the effect of the panic was isolated to the active request. + * It recovers the panic, logs a stack trace to the server error log, + * and either closes the network connection or sends an HTTP/2 + * RST_STREAM, depending on the HTTP protocol. To abort a handler so + * the client sees an interrupted response but the server doesn't log + * an error, panic with the value [ErrAbortHandler]. */ - interface Request { - /** - * Method specifies the HTTP method (GET, POST, PUT, etc.). - * For client requests, an empty string means GET. - */ - method: string + interface Handler { + [key:string]: any; + serveHTTP(_arg0: ResponseWriter, _arg1: Request): void + } + /** + * A ResponseWriter interface is used by an HTTP handler to + * construct an HTTP response. + * + * A ResponseWriter may not be used after [Handler.ServeHTTP] has returned. + */ + interface ResponseWriter { + [key:string]: any; /** - * URL specifies either the URI being requested (for server - * requests) or the URL to access (for client requests). + * Header returns the header map that will be sent by + * [ResponseWriter.WriteHeader]. The [Header] map also is the mechanism with which + * [Handler] implementations can set HTTP trailers. * - * For server requests, the URL is parsed from the URI - * supplied on the Request-Line as stored in RequestURI. For - * most requests, fields other than Path and RawQuery will be - * empty. (See RFC 7230, Section 5.3) + * Changing the header map after a call to [ResponseWriter.WriteHeader] (or + * [ResponseWriter.Write]) has no effect unless the HTTP status code was of the + * 1xx class or the modified headers are trailers. * - * For client requests, the URL's Host specifies the server to - * connect to, while the Request's Host field optionally - * specifies the Host header value to send in the HTTP - * request. - */ - url?: url.URL - /** - * The protocol version for incoming server requests. + * There are two ways to set Trailers. The preferred way is to + * predeclare in the headers which trailers you will later + * send by setting the "Trailer" header to the names of the + * trailer keys which will come later. In this case, those + * keys of the Header map are treated as if they were + * trailers. See the example. The second way, for trailer + * keys not known to the [Handler] until after the first [ResponseWriter.Write], + * is to prefix the [Header] map keys with the [TrailerPrefix] + * constant value. * - * For client requests, these fields are ignored. The HTTP - * client code always uses either HTTP/1.1 or HTTP/2. - * See the docs on Transport for details. + * To suppress automatic response headers (such as "Date"), set + * their value to nil. 
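The `ResponseWriter` contract documented above boils down to a fixed ordering: populate the header map, call `writeHeader`, then `write`. A minimal sketch of that ordering, typed against these declarations, might look as follows; the `writePlainText` helper is hypothetical and a `Header.set()` method mirroring Go's `http.Header.Set` is assumed.

```ts
// Illustrative ResponseWriter usage: headers first, then status, then body.
function writePlainText(w: http.ResponseWriter, body: string): void {
    // Assumes Header exposes set(), mirroring Go's http.Header.Set.
    w.header().set("Content-Type", "text/plain; charset=utf-8");
    w.writeHeader(200); // must come before the first write()
    w.write(body);      // write() also accepts a byte array
}
```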
*/ - proto: string // "HTTP/1.0" - protoMajor: number // 1 - protoMinor: number // 0 + header(): Header /** - * Header contains the request header fields either received - * by the server or to be sent by the client. - * - * If a server received a request with header lines, + * Write writes the data to the connection as part of an HTTP reply. * - * ``` - * Host: example.com - * accept-encoding: gzip, deflate - * Accept-Language: en-us - * fOO: Bar - * foo: two - * ``` + * If [ResponseWriter.WriteHeader] has not yet been called, Write calls + * WriteHeader(http.StatusOK) before writing the data. If the Header + * does not contain a Content-Type line, Write adds a Content-Type set + * to the result of passing the initial 512 bytes of written data to + * [DetectContentType]. Additionally, if the total size of all written + * data is under a few KB and there are no Flush calls, the + * Content-Length header is added automatically. * - * then + * Depending on the HTTP protocol version and the client, calling + * Write or WriteHeader may prevent future reads on the + * Request.Body. For HTTP/1.x requests, handlers should read any + * needed request body data before writing the response. Once the + * headers have been flushed (due to either an explicit Flusher.Flush + * call or writing enough data to trigger a flush), the request body + * may be unavailable. For HTTP/2 requests, the Go HTTP server permits + * handlers to continue to read the request body while concurrently + * writing the response. However, such behavior may not be supported + * by all HTTP/2 clients. Handlers should read before writing if + * possible to maximize compatibility. + */ + write(_arg0: string|Array): number + /** + * WriteHeader sends an HTTP response header with the provided + * status code. * - * ``` - * Header = map[string][]string{ - * "Accept-Encoding": {"gzip, deflate"}, - * "Accept-Language": {"en-us"}, - * "Foo": {"Bar", "two"}, - * } - * ``` + * If WriteHeader is not called explicitly, the first call to Write + * will trigger an implicit WriteHeader(http.StatusOK). + * Thus explicit calls to WriteHeader are mainly used to + * send error codes or 1xx informational responses. * - * For incoming requests, the Host header is promoted to the - * Request.Host field and removed from the Header map. + * The provided code must be a valid HTTP 1xx-5xx status code. + * Any number of 1xx headers may be written, followed by at most + * one 2xx-5xx header. 1xx headers are sent immediately, but 2xx-5xx + * headers may be buffered. Use the Flusher interface to send + * buffered data. The header map is cleared when 2xx-5xx headers are + * sent, but not with 1xx headers. * - * HTTP defines that header names are case-insensitive. The - * request parser implements this by using CanonicalHeaderKey, - * making the first character and any characters following a - * hyphen uppercase and the rest lowercase. - * - * For client requests, certain headers such as Content-Length - * and Connection are automatically written when needed and - * values in Header may be ignored. See the documentation - * for the Request.Write method. - */ - header: Header - /** - * Body is the request's body. - * - * For client requests, a nil body means the request has no - * body, such as a GET request. The HTTP Client's Transport - * is responsible for calling the Close method. - * - * For server requests, the Request Body is always non-nil - * but will return EOF immediately when no body is present. - * The Server will close the request body. 
The ServeHTTP - * Handler does not need to. - * - * Body must allow Read to be called concurrently with Close. - * In particular, calling Close should unblock a Read waiting - * for input. - */ - body: io.ReadCloser - /** - * GetBody defines an optional func to return a new copy of - * Body. It is used for client requests when a redirect requires - * reading the body more than once. Use of GetBody still - * requires setting Body. - * - * For server requests, it is unused. + * The server will automatically send a 100 (Continue) header + * on the first read from the request body if the request has + * an "Expect: 100-continue" header. */ - getBody: () => io.ReadCloser + writeHeader(statusCode: number): void + } + /** + * A Server defines parameters for running an HTTP server. + * The zero value for Server is a valid configuration. + */ + interface Server { /** - * ContentLength records the length of the associated content. - * The value -1 indicates that the length is unknown. - * Values >= 0 indicate that the given number of bytes may - * be read from Body. - * - * For client requests, a value of 0 with a non-nil Body is - * also treated as unknown. + * Addr optionally specifies the TCP address for the server to listen on, + * in the form "host:port". If empty, ":http" (port 80) is used. + * The service names are defined in RFC 6335 and assigned by IANA. + * See net.Dial for details of the address format. */ - contentLength: number + addr: string + handler: Handler // handler to invoke, http.DefaultServeMux if nil /** - * TransferEncoding lists the transfer encodings from outermost to - * innermost. An empty list denotes the "identity" encoding. - * TransferEncoding can usually be ignored; chunked encoding is - * automatically added and removed as necessary when sending and - * receiving requests. + * DisableGeneralOptionsHandler, if true, passes "OPTIONS *" requests to the Handler, + * otherwise responds with 200 OK and Content-Length: 0. */ - transferEncoding: Array + disableGeneralOptionsHandler: boolean /** - * Close indicates whether to close the connection after - * replying to this request (for servers) or after sending this - * request and reading its response (for clients). - * - * For server requests, the HTTP server handles this automatically - * and this field is not needed by Handlers. - * - * For client requests, setting this field prevents re-use of - * TCP connections between requests to the same hosts, as if - * Transport.DisableKeepAlives were set. + * TLSConfig optionally provides a TLS configuration for use + * by ServeTLS and ListenAndServeTLS. Note that this value is + * cloned by ServeTLS and ListenAndServeTLS, so it's not + * possible to modify the configuration with methods like + * tls.Config.SetSessionTicketKeys. To use + * SetSessionTicketKeys, use Server.Serve with a TLS Listener + * instead. */ - close: boolean + tlsConfig?: any /** - * For server requests, Host specifies the host on which the - * URL is sought. For HTTP/1 (per RFC 7230, section 5.4), this - * is either the value of the "Host" header or the host name - * given in the URL itself. For HTTP/2, it is the value of the - * ":authority" pseudo-header field. - * It may be of the form "host:port". For international domain - * names, Host may be in Punycode or Unicode form. Use - * golang.org/x/net/idna to convert it to either format if - * needed. 
- * To prevent DNS rebinding attacks, server Handlers should - * validate that the Host header has a value for which the - * Handler considers itself authoritative. The included - * ServeMux supports patterns registered to particular host - * names and thus protects its registered Handlers. + * ReadTimeout is the maximum duration for reading the entire + * request, including the body. A zero or negative value means + * there will be no timeout. * - * For client requests, Host optionally overrides the Host - * header to send. If empty, the Request.Write method uses - * the value of URL.Host. Host may contain an international - * domain name. + * Because ReadTimeout does not let Handlers make per-request + * decisions on each request body's acceptable deadline or + * upload rate, most users will prefer to use + * ReadHeaderTimeout. It is valid to use them both. */ - host: string + readTimeout: time.Duration /** - * Form contains the parsed form data, including both the URL - * field's query parameters and the PATCH, POST, or PUT form data. - * This field is only available after ParseForm is called. - * The HTTP client ignores Form and uses Body instead. + * ReadHeaderTimeout is the amount of time allowed to read + * request headers. The connection's read deadline is reset + * after reading the headers and the Handler can decide what + * is considered too slow for the body. If zero, the value of + * ReadTimeout is used. If negative, or if zero and ReadTimeout + * is zero or negative, there is no timeout. */ - form: url.Values + readHeaderTimeout: time.Duration /** - * PostForm contains the parsed form data from PATCH, POST - * or PUT body parameters. - * - * This field is only available after ParseForm is called. - * The HTTP client ignores PostForm and uses Body instead. + * WriteTimeout is the maximum duration before timing out + * writes of the response. It is reset whenever a new + * request's header is read. Like ReadTimeout, it does not + * let Handlers make decisions on a per-request basis. + * A zero or negative value means there will be no timeout. */ - postForm: url.Values + writeTimeout: time.Duration /** - * MultipartForm is the parsed multipart form, including file uploads. - * This field is only available after ParseMultipartForm is called. - * The HTTP client ignores MultipartForm and uses Body instead. + * IdleTimeout is the maximum amount of time to wait for the + * next request when keep-alives are enabled. If zero, the value + * of ReadTimeout is used. If negative, or if zero and ReadTimeout + * is zero or negative, there is no timeout. */ - multipartForm?: multipart.Form + idleTimeout: time.Duration /** - * Trailer specifies additional headers that are sent after the request - * body. - * - * For server requests, the Trailer map initially contains only the - * trailer keys, with nil values. (The client declares which trailers it - * will later send.) While the handler is reading from Body, it must - * not reference Trailer. After reading from Body returns EOF, Trailer - * can be read again and will contain non-nil values, if they were sent - * by the client. - * - * For client requests, Trailer must be initialized to a map containing - * the trailer keys to later send. The values may be nil or their final - * values. The ContentLength must be 0 or -1, to send a chunked request. - * After the HTTP request is sent the map values can be updated while - * the request body is read. Once the body returns EOF, the caller must - * not mutate Trailer. 
- * - * Few HTTP clients, servers, or proxies support HTTP trailers. + * MaxHeaderBytes controls the maximum number of bytes the + * server will read parsing the request header's keys and + * values, including the request line. It does not limit the + * size of the request body. + * If zero, DefaultMaxHeaderBytes is used. */ - trailer: Header + maxHeaderBytes: number /** - * RemoteAddr allows HTTP servers and other software to record - * the network address that sent the request, usually for - * logging. This field is not filled in by ReadRequest and - * has no defined format. The HTTP server in this package - * sets RemoteAddr to an "IP:port" address before invoking a - * handler. - * This field is ignored by the HTTP client. + * TLSNextProto optionally specifies a function to take over + * ownership of the provided TLS connection when an ALPN + * protocol upgrade has occurred. The map key is the protocol + * name negotiated. The Handler argument should be used to + * handle HTTP requests and will initialize the Request's TLS + * and RemoteAddr if not already set. The connection is + * automatically closed when the function returns. + * If TLSNextProto is not nil, HTTP/2 support is not enabled + * automatically. */ - remoteAddr: string + tlsNextProto: _TygojaDict /** - * RequestURI is the unmodified request-target of the - * Request-Line (RFC 7230, Section 3.1.1) as sent by the client - * to a server. Usually the URL field should be used instead. - * It is an error to set this field in an HTTP client request. + * ConnState specifies an optional callback function that is + * called when a client connection changes state. See the + * ConnState type and associated constants for details. */ - requestURI: string + connState: (_arg0: net.Conn, _arg1: ConnState) => void /** - * TLS allows HTTP servers and other software to record - * information about the TLS connection on which the request - * was received. This field is not filled in by ReadRequest. - * The HTTP server in this package sets the field for - * TLS-enabled connections before invoking a handler; - * otherwise it leaves the field nil. - * This field is ignored by the HTTP client. + * ErrorLog specifies an optional logger for errors accepting + * connections, unexpected behavior from handlers, and + * underlying FileSystem errors. + * If nil, logging is done via the log package's standard logger. */ - tls?: any + errorLog?: any /** - * Cancel is an optional channel whose closure indicates that the client - * request should be regarded as canceled. Not all implementations of - * RoundTripper may support Cancel. - * - * For server requests, this field is not applicable. - * - * Deprecated: Set the Request's context with NewRequestWithContext - * instead. If a Request's Cancel field and context are both - * set, it is undefined whether Cancel is respected. + * BaseContext optionally specifies a function that returns + * the base context for incoming requests on this server. + * The provided Listener is the specific Listener that's + * about to start accepting requests. + * If BaseContext is nil, the default is context.Background(). + * If non-nil, it must return a non-nil context. */ - cancel: undefined + baseContext: (_arg0: net.Listener) => context.Context /** - * Response is the redirect response which caused this request - * to be created. This field is only populated during client - * redirects. + * ConnContext optionally specifies a function that modifies + * the context used for a new connection c. 
The provided ctx + * is derived from the base context and has a ServerContextKey + * value. */ - response?: Response + connContext: (ctx: context.Context, c: net.Conn) => context.Context /** - * Pattern is the [ServeMux] pattern that matched the request. - * It is empty if the request was not matched against a pattern. + * HTTP2 configures HTTP/2 connections. + * + * This field does not yet have any effect. + * See https://go.dev/issue/67813. */ - pattern: string - } - interface Request { + http2?: HTTP2Config /** - * Context returns the request's context. To change the context, use - * [Request.Clone] or [Request.WithContext]. - * - * The returned context is always non-nil; it defaults to the - * background context. + * Protocols is the set of protocols accepted by the server. * - * For outgoing client requests, the context controls cancellation. + * If Protocols includes UnencryptedHTTP2, the server will accept + * unencrypted HTTP/2 connections. The server can serve both + * HTTP/1 and unencrypted HTTP/2 on the same address and port. * - * For incoming server requests, the context is canceled when the - * client's connection closes, the request is canceled (with HTTP/2), - * or when the ServeHTTP method returns. + * If Protocols is nil, the default is usually HTTP/1 and HTTP/2. + * If TLSNextProto is non-nil and does not contain an "h2" entry, + * the default is HTTP/1 only. */ - context(): context.Context + protocols?: Protocols } - interface Request { + interface Server { /** - * WithContext returns a shallow copy of r with its context changed - * to ctx. The provided ctx must be non-nil. + * Close immediately closes all active net.Listeners and any + * connections in state [StateNew], [StateActive], or [StateIdle]. For a + * graceful shutdown, use [Server.Shutdown]. * - * For outgoing client request, the context controls the entire - * lifetime of a request and its response: obtaining a connection, - * sending the request, and reading the response headers and body. + * Close does not attempt to close (and does not even know about) + * any hijacked connections, such as WebSockets. * - * To create a new request with a context, use [NewRequestWithContext]. - * To make a deep copy of a request with a new context, use [Request.Clone]. + * Close returns any error returned from closing the [Server]'s + * underlying Listener(s). */ - withContext(ctx: context.Context): (Request) + close(): void } - interface Request { + interface Server { /** - * Clone returns a deep copy of r with its context changed to ctx. - * The provided ctx must be non-nil. + * Shutdown gracefully shuts down the server without interrupting any + * active connections. Shutdown works by first closing all open + * listeners, then closing all idle connections, and then waiting + * indefinitely for connections to return to idle and then shut down. + * If the provided context expires before the shutdown is complete, + * Shutdown returns the context's error, otherwise it returns any + * error returned from closing the [Server]'s underlying Listener(s). * - * Clone only makes a shallow copy of the Body field. + * When Shutdown is called, [Serve], [ListenAndServe], and + * [ListenAndServeTLS] immediately return [ErrServerClosed]. Make sure the + * program doesn't exit and waits instead for Shutdown to return. * - * For an outgoing client request, the context controls the entire - * lifetime of a request and its response: obtaining a connection, - * sending the request, and reading the response headers and body. 
+ * Shutdown does not attempt to close nor wait for hijacked + * connections such as WebSockets. The caller of Shutdown should + * separately notify such long-lived connections of shutdown and wait + * for them to close, if desired. See [Server.RegisterOnShutdown] for a way to + * register shutdown notification functions. + * + * Once Shutdown has been called on a server, it may not be reused; + * future calls to methods such as Serve will return ErrServerClosed. */ - clone(ctx: context.Context): (Request) + shutdown(ctx: context.Context): void } - interface Request { + interface Server { /** - * ProtoAtLeast reports whether the HTTP protocol used - * in the request is at least major.minor. + * RegisterOnShutdown registers a function to call on [Server.Shutdown]. + * This can be used to gracefully shutdown connections that have + * undergone ALPN protocol upgrade or that have been hijacked. + * This function should start protocol-specific graceful shutdown, + * but should not wait for shutdown to complete. */ - protoAtLeast(major: number, minor: number): boolean + registerOnShutdown(f: () => void): void } - interface Request { + interface Server { /** - * UserAgent returns the client's User-Agent, if sent in the request. + * ListenAndServe listens on the TCP network address s.Addr and then + * calls [Serve] to handle requests on incoming connections. + * Accepted connections are configured to enable TCP keep-alives. + * + * If s.Addr is blank, ":http" is used. + * + * ListenAndServe always returns a non-nil error. After [Server.Shutdown] or [Server.Close], + * the returned error is [ErrServerClosed]. */ - userAgent(): string + listenAndServe(): void } - interface Request { + interface Server { /** - * Cookies parses and returns the HTTP cookies sent with the request. + * Serve accepts incoming connections on the Listener l, creating a + * new service goroutine for each. The service goroutines read requests and + * then call s.Handler to reply to them. + * + * HTTP/2 support is only enabled if the Listener returns [*tls.Conn] + * connections and they were configured with "h2" in the TLS + * Config.NextProtos. + * + * Serve always returns a non-nil error and closes l. + * After [Server.Shutdown] or [Server.Close], the returned error is [ErrServerClosed]. */ - cookies(): Array<(Cookie | undefined)> + serve(l: net.Listener): void } - interface Request { + interface Server { /** - * CookiesNamed parses and returns the named HTTP cookies sent with the request - * or an empty slice if none matched. + * ServeTLS accepts incoming connections on the Listener l, creating a + * new service goroutine for each. The service goroutines perform TLS + * setup and then read requests, calling s.Handler to reply to them. + * + * Files containing a certificate and matching private key for the + * server must be provided if neither the [Server]'s + * TLSConfig.Certificates, TLSConfig.GetCertificate nor + * config.GetConfigForClient are populated. + * If the certificate is signed by a certificate authority, the + * certFile should be the concatenation of the server's certificate, + * any intermediates, and the CA's certificate. + * + * ServeTLS always returns a non-nil error. After [Server.Shutdown] or [Server.Close], the + * returned error is [ErrServerClosed]. 
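The regenerated `Server` declarations above pair `serve`/`listenAndServe` with `shutdown` and `registerOnShutdown`. The following is a minimal TypeScript sketch of that lifecycle against these declarations only; the `srv`, `l`, and `shutdownCtx` values are hypothetical placeholders obtained elsewhere, and methods that return an `error` in Go are assumed to surface it as a thrown exception in the JSVM bindings:

```
// Hypothetical values; only the method surface below comes from the declarations in this diff.
declare const srv: Server;
declare const l: net.Listener;
declare const shutdownCtx: context.Context;

srv.registerOnShutdown(() => {
    // start (but do not wait for) protocol-specific graceful shutdown
    // of ALPN-upgraded or hijacked connections
});

try {
    srv.serve(l); // blocks; after shutdown()/close() it throws ErrServerClosed
} catch (err) {
    // ErrServerClosed is the expected outcome of a graceful shutdown
}

// from a termination handler elsewhere:
srv.shutdown(shutdownCtx); // waits for in-flight connections to become idle
```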
*/ - cookiesNamed(name: string): Array<(Cookie | undefined)> + serveTLS(l: net.Listener, certFile: string, keyFile: string): void } - interface Request { + interface Server { /** - * Cookie returns the named cookie provided in the request or - * [ErrNoCookie] if not found. - * If multiple cookies match the given name, only one cookie will - * be returned. + * SetKeepAlivesEnabled controls whether HTTP keep-alives are enabled. + * By default, keep-alives are always enabled. Only very + * resource-constrained environments or servers in the process of + * shutting down should disable them. */ - cookie(name: string): (Cookie) + setKeepAlivesEnabled(v: boolean): void } - interface Request { + interface Server { /** - * AddCookie adds a cookie to the request. Per RFC 6265 section 5.4, - * AddCookie does not attach more than one [Cookie] header field. That - * means all cookies, if any, are written into the same line, - * separated by semicolon. - * AddCookie only sanitizes c's name and value, and does not sanitize - * a Cookie header already present in the request. + * ListenAndServeTLS listens on the TCP network address s.Addr and + * then calls [ServeTLS] to handle requests on incoming TLS connections. + * Accepted connections are configured to enable TCP keep-alives. + * + * Filenames containing a certificate and matching private key for the + * server must be provided if neither the [Server]'s TLSConfig.Certificates + * nor TLSConfig.GetCertificate are populated. If the certificate is + * signed by a certificate authority, the certFile should be the + * concatenation of the server's certificate, any intermediates, and + * the CA's certificate. + * + * If s.Addr is blank, ":https" is used. + * + * ListenAndServeTLS always returns a non-nil error. After [Server.Shutdown] or + * [Server.Close], the returned error is [ErrServerClosed]. */ - addCookie(c: Cookie): void + listenAndServeTLS(certFile: string, keyFile: string): void } - interface Request { +} + +/** + * Package blob defines a lightweight abstration for interacting with + * various storage services (local filesystem, S3, etc.). + * + * NB! + * For compatibility with earlier PocketBase versions and to prevent + * unnecessary breaking changes, this package is based and implemented + * as a minimal, stripped down version of the previously used gocloud.dev/blob. + * While there is no promise that it won't diverge in the future to accommodate + * better some PocketBase specific use cases, currently it copies and + * tries to follow as close as possible the same implementations, + * conventions and rules for the key escaping/unescaping, blob read/write + * interfaces and struct options as gocloud.dev/blob, therefore the + * credits goes to the original Go Cloud Development Kit Authors. + */ +namespace blob { + /** + * ListObject represents a single blob returned from List. + */ + interface ListObject { /** - * Referer returns the referring URL, if sent in the request. - * - * Referer is misspelled as in the request itself, a mistake from the - * earliest days of HTTP. This value can also be fetched from the - * [Header] map as Header["Referer"]; the benefit of making it available - * as a method is that the compiler can diagnose programs that use the - * alternate (correct English) spelling req.Referrer() but cannot - * diagnose programs that use Header["Referrer"]. + * Key is the key for this blob. 
*/ - referer(): string - } - interface Request { + key: string /** - * MultipartReader returns a MIME multipart reader if this is a - * multipart/form-data or a multipart/mixed POST request, else returns nil and an error. - * Use this function instead of [Request.ParseMultipartForm] to - * process the request body as a stream. + * ModTime is the time the blob was last modified. */ - multipartReader(): (multipart.Reader) - } - interface Request { + modTime: time.Time /** - * Write writes an HTTP/1.1 request, which is the header and body, in wire format. - * This method consults the following fields of the request: - * - * ``` - * Host - * URL - * Method (defaults to "GET") - * Header - * ContentLength - * TransferEncoding - * Body - * ``` - * - * If Body is present, Content-Length is <= 0 and [Request.TransferEncoding] - * hasn't been set to "identity", Write adds "Transfer-Encoding: - * chunked" to the header. Body is closed after it is sent. + * Size is the size of the blob's content in bytes. */ - write(w: io.Writer): void - } - interface Request { + size: number /** - * WriteProxy is like [Request.Write] but writes the request in the form - * expected by an HTTP proxy. In particular, [Request.WriteProxy] writes the - * initial Request-URI line of the request with an absolute URI, per - * section 5.3 of RFC 7230, including the scheme and host. - * In either case, WriteProxy also writes a Host header, using - * either r.Host or r.URL.Host. + * MD5 is an MD5 hash of the blob contents or nil if not available. */ - writeProxy(w: io.Writer): void - } - interface Request { + md5: string|Array /** - * BasicAuth returns the username and password provided in the request's - * Authorization header, if the request uses HTTP Basic Authentication. - * See RFC 2617, Section 2. + * IsDir indicates that this result represents a "directory" in the + * hierarchical namespace, ending in ListOptions.Delimiter. Key can be + * passed as ListOptions.Prefix to list items in the "directory". + * Fields other than Key and IsDir will not be set if IsDir is true. */ - basicAuth(): [string, string, boolean] + isDir: boolean } - interface Request { + /** + * Attributes contains attributes about a blob. + */ + interface Attributes { /** - * SetBasicAuth sets the request's Authorization header to use HTTP - * Basic Authentication with the provided username and password. - * - * With HTTP Basic Authentication the provided username and password - * are not encrypted. It should generally only be used in an HTTPS - * request. - * - * The username may not contain a colon. Some protocols may impose - * additional requirements on pre-escaping the username and - * password. For instance, when used with OAuth2, both arguments must - * be URL encoded first with [url.QueryEscape]. + * CacheControl specifies caching attributes that services may use + * when serving the blob. + * https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control */ - setBasicAuth(username: string, password: string): void - } - interface Request { + cacheControl: string /** - * ParseForm populates r.Form and r.PostForm. - * - * For all requests, ParseForm parses the raw query from the URL and updates - * r.Form. - * - * For POST, PUT, and PATCH requests, it also reads the request body, parses it - * as a form and puts the results into both r.PostForm and r.Form. Request body - * parameters take precedence over URL query string values in r.Form. 
- * - * If the request Body's size has not already been limited by [MaxBytesReader], - * the size is capped at 10MB. - * - * For other HTTP methods, or when the Content-Type is not - * application/x-www-form-urlencoded, the request Body is not read, and - * r.PostForm is initialized to a non-nil, empty value. - * - * [Request.ParseMultipartForm] calls ParseForm automatically. - * ParseForm is idempotent. + * ContentDisposition specifies whether the blob content is expected to be + * displayed inline or as an attachment. + * https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Disposition */ - parseForm(): void - } - interface Request { + contentDisposition: string /** - * ParseMultipartForm parses a request body as multipart/form-data. - * The whole request body is parsed and up to a total of maxMemory bytes of - * its file parts are stored in memory, with the remainder stored on - * disk in temporary files. - * ParseMultipartForm calls [Request.ParseForm] if necessary. - * If ParseForm returns an error, ParseMultipartForm returns it but also - * continues parsing the request body. - * After one call to ParseMultipartForm, subsequent calls have no effect. + * ContentEncoding specifies the encoding used for the blob's content, if any. + * https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Encoding */ - parseMultipartForm(maxMemory: number): void - } - interface Request { + contentEncoding: string /** - * FormValue returns the first value for the named component of the query. - * The precedence order: - * 1. application/x-www-form-urlencoded form body (POST, PUT, PATCH only) - * 2. query parameters (always) - * 3. multipart/form-data form body (always) - * - * FormValue calls [Request.ParseMultipartForm] and [Request.ParseForm] - * if necessary and ignores any errors returned by these functions. - * If key is not present, FormValue returns the empty string. - * To access multiple values of the same key, call ParseForm and - * then inspect [Request.Form] directly. + * ContentLanguage specifies the language used in the blob's content, if any. + * https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Language */ - formValue(key: string): string - } - interface Request { + contentLanguage: string /** - * PostFormValue returns the first value for the named component of the POST, - * PUT, or PATCH request body. URL query parameters are ignored. - * PostFormValue calls [Request.ParseMultipartForm] and [Request.ParseForm] if necessary and ignores - * any errors returned by these functions. - * If key is not present, PostFormValue returns the empty string. + * ContentType is the MIME type of the blob. It will not be empty. + * https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Type */ - postFormValue(key: string): string - } - interface Request { + contentType: string /** - * FormFile returns the first file for the provided form key. - * FormFile calls [Request.ParseMultipartForm] and [Request.ParseForm] if necessary. + * Metadata holds key/value pairs associated with the blob. + * Keys are guaranteed to be in lowercase, even if the backend service + * has case-sensitive keys (although note that Metadata written via + * this package will always be lowercased). If there are duplicate + * case-insensitive keys (e.g., "foo" and "FOO"), only one value + * will be kept, and it is undefined which one. 
*/ - formFile(key: string): [multipart.File, (multipart.FileHeader)] - } - interface Request { + metadata: _TygojaDict /** - * PathValue returns the value for the named path wildcard in the [ServeMux] pattern - * that matched the request. - * It returns the empty string if the request was not matched against a pattern - * or there is no such wildcard in the pattern. + * CreateTime is the time the blob was created, if available. If not available, + * CreateTime will be the zero time. */ - pathValue(name: string): string - } - interface Request { + createTime: time.Time /** - * SetPathValue sets name to value, so that subsequent calls to r.PathValue(name) - * return value. + * ModTime is the time the blob was last modified. */ - setPathValue(name: string, value: string): void - } - /** - * A Handler responds to an HTTP request. - * - * [Handler.ServeHTTP] should write reply headers and data to the [ResponseWriter] - * and then return. Returning signals that the request is finished; it - * is not valid to use the [ResponseWriter] or read from the - * [Request.Body] after or concurrently with the completion of the - * ServeHTTP call. - * - * Depending on the HTTP client software, HTTP protocol version, and - * any intermediaries between the client and the Go server, it may not - * be possible to read from the [Request.Body] after writing to the - * [ResponseWriter]. Cautious handlers should read the [Request.Body] - * first, and then reply. - * - * Except for reading the body, handlers should not modify the - * provided Request. - * - * If ServeHTTP panics, the server (the caller of ServeHTTP) assumes - * that the effect of the panic was isolated to the active request. - * It recovers the panic, logs a stack trace to the server error log, - * and either closes the network connection or sends an HTTP/2 - * RST_STREAM, depending on the HTTP protocol. To abort a handler so - * the client sees an interrupted response but the server doesn't log - * an error, panic with the value [ErrAbortHandler]. - */ - interface Handler { - [key:string]: any; - serveHTTP(_arg0: ResponseWriter, _arg1: Request): void - } - /** - * A ResponseWriter interface is used by an HTTP handler to - * construct an HTTP response. - * - * A ResponseWriter may not be used after [Handler.ServeHTTP] has returned. - */ - interface ResponseWriter { - [key:string]: any; + modTime: time.Time /** - * Header returns the header map that will be sent by - * [ResponseWriter.WriteHeader]. The [Header] map also is the mechanism with which - * [Handler] implementations can set HTTP trailers. - * - * Changing the header map after a call to [ResponseWriter.WriteHeader] (or - * [ResponseWriter.Write]) has no effect unless the HTTP status code was of the - * 1xx class or the modified headers are trailers. - * - * There are two ways to set Trailers. The preferred way is to - * predeclare in the headers which trailers you will later - * send by setting the "Trailer" header to the names of the - * trailer keys which will come later. In this case, those - * keys of the Header map are treated as if they were - * trailers. See the example. The second way, for trailer - * keys not known to the [Handler] until after the first [ResponseWriter.Write], - * is to prefix the [Header] map keys with the [TrailerPrefix] - * constant value. - * - * To suppress automatic response headers (such as "Date"), set - * their value to nil. + * Size is the size of the blob's content in bytes. 
*/ - header(): Header + size: number /** - * Write writes the data to the connection as part of an HTTP reply. - * - * If [ResponseWriter.WriteHeader] has not yet been called, Write calls - * WriteHeader(http.StatusOK) before writing the data. If the Header - * does not contain a Content-Type line, Write adds a Content-Type set - * to the result of passing the initial 512 bytes of written data to - * [DetectContentType]. Additionally, if the total size of all written - * data is under a few KB and there are no Flush calls, the - * Content-Length header is added automatically. - * - * Depending on the HTTP protocol version and the client, calling - * Write or WriteHeader may prevent future reads on the - * Request.Body. For HTTP/1.x requests, handlers should read any - * needed request body data before writing the response. Once the - * headers have been flushed (due to either an explicit Flusher.Flush - * call or writing enough data to trigger a flush), the request body - * may be unavailable. For HTTP/2 requests, the Go HTTP server permits - * handlers to continue to read the request body while concurrently - * writing the response. However, such behavior may not be supported - * by all HTTP/2 clients. Handlers should read before writing if - * possible to maximize compatibility. + * MD5 is an MD5 hash of the blob contents or nil if not available. */ - write(_arg0: string|Array): number + md5: string|Array /** - * WriteHeader sends an HTTP response header with the provided - * status code. - * - * If WriteHeader is not called explicitly, the first call to Write - * will trigger an implicit WriteHeader(http.StatusOK). - * Thus explicit calls to WriteHeader are mainly used to - * send error codes or 1xx informational responses. - * - * The provided code must be a valid HTTP 1xx-5xx status code. - * Any number of 1xx headers may be written, followed by at most - * one 2xx-5xx header. 1xx headers are sent immediately, but 2xx-5xx - * headers may be buffered. Use the Flusher interface to send - * buffered data. The header map is cleared when 2xx-5xx headers are - * sent, but not with 1xx headers. - * - * The server will automatically send a 100 (Continue) header - * on the first read from the request body if the request has - * an "Expect: 100-continue" header. + * ETag for the blob; see https://en.wikipedia.org/wiki/HTTP_ETag. */ - writeHeader(statusCode: number): void + eTag: string } /** - * A Server defines parameters for running an HTTP server. - * The zero value for Server is a valid configuration. + * Reader reads bytes from a blob. + * It implements io.ReadSeekCloser, and must be closed after reads are finished. */ - interface Server { + interface Reader { + } + interface Reader { /** - * Addr optionally specifies the TCP address for the server to listen on, - * in the form "host:port". If empty, ":http" (port 80) is used. - * The service names are defined in RFC 6335 and assigned by IANA. - * See net.Dial for details of the address format. + * Read implements io.Reader (https://golang.org/pkg/io/#Reader). */ - addr: string - handler: Handler // handler to invoke, http.DefaultServeMux if nil + read(p: string|Array): number + } + interface Reader { /** - * DisableGeneralOptionsHandler, if true, passes "OPTIONS *" requests to the Handler, - * otherwise responds with 200 OK and Content-Length: 0. + * Seek implements io.Seeker (https://golang.org/pkg/io/#Seeker). 
*/ - disableGeneralOptionsHandler: boolean + seek(offset: number, whence: number): number + } + interface Reader { /** - * TLSConfig optionally provides a TLS configuration for use - * by ServeTLS and ListenAndServeTLS. Note that this value is - * cloned by ServeTLS and ListenAndServeTLS, so it's not - * possible to modify the configuration with methods like - * tls.Config.SetSessionTicketKeys. To use - * SetSessionTicketKeys, use Server.Serve with a TLS Listener - * instead. + * Close implements io.Closer (https://golang.org/pkg/io/#Closer). */ - tlsConfig?: any + close(): void + } + interface Reader { /** - * ReadTimeout is the maximum duration for reading the entire - * request, including the body. A zero or negative value means - * there will be no timeout. - * - * Because ReadTimeout does not let Handlers make per-request - * decisions on each request body's acceptable deadline or - * upload rate, most users will prefer to use - * ReadHeaderTimeout. It is valid to use them both. + * ContentType returns the MIME type of the blob. */ - readTimeout: time.Duration + contentType(): string + } + interface Reader { /** - * ReadHeaderTimeout is the amount of time allowed to read - * request headers. The connection's read deadline is reset - * after reading the headers and the Handler can decide what - * is considered too slow for the body. If zero, the value of - * ReadTimeout is used. If negative, or if zero and ReadTimeout - * is zero or negative, there is no timeout. + * ModTime returns the time the blob was last modified. */ - readHeaderTimeout: time.Duration + modTime(): time.Time + } + interface Reader { /** - * WriteTimeout is the maximum duration before timing out - * writes of the response. It is reset whenever a new - * request's header is read. Like ReadTimeout, it does not - * let Handlers make decisions on a per-request basis. - * A zero or negative value means there will be no timeout. + * Size returns the size of the blob content in bytes. */ - writeTimeout: time.Duration - /** - * IdleTimeout is the maximum amount of time to wait for the - * next request when keep-alives are enabled. If zero, the value - * of ReadTimeout is used. If negative, or if zero and ReadTimeout - * is zero or negative, there is no timeout. - */ - idleTimeout: time.Duration + size(): number + } + interface Reader { /** - * MaxHeaderBytes controls the maximum number of bytes the - * server will read parsing the request header's keys and - * values, including the request line. It does not limit the - * size of the request body. - * If zero, DefaultMaxHeaderBytes is used. + * WriteTo reads from r and writes to w until there's no more data or + * an error occurs. + * The return value is the number of bytes written to w. + * + * It implements the io.WriterTo interface. */ - maxHeaderBytes: number + writeTo(w: io.Writer): number + } +} + +namespace store { + /** + * Store defines a concurrent safe in memory key-value data store. + */ + interface Store { + } + interface Store { /** - * TLSNextProto optionally specifies a function to take over - * ownership of the provided TLS connection when an ALPN - * protocol upgrade has occurred. The map key is the protocol - * name negotiated. The Handler argument should be used to - * handle HTTP requests and will initialize the Request's TLS - * and RemoteAddr if not already set. The connection is - * automatically closed when the function returns. - * If TLSNextProto is not nil, HTTP/2 support is not enabled - * automatically. 
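The `blob.Reader` methods declared above follow the usual read/seek/close contract. A minimal sketch, assuming a `Reader` and a destination `io.Writer` are obtained elsewhere (both hypothetical here) and that errors surface as thrown exceptions:

```
declare const r: blob.Reader; // hypothetical; e.g. opened from a storage bucket elsewhere
declare const dst: io.Writer; // hypothetical destination writer

try {
    console.log(r.contentType(), r.size(), r.modTime());

    r.seek(0, 0);             // whence 0 = io.SeekStart, i.e. rewind to the beginning
    const n = r.writeTo(dst); // stream the remaining blob content into dst
    console.log("copied", n, "bytes");
} finally {
    r.close();                // the Reader must always be closed after reads are finished
}
```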
+ * Reset clears the store and replaces the store data with a + * shallow copy of the provided newData. */ - tlsNextProto: _TygojaDict + reset(newData: _TygojaDict): void + } + interface Store { /** - * ConnState specifies an optional callback function that is - * called when a client connection changes state. See the - * ConnState type and associated constants for details. + * Length returns the current number of elements in the store. */ - connState: (_arg0: net.Conn, _arg1: ConnState) => void + length(): number + } + interface Store { /** - * ErrorLog specifies an optional logger for errors accepting - * connections, unexpected behavior from handlers, and - * underlying FileSystem errors. - * If nil, logging is done via the log package's standard logger. + * RemoveAll removes all the existing store entries. */ - errorLog?: any + removeAll(): void + } + interface Store { /** - * BaseContext optionally specifies a function that returns - * the base context for incoming requests on this server. - * The provided Listener is the specific Listener that's - * about to start accepting requests. - * If BaseContext is nil, the default is context.Background(). - * If non-nil, it must return a non-nil context. + * Remove removes a single entry from the store. + * + * Remove does nothing if key doesn't exist in the store. */ - baseContext: (_arg0: net.Listener) => context.Context + remove(key: K): void + } + interface Store { /** - * ConnContext optionally specifies a function that modifies - * the context used for a new connection c. The provided ctx - * is derived from the base context and has a ServerContextKey - * value. + * Has checks if element with the specified key exist or not. */ - connContext: (ctx: context.Context, c: net.Conn) => context.Context + has(key: K): boolean } - interface Server { + interface Store { /** - * Close immediately closes all active net.Listeners and any - * connections in state [StateNew], [StateActive], or [StateIdle]. For a - * graceful shutdown, use [Server.Shutdown]. - * - * Close does not attempt to close (and does not even know about) - * any hijacked connections, such as WebSockets. + * Get returns a single element value from the store. * - * Close returns any error returned from closing the [Server]'s - * underlying Listener(s). + * If key is not set, the zero T value is returned. */ - close(): void + get(key: K): T } - interface Server { + interface Store { /** - * Shutdown gracefully shuts down the server without interrupting any - * active connections. Shutdown works by first closing all open - * listeners, then closing all idle connections, and then waiting - * indefinitely for connections to return to idle and then shut down. - * If the provided context expires before the shutdown is complete, - * Shutdown returns the context's error, otherwise it returns any - * error returned from closing the [Server]'s underlying Listener(s). - * - * When Shutdown is called, [Serve], [ListenAndServe], and - * [ListenAndServeTLS] immediately return [ErrServerClosed]. Make sure the - * program doesn't exit and waits instead for Shutdown to return. - * - * Shutdown does not attempt to close nor wait for hijacked - * connections such as WebSockets. The caller of Shutdown should - * separately notify such long-lived connections of shutdown and wait - * for them to close, if desired. See [Server.RegisterOnShutdown] for a way to - * register shutdown notification functions. 
- * - * Once Shutdown has been called on a server, it may not be reused; - * future calls to methods such as Serve will return ErrServerClosed. + * GetOk is similar to Get but returns also a boolean indicating whether the key exists or not. */ - shutdown(ctx: context.Context): void + getOk(key: K): [T, boolean] } - interface Server { + interface Store { /** - * RegisterOnShutdown registers a function to call on [Server.Shutdown]. - * This can be used to gracefully shutdown connections that have - * undergone ALPN protocol upgrade or that have been hijacked. - * This function should start protocol-specific graceful shutdown, - * but should not wait for shutdown to complete. + * GetAll returns a shallow copy of the current store data. */ - registerOnShutdown(f: () => void): void + getAll(): _TygojaDict } - interface Server { + interface Store { /** - * ListenAndServe listens on the TCP network address srv.Addr and then - * calls [Serve] to handle requests on incoming connections. - * Accepted connections are configured to enable TCP keep-alives. - * - * If srv.Addr is blank, ":http" is used. - * - * ListenAndServe always returns a non-nil error. After [Server.Shutdown] or [Server.Close], - * the returned error is [ErrServerClosed]. + * Values returns a slice with all of the current store values. */ - listenAndServe(): void + values(): Array } - interface Server { + interface Store { /** - * Serve accepts incoming connections on the Listener l, creating a - * new service goroutine for each. The service goroutines read requests and - * then call srv.Handler to reply to them. - * - * HTTP/2 support is only enabled if the Listener returns [*tls.Conn] - * connections and they were configured with "h2" in the TLS - * Config.NextProtos. - * - * Serve always returns a non-nil error and closes l. - * After [Server.Shutdown] or [Server.Close], the returned error is [ErrServerClosed]. + * Set sets (or overwrite if already exists) a new value for key. */ - serve(l: net.Listener): void + set(key: K, value: T): void } - interface Server { + interface Store { /** - * ServeTLS accepts incoming connections on the Listener l, creating a - * new service goroutine for each. The service goroutines perform TLS - * setup and then read requests, calling srv.Handler to reply to them. + * SetFunc sets (or overwrite if already exists) a new value resolved + * from the function callback for the provided key. * - * Files containing a certificate and matching private key for the - * server must be provided if neither the [Server]'s - * TLSConfig.Certificates, TLSConfig.GetCertificate nor - * config.GetConfigForClient are populated. - * If the certificate is signed by a certificate authority, the - * certFile should be the concatenation of the server's certificate, - * any intermediates, and the CA's certificate. + * The function callback receives as argument the old store element value (if exists). + * If there is no old store element, the argument will be the T zero value. * - * ServeTLS always returns a non-nil error. After [Server.Shutdown] or [Server.Close], the - * returned error is [ErrServerClosed]. + * Example: + * + * ``` + * s := store.New[string, int](nil) + * s.SetFunc("count", func(old int) int { + * return old + 1 + * }) + * ``` */ - serveTLS(l: net.Listener, certFile: string, keyFile: string): void + setFunc(key: K, fn: (old: T) => T): void } - interface Server { + interface Store { /** - * SetKeepAlivesEnabled controls whether HTTP keep-alives are enabled. - * By default, keep-alives are always enabled. 
Only very - * resource-constrained environments or servers in the process of - * shutting down should disable them. + * GetOrSet retrieves a single existing value for the provided key + * or stores a new one if it doesn't exist. */ - setKeepAlivesEnabled(v: boolean): void + getOrSet(key: K, setFunc: () => T): T } - interface Server { + interface Store { /** - * ListenAndServeTLS listens on the TCP network address srv.Addr and - * then calls [ServeTLS] to handle requests on incoming TLS connections. - * Accepted connections are configured to enable TCP keep-alives. - * - * Filenames containing a certificate and matching private key for the - * server must be provided if neither the [Server]'s TLSConfig.Certificates - * nor TLSConfig.GetCertificate are populated. If the certificate is - * signed by a certificate authority, the certFile should be the - * concatenation of the server's certificate, any intermediates, and - * the CA's certificate. + * SetIfLessThanLimit sets (or overwrite if already exist) a new value for key. * - * If srv.Addr is blank, ":https" is used. + * This method is similar to Set() but **it will skip adding new elements** + * to the store if the store length has reached the specified limit. + * false is returned if maxAllowedElements limit is reached. + */ + setIfLessThanLimit(key: K, value: T, maxAllowedElements: number): boolean + } + interface Store { + /** + * UnmarshalJSON implements [json.Unmarshaler] and imports the + * provided JSON data into the store. * - * ListenAndServeTLS always returns a non-nil error. After [Server.Shutdown] or - * [Server.Close], the returned error is [ErrServerClosed]. + * The store entries that match with the ones from the data will be overwritten with the new value. */ - listenAndServeTLS(certFile: string, keyFile: string): void + unmarshalJSON(data: string|Array): void + } + interface Store { + /** + * MarshalJSON implements [json.Marshaler] and export the current + * store data into valid JSON. + */ + marshalJSON(): string|Array } } /** - * Package blob defines a lightweight abstration for interacting with - * various storage services (local filesystem, S3, etc.). + * Package sql provides a generic interface around SQL (or SQL-like) + * databases. * - * NB! - * For compatibility with earlier PocketBase versions and to prevent - * unnecessary breaking changes, this package is based and implemented - * as a minimal, stripped down version of the previously used gocloud.dev/blob. - * While there is no promise that it won't diverge in the future to accommodate - * better some PocketBase specific use cases, currently it copies and - * tries to follow as close as possible the same implementations, - * conventions and rules for the key escaping/unescaping, blob read/write - * interfaces and struct options as gocloud.dev/blob, therefore the - * credits goes to the original Go Cloud Development Kit Authors. + * The sql package must be used in conjunction with a database driver. + * See https://golang.org/s/sqldrivers for a list of drivers. + * + * Drivers that do not support context cancellation will not return until + * after the query is completed. + * + * For usage examples, see the wiki page at + * https://golang.org/s/sqlwiki. */ -namespace blob { +namespace sql { /** - * ListObject represents a single blob returned from List. + * TxOptions holds the transaction options to be used in [DB.BeginTx]. */ - interface ListObject { - /** - * Key is the key for this blob. 
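The `store.Store` declarations above expose the usual get/set helpers plus the callback-based `setFunc`/`getOrSet`. A minimal sketch; the `s` value and the `string`/`number` type arguments are illustrative assumptions:

```
declare const s: store.Store<string, number>; // hypothetical store instance

s.set("visits", 1);
s.setFunc("visits", (old) => old + 1);      // update based on the previous value (zero value if missing)
const total = s.getOrSet("total", () => 0); // lazily initialize a missing entry

const [visits, ok] = s.getOk("visits");     // ok reports whether the key exists
if (ok) {
    console.log("visits:", visits);
}

s.setIfLessThanLimit("cached", 42, 500);    // no-op once the store already holds 500 entries
```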
- */ - key: string - /** - * ModTime is the time the blob was last modified. - */ - modTime: time.Time - /** - * Size is the size of the blob's content in bytes. - */ - size: number - /** - * MD5 is an MD5 hash of the blob contents or nil if not available. - */ - md5: string|Array + interface TxOptions { /** - * IsDir indicates that this result represents a "directory" in the - * hierarchical namespace, ending in ListOptions.Delimiter. Key can be - * passed as ListOptions.Prefix to list items in the "directory". - * Fields other than Key and IsDir will not be set if IsDir is true. + * Isolation is the transaction isolation level. + * If zero, the driver or database's default level is used. */ - isDir: boolean + isolation: IsolationLevel + readOnly: boolean } /** - * Attributes contains attributes about a blob. + * NullString represents a string that may be null. + * NullString implements the [Scanner] interface so + * it can be used as a scan destination: + * + * ``` + * var s NullString + * err := db.QueryRow("SELECT name FROM foo WHERE id=?", id).Scan(&s) + * ... + * if s.Valid { + * // use s.String + * } else { + * // NULL value + * } + * ``` */ - interface Attributes { - /** - * CacheControl specifies caching attributes that services may use - * when serving the blob. - * https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control - */ - cacheControl: string - /** - * ContentDisposition specifies whether the blob content is expected to be - * displayed inline or as an attachment. - * https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Disposition - */ - contentDisposition: string - /** - * ContentEncoding specifies the encoding used for the blob's content, if any. - * https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Encoding - */ - contentEncoding: string - /** - * ContentLanguage specifies the language used in the blob's content, if any. - * https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Language - */ - contentLanguage: string - /** - * ContentType is the MIME type of the blob. It will not be empty. - * https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Type - */ - contentType: string - /** - * Metadata holds key/value pairs associated with the blob. - * Keys are guaranteed to be in lowercase, even if the backend service - * has case-sensitive keys (although note that Metadata written via - * this package will always be lowercased). If there are duplicate - * case-insensitive keys (e.g., "foo" and "FOO"), only one value - * will be kept, and it is undefined which one. - */ - metadata: _TygojaDict - /** - * CreateTime is the time the blob was created, if available. If not available, - * CreateTime will be the zero time. - */ - createTime: time.Time - /** - * ModTime is the time the blob was last modified. - */ - modTime: time.Time - /** - * Size is the size of the blob's content in bytes. - */ - size: number + interface NullString { + string: string + valid: boolean // Valid is true if String is not NULL + } + interface NullString { /** - * MD5 is an MD5 hash of the blob contents or nil if not available. + * Scan implements the [Scanner] interface. */ - md5: string|Array + scan(value: any): void + } + interface NullString { /** - * ETag for the blob; see https://en.wikipedia.org/wiki/HTTP_ETag. + * Value implements the [driver.Valuer] interface. */ - eTag: string + value(): any } /** - * Reader reads bytes from a blob. - * It implements io.ReadSeekCloser, and must be closed after reads are finished. 
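The `NullString` doc above shows the Go scan pattern; the same NULL check translates directly to the generated TypeScript surface. A minimal sketch, assuming a `Row` value and a pre-allocated `NullString` destination (both hypothetical) and that the `[*Row.Scan]` referenced in these docs is exposed as `scan` like the other bindings in this file:

```
declare const row: sql.Row;         // hypothetical, e.g. the result of a queryRow(...) call
declare const name: sql.NullString; // hypothetical scan destination

row.scan(name); // errors are assumed to surface as thrown exceptions

if (name.valid) {
    console.log("name:", name.string);
} else {
    console.log("name is NULL");
}
```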
+ * DB is a database handle representing a pool of zero or more + * underlying connections. It's safe for concurrent use by multiple + * goroutines. + * + * The sql package creates and frees connections automatically; it + * also maintains a free pool of idle connections. If the database has + * a concept of per-connection state, such state can be reliably observed + * within a transaction ([Tx]) or connection ([Conn]). Once [DB.Begin] is called, the + * returned [Tx] is bound to a single connection. Once [Tx.Commit] or + * [Tx.Rollback] is called on the transaction, that transaction's + * connection is returned to [DB]'s idle connection pool. The pool size + * can be controlled with [DB.SetMaxIdleConns]. */ - interface Reader { + interface DB { } - interface Reader { + interface DB { /** - * Read implements io.Reader (https://golang.org/pkg/io/#Reader). + * PingContext verifies a connection to the database is still alive, + * establishing a connection if necessary. */ - read(p: string|Array): number + pingContext(ctx: context.Context): void } - interface Reader { + interface DB { /** - * Seek implements io.Seeker (https://golang.org/pkg/io/#Seeker). + * Ping verifies a connection to the database is still alive, + * establishing a connection if necessary. + * + * Ping uses [context.Background] internally; to specify the context, use + * [DB.PingContext]. */ - seek(offset: number, whence: number): number + ping(): void } - interface Reader { + interface DB { /** - * Close implements io.Closer (https://golang.org/pkg/io/#Closer). + * Close closes the database and prevents new queries from starting. + * Close then waits for all queries that have started processing on the server + * to finish. + * + * It is rare to Close a [DB], as the [DB] handle is meant to be + * long-lived and shared between many goroutines. */ close(): void } - interface Reader { + interface DB { /** - * ContentType returns the MIME type of the blob. + * SetMaxIdleConns sets the maximum number of connections in the idle + * connection pool. + * + * If MaxOpenConns is greater than 0 but less than the new MaxIdleConns, + * then the new MaxIdleConns will be reduced to match the MaxOpenConns limit. + * + * If n <= 0, no idle connections are retained. + * + * The default max idle connections is currently 2. This may change in + * a future release. */ - contentType(): string + setMaxIdleConns(n: number): void } - interface Reader { + interface DB { /** - * ModTime returns the time the blob was last modified. + * SetMaxOpenConns sets the maximum number of open connections to the database. + * + * If MaxIdleConns is greater than 0 and the new MaxOpenConns is less than + * MaxIdleConns, then MaxIdleConns will be reduced to match the new + * MaxOpenConns limit. + * + * If n <= 0, then there is no limit on the number of open connections. + * The default is 0 (unlimited). */ - modTime(): time.Time + setMaxOpenConns(n: number): void } - interface Reader { + interface DB { /** - * Size returns the size of the blob content in bytes. + * SetConnMaxLifetime sets the maximum amount of time a connection may be reused. + * + * Expired connections may be closed lazily before reuse. + * + * If d <= 0, connections are not closed due to a connection's age. */ - size(): number + setConnMaxLifetime(d: time.Duration): void } - interface Reader { + interface DB { /** - * WriteTo reads from r and writes to w until there's no more data or - * an error occurs. - * The return value is the number of bytes written to w. 
+ * SetConnMaxIdleTime sets the maximum amount of time a connection may be idle. * - * It implements the io.WriterTo interface. + * Expired connections may be closed lazily before reuse. + * + * If d <= 0, connections are not closed due to a connection's idle time. */ - writeTo(w: io.Writer): number + setConnMaxIdleTime(d: time.Duration): void } -} - -namespace store { - /** - * Store defines a concurrent safe in memory key-value data store. - */ - interface Store { + interface DB { + /** + * Stats returns database statistics. + */ + stats(): DBStats } - interface Store { + interface DB { /** - * Reset clears the store and replaces the store data with a - * shallow copy of the provided newData. + * PrepareContext creates a prepared statement for later queries or executions. + * Multiple queries or executions may be run concurrently from the + * returned statement. + * The caller must call the statement's [*Stmt.Close] method + * when the statement is no longer needed. + * + * The provided context is used for the preparation of the statement, not for the + * execution of the statement. */ - reset(newData: _TygojaDict): void + prepareContext(ctx: context.Context, query: string): (Stmt) } - interface Store { + interface DB { /** - * Length returns the current number of elements in the store. + * Prepare creates a prepared statement for later queries or executions. + * Multiple queries or executions may be run concurrently from the + * returned statement. + * The caller must call the statement's [*Stmt.Close] method + * when the statement is no longer needed. + * + * Prepare uses [context.Background] internally; to specify the context, use + * [DB.PrepareContext]. */ - length(): number + prepare(query: string): (Stmt) } - interface Store { + interface DB { /** - * RemoveAll removes all the existing store entries. + * ExecContext executes a query without returning any rows. + * The args are for any placeholder parameters in the query. */ - removeAll(): void + execContext(ctx: context.Context, query: string, ...args: any[]): Result } - interface Store { + interface DB { /** - * Remove removes a single entry from the store. + * Exec executes a query without returning any rows. + * The args are for any placeholder parameters in the query. * - * Remove does nothing if key doesn't exist in the store. + * Exec uses [context.Background] internally; to specify the context, use + * [DB.ExecContext]. */ - remove(key: K): void + exec(query: string, ...args: any[]): Result } - interface Store { + interface DB { /** - * Has checks if element with the specified key exist or not. + * QueryContext executes a query that returns rows, typically a SELECT. + * The args are for any placeholder parameters in the query. */ - has(key: K): boolean + queryContext(ctx: context.Context, query: string, ...args: any[]): (Rows) } - interface Store { + interface DB { /** - * Get returns a single element value from the store. + * Query executes a query that returns rows, typically a SELECT. + * The args are for any placeholder parameters in the query. * - * If key is not set, the zero T value is returned. + * Query uses [context.Background] internally; to specify the context, use + * [DB.QueryContext]. */ - get(key: K): T + query(query: string, ...args: any[]): (Rows) } - interface Store { + interface DB { /** - * GetOk is similar to Get but returns also a boolean indicating whether the key exists or not. + * QueryRowContext executes a query that is expected to return at most one row. 
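The pool-tuning setters above (`setMaxIdleConns`, `setMaxOpenConns`, `setConnMaxLifetime`, `setConnMaxIdleTime`) map one-to-one onto the Go `database/sql` knobs. A minimal sketch, assuming a `db` handle is available from elsewhere (hypothetical here) and that `time.Duration` values are expressed in nanoseconds as in Go:

```
declare const db: sql.DB; // hypothetical database handle

const second = 1_000_000_000; // assumption: time.Duration is a nanosecond count, as in Go

db.setMaxOpenConns(20);                  // cap concurrent connections
db.setMaxIdleConns(10);                  // clamped down to MaxOpenConns if larger
db.setConnMaxLifetime(30 * 60 * second); // recycle connections after 30 minutes
db.setConnMaxIdleTime(5 * 60 * second);  // drop connections idle for more than 5 minutes

db.ping(); // throws if a connection cannot be established
```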
+ * QueryRowContext always returns a non-nil value. Errors are deferred until + * [Row]'s Scan method is called. + * If the query selects no rows, the [*Row.Scan] will return [ErrNoRows]. + * Otherwise, [*Row.Scan] scans the first selected row and discards + * the rest. */ - getOk(key: K): [T, boolean] + queryRowContext(ctx: context.Context, query: string, ...args: any[]): (Row) } - interface Store { + interface DB { /** - * GetAll returns a shallow copy of the current store data. + * QueryRow executes a query that is expected to return at most one row. + * QueryRow always returns a non-nil value. Errors are deferred until + * [Row]'s Scan method is called. + * If the query selects no rows, the [*Row.Scan] will return [ErrNoRows]. + * Otherwise, [*Row.Scan] scans the first selected row and discards + * the rest. + * + * QueryRow uses [context.Background] internally; to specify the context, use + * [DB.QueryRowContext]. */ - getAll(): _TygojaDict + queryRow(query: string, ...args: any[]): (Row) } - interface Store { + interface DB { /** - * Values returns a slice with all of the current store values. + * BeginTx starts a transaction. + * + * The provided context is used until the transaction is committed or rolled back. + * If the context is canceled, the sql package will roll back + * the transaction. [Tx.Commit] will return an error if the context provided to + * BeginTx is canceled. + * + * The provided [TxOptions] is optional and may be nil if defaults should be used. + * If a non-default isolation level is used that the driver doesn't support, + * an error will be returned. */ - values(): Array + beginTx(ctx: context.Context, opts: TxOptions): (Tx) } - interface Store { + interface DB { /** - * Set sets (or overwrite if already exists) a new value for key. + * Begin starts a transaction. The default isolation level is dependent on + * the driver. + * + * Begin uses [context.Background] internally; to specify the context, use + * [DB.BeginTx]. */ - set(key: K, value: T): void + begin(): (Tx) } - interface Store { + interface DB { /** - * SetFunc sets (or overwrite if already exists) a new value resolved - * from the function callback for the provided key. - * - * The function callback receives as argument the old store element value (if exists). - * If there is no old store element, the argument will be the T zero value. - * - * Example: + * Driver returns the database's underlying driver. + */ + driver(): any + } + interface DB { + /** + * Conn returns a single connection by either opening a new connection + * or returning an existing connection from the connection pool. Conn will + * block until either a connection is returned or ctx is canceled. + * Queries run on the same Conn will be run in the same database session. * - * ``` - * s := store.New[string, int](nil) - * s.SetFunc("count", func(old int) int { - * return old + 1 - * }) - * ``` + * Every Conn must be returned to the database pool after use by + * calling [Conn.Close]. */ - setFunc(key: K, fn: (old: T) => T): void + conn(ctx: context.Context): (Conn) } - interface Store { + /** + * Tx is an in-progress database transaction. + * + * A transaction must end with a call to [Tx.Commit] or [Tx.Rollback]. + * + * After a call to [Tx.Commit] or [Tx.Rollback], all operations on the + * transaction fail with [ErrTxDone]. + * + * The statements prepared for a transaction by calling + * the transaction's [Tx.Prepare] or [Tx.Stmt] methods are closed + * by the call to [Tx.Commit] or [Tx.Rollback]. 
+ */ + interface Tx { + } + interface Tx { /** - * GetOrSet retrieves a single existing value for the provided key - * or stores a new one if it doesn't exist. + * Commit commits the transaction. */ - getOrSet(key: K, setFunc: () => T): T + commit(): void } - interface Store { + interface Tx { /** - * SetIfLessThanLimit sets (or overwrite if already exist) a new value for key. + * Rollback aborts the transaction. + */ + rollback(): void + } + interface Tx { + /** + * PrepareContext creates a prepared statement for use within a transaction. * - * This method is similar to Set() but **it will skip adding new elements** - * to the store if the store length has reached the specified limit. - * false is returned if maxAllowedElements limit is reached. + * The returned statement operates within the transaction and will be closed + * when the transaction has been committed or rolled back. + * + * To use an existing prepared statement on this transaction, see [Tx.Stmt]. + * + * The provided context will be used for the preparation of the context, not + * for the execution of the returned statement. The returned statement + * will run in the transaction context. */ - setIfLessThanLimit(key: K, value: T, maxAllowedElements: number): boolean + prepareContext(ctx: context.Context, query: string): (Stmt) } - interface Store { + interface Tx { /** - * UnmarshalJSON implements [json.Unmarshaler] and imports the - * provided JSON data into the store. + * Prepare creates a prepared statement for use within a transaction. * - * The store entries that match with the ones from the data will be overwritten with the new value. + * The returned statement operates within the transaction and will be closed + * when the transaction has been committed or rolled back. + * + * To use an existing prepared statement on this transaction, see [Tx.Stmt]. + * + * Prepare uses [context.Background] internally; to specify the context, use + * [Tx.PrepareContext]. */ - unmarshalJSON(data: string|Array): void + prepare(query: string): (Stmt) } - interface Store { + interface Tx { /** - * MarshalJSON implements [json.Marshaler] and export the current - * store data into valid JSON. + * StmtContext returns a transaction-specific prepared statement from + * an existing statement. + * + * Example: + * + * ``` + * updateMoney, err := db.Prepare("UPDATE balance SET money=money+? WHERE id=?") + * ... + * tx, err := db.Begin() + * ... + * res, err := tx.StmtContext(ctx, updateMoney).Exec(123.45, 98293203) + * ``` + * + * The provided context is used for the preparation of the statement, not for the + * execution of the statement. + * + * The returned statement operates within the transaction and will be closed + * when the transaction has been committed or rolled back. */ - marshalJSON(): string|Array + stmtContext(ctx: context.Context, stmt: Stmt): (Stmt) } -} - -/** - * Package jwt is a Go implementation of JSON Web Tokens: http://self-issued.info/docs/draft-jones-json-web-token.html - * - * See README.md for more info. - */ -namespace jwt { - /** - * MapClaims is a claims type that uses the map[string]interface{} for JSON - * decoding. This is the default claims type if you don't supply one - */ - interface MapClaims extends _TygojaDict{} - interface MapClaims { + interface Tx { /** - * GetExpirationTime implements the Claims interface. + * Stmt returns a transaction-specific prepared statement from + * an existing statement. + * + * Example: + * + * ``` + * updateMoney, err := db.Prepare("UPDATE balance SET money=money+? 
WHERE id=?") + * ... + * tx, err := db.Begin() + * ... + * res, err := tx.Stmt(updateMoney).Exec(123.45, 98293203) + * ``` + * + * The returned statement operates within the transaction and will be closed + * when the transaction has been committed or rolled back. + * + * Stmt uses [context.Background] internally; to specify the context, use + * [Tx.StmtContext]. */ - getExpirationTime(): (NumericDate) + stmt(stmt: Stmt): (Stmt) } - interface MapClaims { + interface Tx { /** - * GetNotBefore implements the Claims interface. + * ExecContext executes a query that doesn't return rows. + * For example: an INSERT and UPDATE. */ - getNotBefore(): (NumericDate) + execContext(ctx: context.Context, query: string, ...args: any[]): Result } - interface MapClaims { + interface Tx { /** - * GetIssuedAt implements the Claims interface. + * Exec executes a query that doesn't return rows. + * For example: an INSERT and UPDATE. + * + * Exec uses [context.Background] internally; to specify the context, use + * [Tx.ExecContext]. */ - getIssuedAt(): (NumericDate) + exec(query: string, ...args: any[]): Result } - interface MapClaims { + interface Tx { /** - * GetAudience implements the Claims interface. + * QueryContext executes a query that returns rows, typically a SELECT. */ - getAudience(): ClaimStrings + queryContext(ctx: context.Context, query: string, ...args: any[]): (Rows) } - interface MapClaims { + interface Tx { /** - * GetIssuer implements the Claims interface. + * Query executes a query that returns rows, typically a SELECT. + * + * Query uses [context.Background] internally; to specify the context, use + * [Tx.QueryContext]. */ - getIssuer(): string + query(query: string, ...args: any[]): (Rows) } - interface MapClaims { + interface Tx { /** - * GetSubject implements the Claims interface. + * QueryRowContext executes a query that is expected to return at most one row. + * QueryRowContext always returns a non-nil value. Errors are deferred until + * [Row]'s Scan method is called. + * If the query selects no rows, the [*Row.Scan] will return [ErrNoRows]. + * Otherwise, the [*Row.Scan] scans the first selected row and discards + * the rest. */ - getSubject(): string + queryRowContext(ctx: context.Context, query: string, ...args: any[]): (Row) } -} - -namespace hook { - /** - * Event implements [Resolver] and it is intended to be used as a base - * Hook event that you can embed in your custom typed event structs. - * - * Example: - * - * ``` - * type CustomEvent struct { - * hook.Event - * - * SomeField int - * } - * ``` - */ - interface Event { - } - interface Event { - /** - * Next calls the next hook handler. - */ - next(): void - } - /** - * Handler defines a single Hook handler. - * Multiple handlers can share the same id. - * If Id is not explicitly set it will be autogenerated by Hook.Add and Hook.AddHandler. - */ - interface Handler { - /** - * Func defines the handler function to execute. - * - * Note that users need to call e.Next() in order to proceed with - * the execution of the hook chain. - */ - func: (_arg0: T) => void - /** - * Id is the unique identifier of the handler. - * - * It could be used later to remove the handler from a hook via [Hook.Remove]. - * - * If missing, an autogenerated value will be assigned when adding - * the handler to a hook. - */ - id: string + interface Tx { /** - * Priority allows changing the default exec priority of the handler within a hook. + * QueryRow executes a query that is expected to return at most one row. 
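The transaction surface above follows the standard begin/exec/commit flow, with `rollback` as the failure path. A minimal sketch reusing the doc's own balance-update example; the `db` handle is hypothetical and errors are assumed to be thrown rather than returned:

```
declare const db: sql.DB; // hypothetical database handle

const tx = db.begin(); // driver-default isolation level

try {
    tx.exec("UPDATE balance SET money = money + ? WHERE id = ?", 123.45, 98293203);
    tx.exec("INSERT INTO audit_log (account_id) VALUES (?)", 98293203);
    tx.commit();
} catch (err) {
    tx.rollback(); // abort on any failure; later calls on tx fail with ErrTxDone
    throw err;
}
```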
+ * QueryRow always returns a non-nil value. Errors are deferred until + * [Row]'s Scan method is called. + * If the query selects no rows, the [*Row.Scan] will return [ErrNoRows]. + * Otherwise, the [*Row.Scan] scans the first selected row and discards + * the rest. * - * If 0, the handler will be executed in the same order it was registered. + * QueryRow uses [context.Background] internally; to specify the context, use + * [Tx.QueryRowContext]. */ - priority: number + queryRow(query: string, ...args: any[]): (Row) } /** - * Hook defines a generic concurrent safe structure for managing event hooks. - * - * When using custom event it must embed the base [hook.Event]. - * - * Example: - * - * ``` - * type CustomEvent struct { - * hook.Event - * SomeField int - * } - * - * h := Hook[*CustomEvent]{} - * - * h.BindFunc(func(e *CustomEvent) error { - * println(e.SomeField) - * - * return e.Next() - * }) + * Stmt is a prepared statement. + * A Stmt is safe for concurrent use by multiple goroutines. * - * h.Trigger(&CustomEvent{ SomeField: 123 }) - * ``` - */ - interface Hook { - } - /** - * TaggedHook defines a proxy hook which register handlers that are triggered only - * if the TaggedHook.tags are empty or includes at least one of the event data tag(s). - */ - type _sreGGOi = mainHook - interface TaggedHook extends _sreGGOi { - } -} - -/** - * Package types implements some commonly used db serializable types - * like datetime, json, etc. - */ -namespace types { - /** - * DateTime represents a [time.Time] instance in UTC that is wrapped - * and serialized using the app default date layout. + * If a Stmt is prepared on a [Tx] or [Conn], it will be bound to a single + * underlying connection forever. If the [Tx] or [Conn] closes, the Stmt will + * become unusable and all operations will return an error. + * If a Stmt is prepared on a [DB], it will remain usable for the lifetime of the + * [DB]. When the Stmt needs to execute on a new underlying connection, it will + * prepare itself on the new connection automatically. */ - interface DateTime { - } - interface DateTime { - /** - * Time returns the internal [time.Time] instance. - */ - time(): time.Time + interface Stmt { } - interface DateTime { + interface Stmt { /** - * Add returns a new DateTime based on the current DateTime + the specified duration. + * ExecContext executes a prepared statement with the given arguments and + * returns a [Result] summarizing the effect of the statement. */ - add(duration: time.Duration): DateTime + execContext(ctx: context.Context, ...args: any[]): Result } - interface DateTime { + interface Stmt { /** - * Sub returns a [time.Duration] by subtracting the specified DateTime from the current one. + * Exec executes a prepared statement with the given arguments and + * returns a [Result] summarizing the effect of the statement. * - * If the result exceeds the maximum (or minimum) value that can be stored in a [time.Duration], - * the maximum (or minimum) duration will be returned. + * Exec uses [context.Background] internally; to specify the context, use + * [Stmt.ExecContext]. */ - sub(u: DateTime): time.Duration + exec(...args: any[]): Result } - interface DateTime { + interface Stmt { /** - * AddDate returns a new DateTime based on the current one + duration. - * - * It follows the same rules as [time.AddDate]. + * QueryContext executes a prepared query statement with the given arguments + * and returns the query results as a [*Rows]. 
*/ - addDate(years: number, months: number, days: number): DateTime + queryContext(ctx: context.Context, ...args: any[]): (Rows) } - interface DateTime { + interface Stmt { /** - * After reports whether the current DateTime instance is after u. + * Query executes a prepared query statement with the given arguments + * and returns the query results as a *Rows. + * + * Query uses [context.Background] internally; to specify the context, use + * [Stmt.QueryContext]. */ - after(u: DateTime): boolean + query(...args: any[]): (Rows) } - interface DateTime { + interface Stmt { /** - * Before reports whether the current DateTime instance is before u. + * QueryRowContext executes a prepared query statement with the given arguments. + * If an error occurs during the execution of the statement, that error will + * be returned by a call to Scan on the returned [*Row], which is always non-nil. + * If the query selects no rows, the [*Row.Scan] will return [ErrNoRows]. + * Otherwise, the [*Row.Scan] scans the first selected row and discards + * the rest. */ - before(u: DateTime): boolean + queryRowContext(ctx: context.Context, ...args: any[]): (Row) } - interface DateTime { + interface Stmt { /** - * Compare compares the current DateTime instance with u. - * If the current instance is before u, it returns -1. - * If the current instance is after u, it returns +1. - * If they're the same, it returns 0. + * QueryRow executes a prepared query statement with the given arguments. + * If an error occurs during the execution of the statement, that error will + * be returned by a call to Scan on the returned [*Row], which is always non-nil. + * If the query selects no rows, the [*Row.Scan] will return [ErrNoRows]. + * Otherwise, the [*Row.Scan] scans the first selected row and discards + * the rest. + * + * Example usage: + * + * ``` + * var name string + * err := nameByUseridStmt.QueryRow(id).Scan(&name) + * ``` + * + * QueryRow uses [context.Background] internally; to specify the context, use + * [Stmt.QueryRowContext]. */ - compare(u: DateTime): number + queryRow(...args: any[]): (Row) } - interface DateTime { + interface Stmt { /** - * Equal reports whether the current DateTime and u represent the same time instant. - * Two DateTime can be equal even if they are in different locations. - * For example, 6:00 +0200 and 4:00 UTC are Equal. + * Close closes the statement. */ - equal(u: DateTime): boolean + close(): void } - interface DateTime { - /** - * Unix returns the current DateTime as a Unix time, aka. - * the number of seconds elapsed since January 1, 1970 UTC. - */ - unix(): number + /** + * Rows is the result of a query. Its cursor starts before the first row + * of the result set. Use [Rows.Next] to advance from row to row. + */ + interface Rows { } - interface DateTime { + interface Rows { /** - * IsZero checks whether the current DateTime instance has zero time value. + * Next prepares the next result row for reading with the [Rows.Scan] method. It + * returns true on success, or false if there is no next result row or an error + * happened while preparing it. [Rows.Err] should be consulted to distinguish between + * the two cases. + * + * Every call to [Rows.Scan], even the first one, must be preceded by a call to [Rows.Next]. */ - isZero(): boolean + next(): boolean } - interface DateTime { + interface Rows { /** - * String serializes the current DateTime instance into a formatted - * UTC date string. + * NextResultSet prepares the next result set for reading. 
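> Illustrative note: a similarly hedged sketch for the `Stmt` (prepared statement) methods declared above. The `stmt` value and its underlying query are assumptions; iteration over the returned rows is shown separately further below.

```
// stmt is assumed to be an already prepared "SELECT ... WHERE status = ?" statement.
const rows = stmt.query("pending") // positional args, matching the declarations above
// ... iterate rows (see the Rows sketch further below) ...
stmt.close() // release the prepared statement once it is no longer needed
```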
It reports whether + * there is further result sets, or false if there is no further result set + * or if there is an error advancing to it. The [Rows.Err] method should be consulted + * to distinguish between the two cases. * - * The zero value is serialized to an empty string. + * After calling NextResultSet, the [Rows.Next] method should always be called before + * scanning. If there are further result sets they may not have rows in the result + * set. */ - string(): string + nextResultSet(): boolean } - interface DateTime { + interface Rows { /** - * MarshalJSON implements the [json.Marshaler] interface. + * Err returns the error, if any, that was encountered during iteration. + * Err may be called after an explicit or implicit [Rows.Close]. */ - marshalJSON(): string|Array + err(): void } - interface DateTime { + interface Rows { /** - * UnmarshalJSON implements the [json.Unmarshaler] interface. + * Columns returns the column names. + * Columns returns an error if the rows are closed. */ - unmarshalJSON(b: string|Array): void + columns(): Array } - interface DateTime { + interface Rows { /** - * Value implements the [driver.Valuer] interface. + * ColumnTypes returns column information such as column type, length, + * and nullable. Some information may not be available from some drivers. */ - value(): any + columnTypes(): Array<(ColumnType | undefined)> } - interface DateTime { + interface Rows { /** - * Scan implements [sql.Scanner] interface to scan the provided value - * into the current DateTime instance. - */ - scan(value: any): void - } - /** - * GeoPoint defines a struct for storing geo coordinates as serialized json object - * (e.g. {lon:0,lat:0}). - * - * Note: using object notation and not a plain array to avoid the confusion - * as there doesn't seem to be a fixed standard for the coordinates order. - */ - interface GeoPoint { - lon: number - lat: number - } - interface GeoPoint { - /** - * String returns the string representation of the current GeoPoint instance. + * Scan copies the columns in the current row into the values pointed + * at by dest. The number of values in dest must be the same as the + * number of columns in [Rows]. + * + * Scan converts columns read from the database into the following + * common Go types and special types provided by the sql package: + * + * ``` + * *string + * *[]byte + * *int, *int8, *int16, *int32, *int64 + * *uint, *uint8, *uint16, *uint32, *uint64 + * *bool + * *float32, *float64 + * *interface{} + * *RawBytes + * *Rows (cursor value) + * any type implementing Scanner (see Scanner docs) + * ``` + * + * In the most simple case, if the type of the value from the source + * column is an integer, bool or string type T and dest is of type *T, + * Scan simply assigns the value through the pointer. + * + * Scan also converts between string and numeric types, as long as no + * information would be lost. While Scan stringifies all numbers + * scanned from numeric database columns into *string, scans into + * numeric types are checked for overflow. For example, a float64 with + * value 300 or a string with value "300" can scan into a uint16, but + * not into a uint8, though float64(255) or "255" can scan into a + * uint8. One exception is that scans of some float64 numbers to + * strings may lose information when stringifying. In general, scan + * floating point columns into *float64. + * + * If a dest argument has type *[]byte, Scan saves in that argument a + * copy of the corresponding data. 
The copy is owned by the caller and + * can be modified and held indefinitely. The copy can be avoided by + * using an argument of type [*RawBytes] instead; see the documentation + * for [RawBytes] for restrictions on its use. + * + * If an argument has type *interface{}, Scan copies the value + * provided by the underlying driver without conversion. When scanning + * from a source value of type []byte to *interface{}, a copy of the + * slice is made and the caller owns the result. + * + * Source values of type [time.Time] may be scanned into values of type + * *time.Time, *interface{}, *string, or *[]byte. When converting to + * the latter two, [time.RFC3339Nano] is used. + * + * Source values of type bool may be scanned into types *bool, + * *interface{}, *string, *[]byte, or [*RawBytes]. + * + * For scanning into *bool, the source may be true, false, 1, 0, or + * string inputs parseable by [strconv.ParseBool]. + * + * Scan can also convert a cursor returned from a query, such as + * "select cursor(select * from my_table) from dual", into a + * [*Rows] value that can itself be scanned from. The parent + * select query will close any cursor [*Rows] if the parent [*Rows] is closed. + * + * If any of the first arguments implementing [Scanner] returns an error, + * that error will be wrapped in the returned error. */ - string(): string + scan(...dest: any[]): void } - interface GeoPoint { + interface Rows { /** - * AsMap implements [core.mapExtractor] and returns a value suitable - * to be used in an API rule expression. + * Close closes the [Rows], preventing further enumeration. If [Rows.Next] is called + * and returns false and there are no further result sets, + * the [Rows] are closed automatically and it will suffice to check the + * result of [Rows.Err]. Close is idempotent and does not affect the result of [Rows.Err]. */ - asMap(): _TygojaDict + close(): void } - interface GeoPoint { + /** + * A Result summarizes an executed SQL command. + */ + interface Result { + [key:string]: any; /** - * Value implements the [driver.Valuer] interface. + * LastInsertId returns the integer generated by the database + * in response to a command. Typically this will be from an + * "auto increment" column when inserting a new row. Not all + * databases support this feature, and the syntax of such + * statements varies. */ - value(): any - } - interface GeoPoint { + lastInsertId(): number /** - * Scan implements [sql.Scanner] interface to scan the provided value - * into the current GeoPoint instance. - * - * The value argument could be nil (no-op), another GeoPoint instance, - * map or serialized json object with lat-lon props. + * RowsAffected returns the number of rows affected by an + * update, insert, or delete. Not every database or database + * driver may support this. */ - scan(value: any): void + rowsAffected(): number } +} + +/** + * Package cron implements a crontab-like service to execute and schedule + * repeative tasks/jobs. + * + * Example: + * + * ``` + * c := cron.New() + * c.MustAdd("dailyReport", "0 0 * * *", func() { ... }) + * c.Start() + * ``` + */ +namespace cron { /** - * JSONArray defines a slice that is safe for json and db read/write. - */ - interface JSONArray extends Array{} - /** - * JSONMap defines a map that is safe for json and db read/write. - */ - interface JSONMap extends _TygojaDict{} - /** - * JSONRaw defines a json value type that is safe for db read/write. + * Cron is a crontab-like struct for tasks/jobs scheduling. 
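> Illustrative note, before moving on to the cron declarations: a hedged iteration sketch for the `Rows` and `Result` interfaces above. The `rows` value is assumed to come from one of the query methods shown earlier; scan destinations are runtime-specific and omitted.

```
console.log(rows.columns()) // e.g. ["id", "name"]

while (rows.next()) {
    // rows.scan(...) copies the current row into the provided destinations
    // (destination handling is runtime-specific and not shown here).
}

rows.err()   // surfaces any error encountered during iteration
rows.close() // idempotent; Rows are also closed automatically once next()
             // returns false and no further result sets remain
```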
*/ - interface JSONRaw extends Array{} - interface JSONRaw { + interface Cron { + } + interface Cron { /** - * String returns the current JSONRaw instance as a json encoded string. + * SetInterval changes the current cron tick interval + * (it usually should be >= 1 minute). */ - string(): string + setInterval(d: time.Duration): void } - interface JSONRaw { + interface Cron { /** - * MarshalJSON implements the [json.Marshaler] interface. + * SetTimezone changes the current cron tick timezone. */ - marshalJSON(): string|Array + setTimezone(l: time.Location): void } - interface JSONRaw { + interface Cron { /** - * UnmarshalJSON implements the [json.Unmarshaler] interface. + * MustAdd is similar to Add() but panic on failure. */ - unmarshalJSON(b: string|Array): void + mustAdd(jobId: string, cronExpr: string, run: () => void): void } - interface JSONRaw { + interface Cron { /** - * Value implements the [driver.Valuer] interface. + * Add registers a single cron job. + * + * If there is already a job with the provided id, then the old job + * will be replaced with the new one. + * + * cronExpr is a regular cron expression, eg. "0 *\/3 * * *" (aka. at minute 0 past every 3rd hour). + * Check cron.NewSchedule() for the supported tokens. */ - value(): any + add(jobId: string, cronExpr: string, fn: () => void): void } - interface JSONRaw { + interface Cron { /** - * Scan implements [sql.Scanner] interface to scan the provided value - * into the current JSONRaw instance. + * Remove removes a single cron job by its id. */ - scan(value: any): void + remove(jobId: string): void } -} - -namespace search { - /** - * Result defines the returned search result structure. - */ - interface Result { - items: any - page: number - perPage: number - totalItems: number - totalPages: number + interface Cron { + /** + * RemoveAll removes all registered cron jobs. + */ + removeAll(): void } - /** - * ResolverResult defines a single FieldResolver.Resolve() successfully parsed result. - */ - interface ResolverResult { + interface Cron { /** - * Identifier is the plain SQL identifier/column that will be used - * in the final db expression as left or right operand. + * Total returns the current total number of registered cron jobs. */ - identifier: string + total(): number + } + interface Cron { /** - * NoCoalesce instructs to not use COALESCE or NULL fallbacks - * when building the identifier expression. + * Jobs returns a shallow copy of the currently registered cron jobs. */ - noCoalesce: boolean + jobs(): Array<(Job | undefined)> + } + interface Cron { /** - * Params is a map with db placeholder->value pairs that will be added - * to the query when building both resolved operands/sides in a single expression. + * Stop stops the current cron ticker (if not already). + * + * You can resume the ticker by calling Start(). */ - params: dbx.Params + stop(): void + } + interface Cron { /** - * MultiMatchSubQuery is an optional sub query expression that will be added - * in addition to the combined ResolverResult expression during build. + * Start starts the cron ticker. + * + * Calling Start() on already started cron will restart the ticker. */ - multiMatchSubQuery: dbx.Expression + start(): void + } + interface Cron { /** - * AfterBuild is an optional function that will be called after building - * and combining the result of both resolved operands/sides in a single expression. + * HasStarted checks whether the current Cron ticker has been started. 
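> Illustrative note: a hedged scheduling sketch for the `Cron` declarations above. How the `Cron` instance (here named `c`) is obtained is an assumption outside this excerpt; only methods declared above are used.

```
c.mustAdd("dailyReport", "0 0 * * *", () => {
    // job body; re-registering the same id replaces the old job (see Add docs above)
})

c.start()                   // (re)starts the ticker
console.log(c.hasStarted()) // true
console.log(c.total())      // number of registered jobs
c.remove("dailyReport")
```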
*/ - afterBuild: (expr: dbx.Expression) => dbx.Expression + hasStarted(): boolean } } -namespace router { - // @ts-ignore - import validation = ozzo_validation +namespace exec { /** - * ApiError defines the struct for a basic api error response. + * Cmd represents an external command being prepared or run. + * + * A Cmd cannot be reused after calling its [Cmd.Run], [Cmd.Output] or [Cmd.CombinedOutput] + * methods. */ - interface ApiError { - data: _TygojaDict - message: string - status: number - } - interface ApiError { - /** - * Error makes it compatible with the `error` interface. - */ - error(): string - } - interface ApiError { - /** - * RawData returns the unformatted error data (could be an internal error, text, etc.) - */ - rawData(): any - } - interface ApiError { - /** - * Is reports whether the current ApiError wraps the target. - */ - is(target: Error): boolean - } - /** - * Event specifies based Route handler event that is usually intended - * to be embedded as part of a custom event struct. - * - * NB! It is expected that the Response and Request fields are always set. - */ - type _sjmIjPM = hook.Event - interface Event extends _sjmIjPM { - response: http.ResponseWriter - request?: http.Request - } - interface Event { + interface Cmd { /** - * Written reports whether the current response has already been written. + * Path is the path of the command to run. * - * This method always returns false if e.ResponseWritter doesn't implement the WriteTracker interface - * (all router package handlers receives a ResponseWritter that implements it unless explicitly replaced with a custom one). + * This is the only field that must be set to a non-zero + * value. If Path is relative, it is evaluated relative + * to Dir. */ - written(): boolean - } - interface Event { + path: string /** - * Status reports the status code of the current response. + * Args holds command line arguments, including the command as Args[0]. + * If the Args field is empty or nil, Run uses {Path}. * - * This method always returns 0 if e.Response doesn't implement the StatusTracker interface - * (all router package handlers receives a ResponseWritter that implements it unless explicitly replaced with a custom one). + * In typical use, both Path and Args are set by calling Command. */ - status(): number - } - interface Event { + args: Array /** - * Flush flushes buffered data to the current response. + * Env specifies the environment of the process. + * Each entry is of the form "key=value". + * If Env is nil, the new process uses the current process's + * environment. + * If Env contains duplicate environment keys, only the last + * value in the slice for each duplicate key is used. + * As a special case on Windows, SYSTEMROOT is always added if + * missing and not explicitly set to the empty string. * - * Returns [http.ErrNotSupported] if e.Response doesn't implement the [http.Flusher] interface - * (all router package handlers receives a ResponseWritter that implements it unless explicitly replaced with a custom one). + * See also the Dir field, which may set PWD in the environment. */ - flush(): void - } - interface Event { + env: Array /** - * IsTLS reports whether the connection on which the request was received is TLS. + * Dir specifies the working directory of the command. + * If Dir is the empty string, Run runs the command in the + * calling process's current directory. + * + * On Unix systems, the value of Dir also determines the + * child process's PWD environment variable if not otherwise + * specified. 
A Unix process represents its working directory + * not by name but as an implicit reference to a node in the + * file tree. So, if the child process obtains its working + * directory by calling a function such as C's getcwd, which + * computes the canonical name by walking up the file tree, it + * will not recover the original value of Dir if that value + * was an alias involving symbolic links. However, if the + * child process calls Go's [os.Getwd] or GNU C's + * get_current_dir_name, and the value of PWD is an alias for + * the current directory, those functions will return the + * value of PWD, which matches the value of Dir. */ - isTLS(): boolean - } - interface Event { + dir: string /** - * SetCookie is an alias for [http.SetCookie]. + * Stdin specifies the process's standard input. * - * SetCookie adds a Set-Cookie header to the current response's headers. - * The provided cookie must have a valid Name. - * Invalid cookies may be silently dropped. + * If Stdin is nil, the process reads from the null device (os.DevNull). + * + * If Stdin is an *os.File, the process's standard input is connected + * directly to that file. + * + * Otherwise, during the execution of the command a separate + * goroutine reads from Stdin and delivers that data to the command + * over a pipe. In this case, Wait does not complete until the goroutine + * stops copying, either because it has reached the end of Stdin + * (EOF or a read error), or because writing to the pipe returned an error, + * or because a nonzero WaitDelay was set and expired. */ - setCookie(cookie: http.Cookie): void - } - interface Event { + stdin: io.Reader /** - * RemoteIP returns the IP address of the client that sent the request. + * Stdout and Stderr specify the process's standard output and error. * - * IPv6 addresses are returned expanded. - * For example, "2001:db8::1" becomes "2001:0db8:0000:0000:0000:0000:0000:0001". + * If either is nil, Run connects the corresponding file descriptor + * to the null device (os.DevNull). * - * Note that if you are behind reverse proxy(ies), this method returns - * the IP of the last connecting proxy. + * If either is an *os.File, the corresponding output from the process + * is connected directly to that file. + * + * Otherwise, during the execution of the command a separate goroutine + * reads from the process over a pipe and delivers that data to the + * corresponding Writer. In this case, Wait does not complete until the + * goroutine reaches EOF or encounters an error or a nonzero WaitDelay + * expires. + * + * If Stdout and Stderr are the same writer, and have a type that can + * be compared with ==, at most one goroutine at a time will call Write. */ - remoteIP(): string - } - interface Event { + stdout: io.Writer + stderr: io.Writer /** - * FindUploadedFiles extracts all form files of "key" from a http request - * and returns a slice with filesystem.File instances (if any). + * ExtraFiles specifies additional open files to be inherited by the + * new process. It does not include standard input, standard output, or + * standard error. If non-nil, entry i becomes file descriptor 3+i. + * + * ExtraFiles is not supported on Windows. */ - findUploadedFiles(key: string): Array<(filesystem.File | undefined)> - } - interface Event { + extraFiles: Array<(os.File | undefined)> /** - * Get retrieves single value from the current event data store. + * SysProcAttr holds optional, operating system-specific attributes. + * Run passes it to os.StartProcess as the os.ProcAttr's Sys field. 
*/ - get(key: string): any - } - interface Event { + sysProcAttr?: syscall.SysProcAttr /** - * GetAll returns a copy of the current event data store. + * Process is the underlying process, once started. */ - getAll(): _TygojaDict - } - interface Event { + process?: os.Process /** - * Set saves single value into the current event data store. + * ProcessState contains information about an exited process. + * If the process was started successfully, Wait or Run will + * populate its ProcessState when the command completes. */ - set(key: string, value: any): void - } - interface Event { + processState?: os.ProcessState + err: Error // LookPath error, if any. /** - * SetAll saves all items from m into the current event data store. + * If Cancel is non-nil, the command must have been created with + * CommandContext and Cancel will be called when the command's + * Context is done. By default, CommandContext sets Cancel to + * call the Kill method on the command's Process. + * + * Typically a custom Cancel will send a signal to the command's + * Process, but it may instead take other actions to initiate cancellation, + * such as closing a stdin or stdout pipe or sending a shutdown request on a + * network socket. + * + * If the command exits with a success status after Cancel is + * called, and Cancel does not return an error equivalent to + * os.ErrProcessDone, then Wait and similar methods will return a non-nil + * error: either an error wrapping the one returned by Cancel, + * or the error from the Context. + * (If the command exits with a non-success status, or Cancel + * returns an error that wraps os.ErrProcessDone, Wait and similar methods + * continue to return the command's usual exit status.) + * + * If Cancel is set to nil, nothing will happen immediately when the command's + * Context is done, but a nonzero WaitDelay will still take effect. That may + * be useful, for example, to work around deadlocks in commands that do not + * support shutdown signals but are expected to always finish quickly. + * + * Cancel will not be called if Start returns a non-nil error. */ - setAll(m: _TygojaDict): void - } - interface Event { + cancel: () => void /** - * String writes a plain string response. + * If WaitDelay is non-zero, it bounds the time spent waiting on two sources + * of unexpected delay in Wait: a child process that fails to exit after the + * associated Context is canceled, and a child process that exits but leaves + * its I/O pipes unclosed. + * + * The WaitDelay timer starts when either the associated Context is done or a + * call to Wait observes that the child process has exited, whichever occurs + * first. When the delay has elapsed, the command shuts down the child process + * and/or its I/O pipes. + * + * If the child process has failed to exit — perhaps because it ignored or + * failed to receive a shutdown signal from a Cancel function, or because no + * Cancel function was set — then it will be terminated using os.Process.Kill. + * + * Then, if the I/O pipes communicating with the child process are still open, + * those pipes are closed in order to unblock any goroutines currently blocked + * on Read or Write calls. + * + * If pipes are closed due to WaitDelay, no Cancel call has occurred, + * and the command has otherwise exited with a successful status, Wait and + * similar methods will return ErrWaitDelay instead of nil. 
+ * + * If WaitDelay is zero (the default), I/O pipes will be read until EOF, + * which might not occur until orphaned subprocesses of the command have + * also closed their descriptors for the pipes. */ - string(status: number, data: string): void + waitDelay: time.Duration } - interface Event { + interface Cmd { /** - * HTML writes an HTML response. + * String returns a human-readable description of c. + * It is intended only for debugging. + * In particular, it is not suitable for use as input to a shell. + * The output of String may vary across Go releases. */ - html(status: number, data: string): void + string(): string } - interface Event { + interface Cmd { /** - * JSON writes a JSON response. + * Run starts the specified command and waits for it to complete. * - * It also provides a generic response data fields picker if the "fields" query parameter is set. - * For example, if you are requesting `?fields=a,b` for `e.JSON(200, map[string]int{ "a":1, "b":2, "c":3 })`, - * it should result in a JSON response like: `{"a":1, "b": 2}`. + * The returned error is nil if the command runs, has no problems + * copying stdin, stdout, and stderr, and exits with a zero exit + * status. + * + * If the command starts but does not complete successfully, the error is of + * type [*ExitError]. Other error types may be returned for other situations. + * + * If the calling goroutine has locked the operating system thread + * with [runtime.LockOSThread] and modified any inheritable OS-level + * thread state (for example, Linux or Plan 9 name spaces), the new + * process will inherit the caller's thread state. */ - json(status: number, data: any): void + run(): void } - interface Event { + interface Cmd { /** - * XML writes an XML response. - * It automatically prepends the generic [xml.Header] string to the response. + * Start starts the specified command but does not wait for it to complete. + * + * If Start returns successfully, the c.Process field will be set. + * + * After a successful call to Start the [Cmd.Wait] method must be called in + * order to release associated system resources. */ - xml(status: number, data: any): void + start(): void } - interface Event { + interface Cmd { /** - * Stream streams the specified reader into the response. - */ - stream(status: number, contentType: string, reader: io.Reader): void - } - interface Event { - /** - * Blob writes a blob (bytes slice) response. - */ - blob(status: number, contentType: string, b: string|Array): void - } - interface Event { - /** - * FileFS serves the specified filename from fsys. - * - * It is similar to [echo.FileFS] for consistency with earlier versions. - */ - fileFS(fsys: fs.FS, filename: string): void - } - interface Event { - /** - * NoContent writes a response with no body (ex. 204). - */ - noContent(status: number): void - } - interface Event { - /** - * Redirect writes a redirect response to the specified url. - * The status code must be in between 300 – 399 range. 
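> Illustrative note: the router `Event` response helpers shown above (relocated by this diff) are typically called inside a route handler. A minimal hedged sketch, assuming `e` is such an `Event`; the route registration itself is not part of this excerpt, and only one response should be written per request.

```
// JSON response; honors the generic ?fields picker described above.
e.json(200, { "a": 1, "b": 2, "c": 3 })

// Alternatives (write exactly one response per request):
// e.string(200, "ok")
// e.redirect(302, "/login")
// e.noContent(204)
```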
- */ - redirect(status: number, url: string): void - } - interface Event { - error(status: number, message: string, errData: any): (ApiError) - } - interface Event { - badRequestError(message: string, errData: any): (ApiError) - } - interface Event { - notFoundError(message: string, errData: any): (ApiError) - } - interface Event { - forbiddenError(message: string, errData: any): (ApiError) - } - interface Event { - unauthorizedError(message: string, errData: any): (ApiError) - } - interface Event { - tooManyRequestsError(message: string, errData: any): (ApiError) - } - interface Event { - internalServerError(message: string, errData: any): (ApiError) - } - interface Event { - /** - * BindBody unmarshal the request body into the provided dst. - * - * dst must be either a struct pointer or map[string]any. - * - * The rules how the body will be scanned depends on the request Content-Type. + * Wait waits for the command to exit and waits for any copying to + * stdin or copying from stdout or stderr to complete. * - * Currently the following Content-Types are supported: - * ``` - * - application/json - * - text/xml, application/xml - * - multipart/form-data, application/x-www-form-urlencoded - * ``` + * The command must have been started by [Cmd.Start]. * - * Respectively the following struct tags are supported (again, which one will be used depends on the Content-Type): - * ``` - * - "json" (json body)- uses the builtin Go json package for unmarshaling. - * - "xml" (xml body) - uses the builtin Go xml package for unmarshaling. - * - "form" (form data) - utilizes the custom [router.UnmarshalRequestData] method. - * ``` + * The returned error is nil if the command runs, has no problems + * copying stdin, stdout, and stderr, and exits with a zero exit + * status. * - * NB! When dst is a struct make sure that it doesn't have public fields - * that shouldn't be bindable and it is advisible such fields to be unexported - * or have a separate struct just for the binding. For example: + * If the command fails to run or doesn't complete successfully, the + * error is of type [*ExitError]. Other error types may be + * returned for I/O problems. * - * ``` - * data := struct{ - * somethingPrivate string + * If any of c.Stdin, c.Stdout or c.Stderr are not an [*os.File], Wait also waits + * for the respective I/O loop copying to or from the process to complete. * - * Title string `json:"title" form:"title"` - * Total int `json:"total" form:"total"` - * } - * err := e.BindBody(&data) - * ``` + * Wait releases any resources associated with the [Cmd]. */ - bindBody(dst: any): void - } - /** - * Router defines a thin wrapper around the standard Go [http.ServeMux] by - * adding support for routing sub-groups, middlewares and other common utils. - * - * Example: - * - * ``` - * r := NewRouter[*MyEvent](eventFactory) - * - * // middlewares - * r.BindFunc(m1, m2) - * - * // routes - * r.GET("/test", handler1) - * - * // sub-routers/groups - * api := r.Group("/api") - * api.GET("/admins", handler2) - * - * // generate a http.ServeMux instance based on the router configurations - * mux, _ := r.BuildMux() - * - * http.ListenAndServe("localhost:8090", mux) - * ``` - */ - type _sOiUHJu = RouterGroup - interface Router extends _sOiUHJu { + wait(): void } -} - -namespace exec { - /** - * Cmd represents an external command being prepared or run. - * - * A Cmd cannot be reused after calling its [Cmd.Run], [Cmd.Output] or [Cmd.CombinedOutput] - * methods. - */ interface Cmd { /** - * Path is the path of the command to run. 
- * - * This is the only field that must be set to a non-zero - * value. If Path is relative, it is evaluated relative - * to Dir. - */ - path: string - /** - * Args holds command line arguments, including the command as Args[0]. - * If the Args field is empty or nil, Run uses {Path}. - * - * In typical use, both Path and Args are set by calling Command. - */ - args: Array - /** - * Env specifies the environment of the process. - * Each entry is of the form "key=value". - * If Env is nil, the new process uses the current process's - * environment. - * If Env contains duplicate environment keys, only the last - * value in the slice for each duplicate key is used. - * As a special case on Windows, SYSTEMROOT is always added if - * missing and not explicitly set to the empty string. + * Output runs the command and returns its standard output. + * Any returned error will usually be of type [*ExitError]. + * If c.Stderr was nil and the returned error is of type + * [*ExitError], Output populates the Stderr field of the + * returned error. */ - env: Array + output(): string|Array + } + interface Cmd { /** - * Dir specifies the working directory of the command. - * If Dir is the empty string, Run runs the command in the - * calling process's current directory. + * CombinedOutput runs the command and returns its combined standard + * output and standard error. */ - dir: string + combinedOutput(): string|Array + } + interface Cmd { /** - * Stdin specifies the process's standard input. - * - * If Stdin is nil, the process reads from the null device (os.DevNull). - * - * If Stdin is an *os.File, the process's standard input is connected - * directly to that file. - * - * Otherwise, during the execution of the command a separate - * goroutine reads from Stdin and delivers that data to the command - * over a pipe. In this case, Wait does not complete until the goroutine - * stops copying, either because it has reached the end of Stdin - * (EOF or a read error), or because writing to the pipe returned an error, - * or because a nonzero WaitDelay was set and expired. + * StdinPipe returns a pipe that will be connected to the command's + * standard input when the command starts. + * The pipe will be closed automatically after [Cmd.Wait] sees the command exit. + * A caller need only call Close to force the pipe to close sooner. + * For example, if the command being run will not exit until standard input + * is closed, the caller must close the pipe. */ - stdin: io.Reader + stdinPipe(): io.WriteCloser + } + interface Cmd { /** - * Stdout and Stderr specify the process's standard output and error. - * - * If either is nil, Run connects the corresponding file descriptor - * to the null device (os.DevNull). - * - * If either is an *os.File, the corresponding output from the process - * is connected directly to that file. - * - * Otherwise, during the execution of the command a separate goroutine - * reads from the process over a pipe and delivers that data to the - * corresponding Writer. In this case, Wait does not complete until the - * goroutine reaches EOF or encounters an error or a nonzero WaitDelay - * expires. + * StdoutPipe returns a pipe that will be connected to the command's + * standard output when the command starts. * - * If Stdout and Stderr are the same writer, and have a type that can - * be compared with ==, at most one goroutine at a time will call Write. + * [Cmd.Wait] will close the pipe after seeing the command exit, so most callers + * need not close the pipe themselves. 
It is thus incorrect to call Wait + * before all reads from the pipe have completed. + * For the same reason, it is incorrect to call [Cmd.Run] when using StdoutPipe. + * See the example for idiomatic usage. */ - stdout: io.Writer - stderr: io.Writer + stdoutPipe(): io.ReadCloser + } + interface Cmd { /** - * ExtraFiles specifies additional open files to be inherited by the - * new process. It does not include standard input, standard output, or - * standard error. If non-nil, entry i becomes file descriptor 3+i. + * StderrPipe returns a pipe that will be connected to the command's + * standard error when the command starts. * - * ExtraFiles is not supported on Windows. - */ - extraFiles: Array<(os.File | undefined)> - /** - * SysProcAttr holds optional, operating system-specific attributes. - * Run passes it to os.StartProcess as the os.ProcAttr's Sys field. - */ - sysProcAttr?: syscall.SysProcAttr - /** - * Process is the underlying process, once started. + * [Cmd.Wait] will close the pipe after seeing the command exit, so most callers + * need not close the pipe themselves. It is thus incorrect to call Wait + * before all reads from the pipe have completed. + * For the same reason, it is incorrect to use [Cmd.Run] when using StderrPipe. + * See the StdoutPipe example for idiomatic usage. */ - process?: os.Process + stderrPipe(): io.ReadCloser + } + interface Cmd { /** - * ProcessState contains information about an exited process. - * If the process was started successfully, Wait or Run will - * populate its ProcessState when the command completes. + * Environ returns a copy of the environment in which the command would be run + * as it is currently configured. */ - processState?: os.ProcessState - err: Error // LookPath error, if any. - /** - * If Cancel is non-nil, the command must have been created with - * CommandContext and Cancel will be called when the command's - * Context is done. By default, CommandContext sets Cancel to - * call the Kill method on the command's Process. - * - * Typically a custom Cancel will send a signal to the command's - * Process, but it may instead take other actions to initiate cancellation, - * such as closing a stdin or stdout pipe or sending a shutdown request on a - * network socket. - * - * If the command exits with a success status after Cancel is - * called, and Cancel does not return an error equivalent to - * os.ErrProcessDone, then Wait and similar methods will return a non-nil - * error: either an error wrapping the one returned by Cancel, - * or the error from the Context. - * (If the command exits with a non-success status, or Cancel - * returns an error that wraps os.ErrProcessDone, Wait and similar methods - * continue to return the command's usual exit status.) - * - * If Cancel is set to nil, nothing will happen immediately when the command's - * Context is done, but a nonzero WaitDelay will still take effect. That may - * be useful, for example, to work around deadlocks in commands that do not - * support shutdown signals but are expected to always finish quickly. - * - * Cancel will not be called if Start returns a non-nil error. + environ(): Array + } +} + +/** + * Package syntax parses regular expressions into parse trees and compiles + * parse trees into programs. Most clients of regular expressions will use the + * facilities of package [regexp] (such as [regexp.Compile] and [regexp.Match]) instead of this package. 
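> Illustrative note: to tie together the `Cmd` fields and methods above, a hedged sketch. How the `Cmd` value is constructed by the runtime, and whether its fields can be assigned directly from scripts, are assumptions and not shown in this excerpt.

```
// cmd is assumed to be an already constructed Cmd (its path/args set elsewhere).
cmd.dir = "/tmp"        // working directory (see the Dir field docs above)
cmd.env = ["MODE=test"] // full replacement environment, "key=value" entries

const out = cmd.output() // runs the command; returns its standard output
// Note: a Cmd cannot be reused after run()/output()/combinedOutput() (see above).
```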
+ * + * # Syntax + * + * The regular expression syntax understood by this package when parsing with the [Perl] flag is as follows. + * Parts of the syntax can be disabled by passing alternate flags to [Parse]. + * + * Single characters: + * + * ``` + * . any character, possibly including newline (flag s=true) + * [xyz] character class + * [^xyz] negated character class + * \d Perl character class + * \D negated Perl character class + * [[:alpha:]] ASCII character class + * [[:^alpha:]] negated ASCII character class + * \pN Unicode character class (one-letter name) + * \p{Greek} Unicode character class + * \PN negated Unicode character class (one-letter name) + * \P{Greek} negated Unicode character class + * ``` + * + * Composites: + * + * ``` + * xy x followed by y + * x|y x or y (prefer x) + * ``` + * + * Repetitions: + * + * ``` + * x* zero or more x, prefer more + * x+ one or more x, prefer more + * x? zero or one x, prefer one + * x{n,m} n or n+1 or ... or m x, prefer more + * x{n,} n or more x, prefer more + * x{n} exactly n x + * x*? zero or more x, prefer fewer + * x+? one or more x, prefer fewer + * x?? zero or one x, prefer zero + * x{n,m}? n or n+1 or ... or m x, prefer fewer + * x{n,}? n or more x, prefer fewer + * x{n}? exactly n x + * ``` + * + * Implementation restriction: The counting forms x{n,m}, x{n,}, and x{n} + * reject forms that create a minimum or maximum repetition count above 1000. + * Unlimited repetitions are not subject to this restriction. + * + * Grouping: + * + * ``` + * (re) numbered capturing group (submatch) + * (?Pre) named & numbered capturing group (submatch) + * (?re) named & numbered capturing group (submatch) + * (?:re) non-capturing group + * (?flags) set flags within current group; non-capturing + * (?flags:re) set flags during re; non-capturing + * + * Flag syntax is xyz (set) or -xyz (clear) or xy-z (set xy, clear z). The flags are: + * + * i case-insensitive (default false) + * m multi-line mode: ^ and $ match begin/end line in addition to begin/end text (default false) + * s let . match \n (default false) + * U ungreedy: swap meaning of x* and x*?, x+ and x+?, etc (default false) + * ``` + * + * Empty strings: + * + * ``` + * ^ at beginning of text or line (flag m=true) + * $ at end of text (like \z not \Z) or line (flag m=true) + * \A at beginning of text + * \b at ASCII word boundary (\w on one side and \W, \A, or \z on the other) + * \B not at ASCII word boundary + * \z at end of text + * ``` + * + * Escape sequences: + * + * ``` + * \a bell (== \007) + * \f form feed (== \014) + * \t horizontal tab (== \011) + * \n newline (== \012) + * \r carriage return (== \015) + * \v vertical tab character (== \013) + * \* literal *, for any punctuation character * + * \123 octal character code (up to three digits) + * \x7F hex character code (exactly two digits) + * \x{10FFFF} hex character code + * \Q...\E literal text ... even if ... 
has punctuation + * ``` + * + * Character class elements: + * + * ``` + * x single character + * A-Z character range (inclusive) + * \d Perl character class + * [:foo:] ASCII character class foo + * \p{Foo} Unicode character class Foo + * \pF Unicode character class F (one-letter name) + * ``` + * + * Named character classes as character class elements: + * + * ``` + * [\d] digits (== \d) + * [^\d] not digits (== \D) + * [\D] not digits (== \D) + * [^\D] not not digits (== \d) + * [[:name:]] named ASCII class inside character class (== [:name:]) + * [^[:name:]] named ASCII class inside negated character class (== [:^name:]) + * [\p{Name}] named Unicode property inside character class (== \p{Name}) + * [^\p{Name}] named Unicode property inside negated character class (== \P{Name}) + * ``` + * + * Perl character classes (all ASCII-only): + * + * ``` + * \d digits (== [0-9]) + * \D not digits (== [^0-9]) + * \s whitespace (== [\t\n\f\r ]) + * \S not whitespace (== [^\t\n\f\r ]) + * \w word characters (== [0-9A-Za-z_]) + * \W not word characters (== [^0-9A-Za-z_]) + * ``` + * + * ASCII character classes: + * + * ``` + * [[:alnum:]] alphanumeric (== [0-9A-Za-z]) + * [[:alpha:]] alphabetic (== [A-Za-z]) + * [[:ascii:]] ASCII (== [\x00-\x7F]) + * [[:blank:]] blank (== [\t ]) + * [[:cntrl:]] control (== [\x00-\x1F\x7F]) + * [[:digit:]] digits (== [0-9]) + * [[:graph:]] graphical (== [!-~] == [A-Za-z0-9!"#$%&'()*+,\-./:;<=>?@[\\\]^_`{|}~]) + * [[:lower:]] lower case (== [a-z]) + * [[:print:]] printable (== [ -~] == [ [:graph:]]) + * [[:punct:]] punctuation (== [!-/:-@[-`{-~]) + * [[:space:]] whitespace (== [\t\n\v\f\r ]) + * [[:upper:]] upper case (== [A-Z]) + * [[:word:]] word characters (== [0-9A-Za-z_]) + * [[:xdigit:]] hex digit (== [0-9A-Fa-f]) + * ``` + * + * Unicode character classes are those in [unicode.Categories] and [unicode.Scripts]. + */ +namespace syntax { + /** + * Flags control the behavior of the parser and record information about regexp context. + */ + interface Flags extends Number{} +} + +/** + * Package jwt is a Go implementation of JSON Web Tokens: http://self-issued.info/docs/draft-jones-json-web-token.html + * + * See README.md for more info. + */ +namespace jwt { + /** + * MapClaims is a claims type that uses the map[string]any for JSON + * decoding. This is the default claims type if you don't supply one + */ + interface MapClaims extends _TygojaDict{} + interface MapClaims { + /** + * GetExpirationTime implements the Claims interface. */ - cancel: () => void + getExpirationTime(): (NumericDate) + } + interface MapClaims { /** - * If WaitDelay is non-zero, it bounds the time spent waiting on two sources - * of unexpected delay in Wait: a child process that fails to exit after the - * associated Context is canceled, and a child process that exits but leaves - * its I/O pipes unclosed. - * - * The WaitDelay timer starts when either the associated Context is done or a - * call to Wait observes that the child process has exited, whichever occurs - * first. When the delay has elapsed, the command shuts down the child process - * and/or its I/O pipes. - * - * If the child process has failed to exit — perhaps because it ignored or - * failed to receive a shutdown signal from a Cancel function, or because no - * Cancel function was set — then it will be terminated using os.Process.Kill. - * - * Then, if the I/O pipes communicating with the child process are still open, - * those pipes are closed in order to unblock any goroutines currently blocked - * on Read or Write calls. 
- * - * If pipes are closed due to WaitDelay, no Cancel call has occurred, - * and the command has otherwise exited with a successful status, Wait and - * similar methods will return ErrWaitDelay instead of nil. - * - * If WaitDelay is zero (the default), I/O pipes will be read until EOF, - * which might not occur until orphaned subprocesses of the command have - * also closed their descriptors for the pipes. + * GetNotBefore implements the Claims interface. */ - waitDelay: time.Duration + getNotBefore(): (NumericDate) } - interface Cmd { + interface MapClaims { /** - * String returns a human-readable description of c. - * It is intended only for debugging. - * In particular, it is not suitable for use as input to a shell. - * The output of String may vary across Go releases. + * GetIssuedAt implements the Claims interface. */ - string(): string + getIssuedAt(): (NumericDate) } - interface Cmd { + interface MapClaims { /** - * Run starts the specified command and waits for it to complete. - * - * The returned error is nil if the command runs, has no problems - * copying stdin, stdout, and stderr, and exits with a zero exit - * status. - * - * If the command starts but does not complete successfully, the error is of - * type [*ExitError]. Other error types may be returned for other situations. - * - * If the calling goroutine has locked the operating system thread - * with [runtime.LockOSThread] and modified any inheritable OS-level - * thread state (for example, Linux or Plan 9 name spaces), the new - * process will inherit the caller's thread state. + * GetAudience implements the Claims interface. */ - run(): void + getAudience(): ClaimStrings } - interface Cmd { + interface MapClaims { /** - * Start starts the specified command but does not wait for it to complete. + * GetIssuer implements the Claims interface. + */ + getIssuer(): string + } + interface MapClaims { + /** + * GetSubject implements the Claims interface. + */ + getSubject(): string + } +} + +namespace hook { + /** + * Event implements [Resolver] and it is intended to be used as a base + * Hook event that you can embed in your custom typed event structs. + * + * Example: + * + * ``` + * type CustomEvent struct { + * hook.Event + * + * SomeField int + * } + * ``` + */ + interface Event { + } + interface Event { + /** + * Next calls the next hook handler. + */ + next(): void + } + /** + * Handler defines a single Hook handler. + * Multiple handlers can share the same id. + * If Id is not explicitly set it will be autogenerated by Hook.Add and Hook.AddHandler. + */ + interface Handler { + /** + * Func defines the handler function to execute. * - * If Start returns successfully, the c.Process field will be set. + * Note that users need to call e.Next() in order to proceed with + * the execution of the hook chain. + */ + func: (_arg0: T) => void + /** + * Id is the unique identifier of the handler. * - * After a successful call to Start the [Cmd.Wait] method must be called in - * order to release associated system resources. + * It could be used later to remove the handler from a hook via [Hook.Remove]. + * + * If missing, an autogenerated value will be assigned when adding + * the handler to a hook. */ - start(): void - } - interface Cmd { + id: string /** - * Wait waits for the command to exit and waits for any copying to - * stdin or copying from stdout or stderr to complete. + * Priority allows changing the default exec priority of the handler within a hook. * - * The command must have been started by [Cmd.Start]. 
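> Illustrative note: the `MapClaims` getters above map directly to the registered JWT claims. A hedged sketch, assuming a `MapClaims` value named `claims` has been produced by the token helpers elsewhere in the API (not shown in this excerpt).

```
console.log(claims.getIssuer())  // "iss" claim
console.log(claims.getSubject()) // "sub" claim
console.log(claims.getAudience()) // "aud" claim as ClaimStrings

const exp = claims.getExpirationTime() // "exp" claim as a NumericDate
// MapClaims is a plain dictionary type, so custom claims remain accessible as map entries.
```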
+ * If 0, the handler will be executed in the same order it was registered. + */ + priority: number + } + /** + * Hook defines a generic concurrent safe structure for managing event hooks. + * + * When using custom event it must embed the base [hook.Event]. + * + * Example: + * + * ``` + * type CustomEvent struct { + * hook.Event + * SomeField int + * } + * + * h := Hook[*CustomEvent]{} + * + * h.BindFunc(func(e *CustomEvent) error { + * println(e.SomeField) + * + * return e.Next() + * }) + * + * h.Trigger(&CustomEvent{ SomeField: 123 }) + * ``` + */ + interface Hook { + } + interface Hook { + /** + * Bind registers the provided handler to the current hooks queue. * - * The returned error is nil if the command runs, has no problems - * copying stdin, stdout, and stderr, and exits with a zero exit - * status. + * If handler.Id is empty it is updated with autogenerated value. * - * If the command fails to run or doesn't complete successfully, the - * error is of type [*ExitError]. Other error types may be - * returned for I/O problems. + * If a handler from the current hook list has Id matching handler.Id + * then the old handler is replaced with the new one. + */ + bind(handler: Handler): string + } + interface Hook { + /** + * BindFunc is similar to Bind but registers a new handler from just the provided function. * - * If any of c.Stdin, c.Stdout or c.Stderr are not an [*os.File], Wait also waits - * for the respective I/O loop copying to or from the process to complete. + * The registered handler is added with a default 0 priority and the id will be autogenerated. * - * Wait releases any resources associated with the [Cmd]. + * If you want to register a handler with custom priority or id use the [Hook.Bind] method. */ - wait(): void + bindFunc(fn: (e: T) => void): string } - interface Cmd { + interface Hook { /** - * Output runs the command and returns its standard output. - * Any returned error will usually be of type [*ExitError]. - * If c.Stderr was nil, Output populates [ExitError.Stderr]. + * Unbind removes one or many hook handler by their id. */ - output(): string|Array + unbind(...idsToRemove: string[]): void } - interface Cmd { + interface Hook { /** - * CombinedOutput runs the command and returns its combined standard - * output and standard error. + * UnbindAll removes all registered handlers. */ - combinedOutput(): string|Array + unbindAll(): void } - interface Cmd { + interface Hook { /** - * StdinPipe returns a pipe that will be connected to the command's - * standard input when the command starts. - * The pipe will be closed automatically after [Cmd.Wait] sees the command exit. - * A caller need only call Close to force the pipe to close sooner. - * For example, if the command being run will not exit until standard input - * is closed, the caller must close the pipe. + * Length returns to total number of registered hook handlers. */ - stdinPipe(): io.WriteCloser + length(): number } - interface Cmd { + interface Hook { /** - * StdoutPipe returns a pipe that will be connected to the command's - * standard output when the command starts. + * Trigger executes all registered hook handlers one by one + * with the specified event as an argument. * - * [Cmd.Wait] will close the pipe after seeing the command exit, so most callers - * need not close the pipe themselves. It is thus incorrect to call Wait - * before all reads from the pipe have completed. - * For the same reason, it is incorrect to call [Cmd.Run] when using StdoutPipe. - * See the example for idiomatic usage. 
+ * Optionally, this method allows also to register additional one off + * handler funcs that will be temporary appended to the handlers queue. + * + * NB! Each hook handler must call event.Next() in order the hook chain to proceed. */ - stdoutPipe(): io.ReadCloser + trigger(event: T, ...oneOffHandlerFuncs: ((_arg0: T) => void)[]): void } - interface Cmd { + /** + * TaggedHook defines a proxy hook which register handlers that are triggered only + * if the TaggedHook.tags are empty or includes at least one of the event data tag(s). + */ + type _sYVlIEJ = mainHook + interface TaggedHook extends _sYVlIEJ { + } + interface TaggedHook { /** - * StderrPipe returns a pipe that will be connected to the command's - * standard error when the command starts. + * CanTriggerOn checks if the current TaggedHook can be triggered with + * the provided event data tags. * - * [Cmd.Wait] will close the pipe after seeing the command exit, so most callers - * need not close the pipe themselves. It is thus incorrect to call Wait - * before all reads from the pipe have completed. - * For the same reason, it is incorrect to use [Cmd.Run] when using StderrPipe. - * See the StdoutPipe example for idiomatic usage. + * It returns always true if the hook doens't have any tags. */ - stderrPipe(): io.ReadCloser + canTriggerOn(tagsToCheck: Array): boolean } - interface Cmd { + interface TaggedHook { /** - * Environ returns a copy of the environment in which the command would be run - * as it is currently configured. + * Bind registers the provided handler to the current hooks queue. + * + * It is similar to [Hook.Bind] with the difference that the handler + * function is invoked only if the event data tags satisfy h.CanTriggerOn. */ - environ(): Array + bind(handler: Handler): string + } + interface TaggedHook { + /** + * BindFunc registers a new handler with the specified function. + * + * It is similar to [Hook.Bind] with the difference that the handler + * function is invoked only if the event data tags satisfy h.CanTriggerOn. + */ + bindFunc(fn: (e: T) => void): string } } @@ -19116,1500 +19066,28 @@ namespace mailer { } } -namespace subscriptions { - /** - * Broker defines a struct for managing subscriptions clients. - */ - interface Broker { - } - interface Broker { - /** - * Clients returns a shallow copy of all registered clients indexed - * with their connection id. - */ - clients(): _TygojaDict - } - interface Broker { - /** - * ChunkedClients splits the current clients into a chunked slice. - */ - chunkedClients(chunkSize: number): Array> - } - interface Broker { - /** - * TotalClients returns the total number of registered clients. - */ - totalClients(): number - } - interface Broker { - /** - * ClientById finds a registered client by its id. - * - * Returns non-nil error when client with clientId is not registered. - */ - clientById(clientId: string): Client - } - interface Broker { - /** - * Register adds a new client to the broker instance. - */ - register(client: Client): void - } - interface Broker { - /** - * Unregister removes a single client by its id and marks it as discarded. - * - * If client with clientId doesn't exist, this method does nothing. - */ - unregister(clientId: string): void - } - /** - * Client is an interface for a generic subscription client. - */ - interface Client { - [key:string]: any; - /** - * Id Returns the unique id of the client. - */ - id(): string - /** - * Channel returns the client's communication channel. - * - * NB! The channel shouldn't be used after calling Discard(). 
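> Illustrative note: a hedged sketch of the `Hook` binding flow described above. The hook instance `h` and its event type are assumptions; the excerpt does not show how concrete hooks are exposed to scripts.

```
const id = h.bindFunc((e) => {
    // ... handler logic ...
    e.next() // required for the hook chain to proceed (see the docs above)
})

console.log(h.length()) // total registered handlers
h.unbind(id)            // remove the handler again by its autogenerated id
```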
- */ - channel(): undefined - /** - * Subscriptions returns a shallow copy of the client subscriptions matching the prefixes. - * If no prefix is specified, returns all subscriptions. - */ - subscriptions(...prefixes: string[]): _TygojaDict - /** - * Subscribe subscribes the client to the provided subscriptions list. - * - * Each subscription can also have "options" (json serialized SubscriptionOptions) as query parameter. - * - * Example: - * - * ``` - * Subscribe( - * "subscriptionA", - * `subscriptionB?options={"query":{"a":1},"headers":{"x_token":"abc"}}`, - * ) - * ``` - */ - subscribe(...subs: string[]): void - /** - * Unsubscribe unsubscribes the client from the provided subscriptions list. - */ - unsubscribe(...subs: string[]): void - /** - * HasSubscription checks if the client is subscribed to `sub`. - */ - hasSubscription(sub: string): boolean - /** - * Set stores any value to the client's context. - */ - set(key: string, value: any): void - /** - * Unset removes a single value from the client's context. - */ - unset(key: string): void - /** - * Get retrieves the key value from the client's context. - */ - get(key: string): any - /** - * Discard marks the client as "discarded" (and closes its channel), - * meaning that it shouldn't be used anymore for sending new messages. - * - * It is safe to call Discard() multiple times. - */ - discard(): void - /** - * IsDiscarded indicates whether the client has been "discarded" - * and should no longer be used. - */ - isDiscarded(): boolean - /** - * Send sends the specified message to the client's channel (if not discarded). - */ - send(m: Message): void - } - /** - * Message defines a client's channel data. - */ - interface Message { - name: string - data: string|Array - } - interface Message { - /** - * WriteSSE writes the current message in a SSE format into the provided writer. - * - * For example, writing to a router.Event: - * - * ``` - * m := Message{Name: "users/create", Data: []byte{...}} - * m.Write(e.Response, "yourEventId") - * e.Flush() - * ``` - */ - writeSSE(w: io.Writer, eventId: string): void - } -} - /** - * Package cron implements a crontab-like service to execute and schedule - * repeative tasks/jobs. + * Package slog provides structured logging, + * in which log records include a message, + * a severity level, and various other attributes + * expressed as key-value pairs. * - * Example: + * It defines a type, [Logger], + * which provides several methods (such as [Logger.Info] and [Logger.Error]) + * for reporting events of interest. + * + * Each Logger is associated with a [Handler]. + * A Logger output method creates a [Record] from the method arguments + * and passes it to the Handler, which decides how to handle it. + * There is a default Logger accessible through top-level functions + * (such as [Info] and [Error]) that call the corresponding Logger methods. + * + * A log record consists of a time, a level, a message, and a set of key-value + * pairs, where the keys are strings and the values may be of any type. + * As an example, * * ``` - * c := cron.New() - * c.MustAdd("dailyReport", "0 0 * * *", func() { ... }) - * c.Start() - * ``` - */ -namespace cron { - /** - * Cron is a crontab-like struct for tasks/jobs scheduling. - */ - interface Cron { - } - interface Cron { - /** - * SetInterval changes the current cron tick interval - * (it usually should be >= 1 minute). - */ - setInterval(d: time.Duration): void - } - interface Cron { - /** - * SetTimezone changes the current cron tick timezone. 
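> Illustrative note: for the realtime `Broker`/`Client`/`Message` declarations above, a hedged broadcast sketch. The `broker` value, the plain-object `Message` shape accepted by the runtime, and iterating the clients dictionary with `Object.values` are assumptions here.

```
// Broadcast to every registered client subscribed to "users/create".
const msg = { name: "users/create", data: '{"id":"abc123"}' }

for (const client of Object.values(broker.clients())) {
    if (!client.isDiscarded() && client.hasSubscription("users/create")) {
        client.send(msg)
    }
}
```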
- */ - setTimezone(l: time.Location): void - } - interface Cron { - /** - * MustAdd is similar to Add() but panic on failure. - */ - mustAdd(jobId: string, cronExpr: string, run: () => void): void - } - interface Cron { - /** - * Add registers a single cron job. - * - * If there is already a job with the provided id, then the old job - * will be replaced with the new one. - * - * cronExpr is a regular cron expression, eg. "0 *\/3 * * *" (aka. at minute 0 past every 3rd hour). - * Check cron.NewSchedule() for the supported tokens. - */ - add(jobId: string, cronExpr: string, fn: () => void): void - } - interface Cron { - /** - * Remove removes a single cron job by its id. - */ - remove(jobId: string): void - } - interface Cron { - /** - * RemoveAll removes all registered cron jobs. - */ - removeAll(): void - } - interface Cron { - /** - * Total returns the current total number of registered cron jobs. - */ - total(): number - } - interface Cron { - /** - * Jobs returns a shallow copy of the currently registered cron jobs. - */ - jobs(): Array<(Job | undefined)> - } - interface Cron { - /** - * Stop stops the current cron ticker (if not already). - * - * You can resume the ticker by calling Start(). - */ - stop(): void - } - interface Cron { - /** - * Start starts the cron ticker. - * - * Calling Start() on already started cron will restart the ticker. - */ - start(): void - } - interface Cron { - /** - * HasStarted checks whether the current Cron ticker has been started. - */ - hasStarted(): boolean - } -} - -/** - * Package cobra is a commander providing a simple interface to create powerful modern CLI interfaces. - * In addition to providing an interface, Cobra simultaneously provides a controller to organize your application code. - */ -namespace cobra { - interface Command { - /** - * GenBashCompletion generates bash completion file and writes to the passed writer. - */ - genBashCompletion(w: io.Writer): void - } - interface Command { - /** - * GenBashCompletionFile generates bash completion file. - */ - genBashCompletionFile(filename: string): void - } - interface Command { - /** - * GenBashCompletionFileV2 generates Bash completion version 2. - */ - genBashCompletionFileV2(filename: string, includeDesc: boolean): void - } - interface Command { - /** - * GenBashCompletionV2 generates Bash completion file version 2 - * and writes it to the passed writer. - */ - genBashCompletionV2(w: io.Writer, includeDesc: boolean): void - } - // @ts-ignore - import flag = pflag - /** - * Command is just that, a command for your application. - * E.g. 'go run ...' - 'run' is the command. Cobra requires - * you to define the usage and description as part of your command - * definition to ensure usability. - */ - interface Command { - /** - * Use is the one-line usage message. - * Recommended syntax is as follows: - * ``` - * [ ] identifies an optional argument. Arguments that are not enclosed in brackets are required. - * ... indicates that you can specify multiple values for the previous argument. - * | indicates mutually exclusive information. You can use the argument to the left of the separator or the - * argument to the right of the separator. You cannot use both arguments in a single use of the command. - * { } delimits a set of mutually exclusive arguments when one of the arguments is required. If the arguments are - * optional, they are enclosed in brackets ([ ]). - * ``` - * Example: add [-F file | -D dir]... 
[-f format] profile - */ - use: string - /** - * Aliases is an array of aliases that can be used instead of the first word in Use. - */ - aliases: Array - /** - * SuggestFor is an array of command names for which this command will be suggested - - * similar to aliases but only suggests. - */ - suggestFor: Array - /** - * Short is the short description shown in the 'help' output. - */ - short: string - /** - * The group id under which this subcommand is grouped in the 'help' output of its parent. - */ - groupID: string - /** - * Long is the long message shown in the 'help ' output. - */ - long: string - /** - * Example is examples of how to use the command. - */ - example: string - /** - * ValidArgs is list of all valid non-flag arguments that are accepted in shell completions - */ - validArgs: Array - /** - * ValidArgsFunction is an optional function that provides valid non-flag arguments for shell completion. - * It is a dynamic version of using ValidArgs. - * Only one of ValidArgs and ValidArgsFunction can be used for a command. - */ - validArgsFunction: CompletionFunc - /** - * Expected arguments - */ - args: PositionalArgs - /** - * ArgAliases is List of aliases for ValidArgs. - * These are not suggested to the user in the shell completion, - * but accepted if entered manually. - */ - argAliases: Array - /** - * BashCompletionFunction is custom bash functions used by the legacy bash autocompletion generator. - * For portability with other shells, it is recommended to instead use ValidArgsFunction - */ - bashCompletionFunction: string - /** - * Deprecated defines, if this command is deprecated and should print this string when used. - */ - deprecated: string - /** - * Annotations are key/value pairs that can be used by applications to identify or - * group commands or set special options. - */ - annotations: _TygojaDict - /** - * Version defines the version for this command. If this value is non-empty and the command does not - * define a "version" flag, a "version" boolean flag will be added to the command and, if specified, - * will print content of the "Version" variable. A shorthand "v" flag will also be added if the - * command does not define one. - */ - version: string - /** - * The *Run functions are executed in the following order: - * ``` - * * PersistentPreRun() - * * PreRun() - * * Run() - * * PostRun() - * * PersistentPostRun() - * ``` - * All functions get the same args, the arguments after the command name. - * The *PreRun and *PostRun functions will only be executed if the Run function of the current - * command has been declared. - * - * PersistentPreRun: children of this command will inherit and execute. - */ - persistentPreRun: (cmd: Command, args: Array) => void - /** - * PersistentPreRunE: PersistentPreRun but returns an error. - */ - persistentPreRunE: (cmd: Command, args: Array) => void - /** - * PreRun: children of this command will not inherit. - */ - preRun: (cmd: Command, args: Array) => void - /** - * PreRunE: PreRun but returns an error. - */ - preRunE: (cmd: Command, args: Array) => void - /** - * Run: Typically the actual work function. Most commands will only implement this. - */ - run: (cmd: Command, args: Array) => void - /** - * RunE: Run but returns an error. - */ - runE: (cmd: Command, args: Array) => void - /** - * PostRun: run after the Run command. - */ - postRun: (cmd: Command, args: Array) => void - /** - * PostRunE: PostRun but returns an error. 
- */ - postRunE: (cmd: Command, args: Array) => void - /** - * PersistentPostRun: children of this command will inherit and execute after PostRun. - */ - persistentPostRun: (cmd: Command, args: Array) => void - /** - * PersistentPostRunE: PersistentPostRun but returns an error. - */ - persistentPostRunE: (cmd: Command, args: Array) => void - /** - * FParseErrWhitelist flag parse errors to be ignored - */ - fParseErrWhitelist: FParseErrWhitelist - /** - * CompletionOptions is a set of options to control the handling of shell completion - */ - completionOptions: CompletionOptions - /** - * TraverseChildren parses flags on all parents before executing child command. - */ - traverseChildren: boolean - /** - * Hidden defines, if this command is hidden and should NOT show up in the list of available commands. - */ - hidden: boolean - /** - * SilenceErrors is an option to quiet errors down stream. - */ - silenceErrors: boolean - /** - * SilenceUsage is an option to silence usage when an error occurs. - */ - silenceUsage: boolean - /** - * DisableFlagParsing disables the flag parsing. - * If this is true all flags will be passed to the command as arguments. - */ - disableFlagParsing: boolean - /** - * DisableAutoGenTag defines, if gen tag ("Auto generated by spf13/cobra...") - * will be printed by generating docs for this command. - */ - disableAutoGenTag: boolean - /** - * DisableFlagsInUseLine will disable the addition of [flags] to the usage - * line of a command when printing help or generating docs - */ - disableFlagsInUseLine: boolean - /** - * DisableSuggestions disables the suggestions based on Levenshtein distance - * that go along with 'unknown command' messages. - */ - disableSuggestions: boolean - /** - * SuggestionsMinimumDistance defines minimum levenshtein distance to display suggestions. - * Must be > 0. - */ - suggestionsMinimumDistance: number - } - interface Command { - /** - * Context returns underlying command context. If command was executed - * with ExecuteContext or the context was set with SetContext, the - * previously set context will be returned. Otherwise, nil is returned. - * - * Notice that a call to Execute and ExecuteC will replace a nil context of - * a command with a context.Background, so a background context will be - * returned by Context after one of these functions has been called. - */ - context(): context.Context - } - interface Command { - /** - * SetContext sets context for the command. This context will be overwritten by - * Command.ExecuteContext or Command.ExecuteContextC. - */ - setContext(ctx: context.Context): void - } - interface Command { - /** - * SetArgs sets arguments for the command. It is set to os.Args[1:] by default, if desired, can be overridden - * particularly useful when testing. - */ - setArgs(a: Array): void - } - interface Command { - /** - * SetOutput sets the destination for usage and error messages. - * If output is nil, os.Stderr is used. - * - * Deprecated: Use SetOut and/or SetErr instead - */ - setOutput(output: io.Writer): void - } - interface Command { - /** - * SetOut sets the destination for usage messages. - * If newOut is nil, os.Stdout is used. - */ - setOut(newOut: io.Writer): void - } - interface Command { - /** - * SetErr sets the destination for error messages. - * If newErr is nil, os.Stderr is used. - */ - setErr(newErr: io.Writer): void - } - interface Command { - /** - * SetIn sets the source for input data - * If newIn is nil, os.Stdin is used. 
- */ - setIn(newIn: io.Reader): void - } - interface Command { - /** - * SetUsageFunc sets usage function. Usage can be defined by application. - */ - setUsageFunc(f: (_arg0: Command) => void): void - } - interface Command { - /** - * SetUsageTemplate sets usage template. Can be defined by Application. - */ - setUsageTemplate(s: string): void - } - interface Command { - /** - * SetFlagErrorFunc sets a function to generate an error when flag parsing - * fails. - */ - setFlagErrorFunc(f: (_arg0: Command, _arg1: Error) => void): void - } - interface Command { - /** - * SetHelpFunc sets help function. Can be defined by Application. - */ - setHelpFunc(f: (_arg0: Command, _arg1: Array) => void): void - } - interface Command { - /** - * SetHelpCommand sets help command. - */ - setHelpCommand(cmd: Command): void - } - interface Command { - /** - * SetHelpCommandGroupID sets the group id of the help command. - */ - setHelpCommandGroupID(groupID: string): void - } - interface Command { - /** - * SetCompletionCommandGroupID sets the group id of the completion command. - */ - setCompletionCommandGroupID(groupID: string): void - } - interface Command { - /** - * SetHelpTemplate sets help template to be used. Application can use it to set custom template. - */ - setHelpTemplate(s: string): void - } - interface Command { - /** - * SetVersionTemplate sets version template to be used. Application can use it to set custom template. - */ - setVersionTemplate(s: string): void - } - interface Command { - /** - * SetErrPrefix sets error message prefix to be used. Application can use it to set custom prefix. - */ - setErrPrefix(s: string): void - } - interface Command { - /** - * SetGlobalNormalizationFunc sets a normalization function to all flag sets and also to child commands. - * The user should not have a cyclic dependency on commands. - */ - setGlobalNormalizationFunc(n: (f: any, name: string) => any): void - } - interface Command { - /** - * OutOrStdout returns output to stdout. - */ - outOrStdout(): io.Writer - } - interface Command { - /** - * OutOrStderr returns output to stderr - */ - outOrStderr(): io.Writer - } - interface Command { - /** - * ErrOrStderr returns output to stderr - */ - errOrStderr(): io.Writer - } - interface Command { - /** - * InOrStdin returns input to stdin - */ - inOrStdin(): io.Reader - } - interface Command { - /** - * UsageFunc returns either the function set by SetUsageFunc for this command - * or a parent, or it returns a default usage function. - */ - usageFunc(): (_arg0: Command) => void - } - interface Command { - /** - * Usage puts out the usage for the command. - * Used when a user provides invalid input. - * Can be defined by user by overriding UsageFunc. - */ - usage(): void - } - interface Command { - /** - * HelpFunc returns either the function set by SetHelpFunc for this command - * or a parent, or it returns a function with default help behavior. - */ - helpFunc(): (_arg0: Command, _arg1: Array) => void - } - interface Command { - /** - * Help puts out the help for the command. - * Used when a user calls help [command]. - * Can be defined by user by overriding HelpFunc. - */ - help(): void - } - interface Command { - /** - * UsageString returns usage string. - */ - usageString(): string - } - interface Command { - /** - * FlagErrorFunc returns either the function set by SetFlagErrorFunc for this - * command or a parent, or it returns a function which returns the original - * error. 
- */ - flagErrorFunc(): (_arg0: Command, _arg1: Error) => void - } - interface Command { - /** - * UsagePadding return padding for the usage. - */ - usagePadding(): number - } - interface Command { - /** - * CommandPathPadding return padding for the command path. - */ - commandPathPadding(): number - } - interface Command { - /** - * NamePadding returns padding for the name. - */ - namePadding(): number - } - interface Command { - /** - * UsageTemplate returns usage template for the command. - * This function is kept for backwards-compatibility reasons. - */ - usageTemplate(): string - } - interface Command { - /** - * HelpTemplate return help template for the command. - * This function is kept for backwards-compatibility reasons. - */ - helpTemplate(): string - } - interface Command { - /** - * VersionTemplate return version template for the command. - * This function is kept for backwards-compatibility reasons. - */ - versionTemplate(): string - } - interface Command { - /** - * ErrPrefix return error message prefix for the command - */ - errPrefix(): string - } - interface Command { - /** - * Find the target command given the args and command tree - * Meant to be run on the highest node. Only searches down. - */ - find(args: Array): [(Command), Array] - } - interface Command { - /** - * Traverse the command tree to find the command, and parse args for - * each parent. - */ - traverse(args: Array): [(Command), Array] - } - interface Command { - /** - * SuggestionsFor provides suggestions for the typedName. - */ - suggestionsFor(typedName: string): Array - } - interface Command { - /** - * VisitParents visits all parents of the command and invokes fn on each parent. - */ - visitParents(fn: (_arg0: Command) => void): void - } - interface Command { - /** - * Root finds root command. - */ - root(): (Command) - } - interface Command { - /** - * ArgsLenAtDash will return the length of c.Flags().Args at the moment - * when a -- was found during args parsing. - */ - argsLenAtDash(): number - } - interface Command { - /** - * ExecuteContext is the same as Execute(), but sets the ctx on the command. - * Retrieve ctx by calling cmd.Context() inside your *Run lifecycle or ValidArgs - * functions. - */ - executeContext(ctx: context.Context): void - } - interface Command { - /** - * Execute uses the args (os.Args[1:] by default) - * and run through the command tree finding appropriate matches - * for commands and then corresponding flags. - */ - execute(): void - } - interface Command { - /** - * ExecuteContextC is the same as ExecuteC(), but sets the ctx on the command. - * Retrieve ctx by calling cmd.Context() inside your *Run lifecycle or ValidArgs - * functions. - */ - executeContextC(ctx: context.Context): (Command) - } - interface Command { - /** - * ExecuteC executes the command. - */ - executeC(): (Command) - } - interface Command { - validateArgs(args: Array): void - } - interface Command { - /** - * ValidateRequiredFlags validates all required flags are present and returns an error otherwise - */ - validateRequiredFlags(): void - } - interface Command { - /** - * InitDefaultHelpFlag adds default help flag to c. - * It is called automatically by executing the c or by calling help and usage. - * If c already has help flag, it will do nothing. - */ - initDefaultHelpFlag(): void - } - interface Command { - /** - * InitDefaultVersionFlag adds default version flag to c. - * It is called automatically by executing the c. - * If c already has a version flag, it will do nothing. 
- * If c.Version is empty, it will do nothing. - */ - initDefaultVersionFlag(): void - } - interface Command { - /** - * InitDefaultHelpCmd adds default help command to c. - * It is called automatically by executing the c or by calling help and usage. - * If c already has help command or c has no subcommands, it will do nothing. - */ - initDefaultHelpCmd(): void - } - interface Command { - /** - * ResetCommands delete parent, subcommand and help command from c. - */ - resetCommands(): void - } - interface Command { - /** - * Commands returns a sorted slice of child commands. - */ - commands(): Array<(Command | undefined)> - } - interface Command { - /** - * AddCommand adds one or more commands to this parent command. - */ - addCommand(...cmds: (Command | undefined)[]): void - } - interface Command { - /** - * Groups returns a slice of child command groups. - */ - groups(): Array<(Group | undefined)> - } - interface Command { - /** - * AllChildCommandsHaveGroup returns if all subcommands are assigned to a group - */ - allChildCommandsHaveGroup(): boolean - } - interface Command { - /** - * ContainsGroup return if groupID exists in the list of command groups. - */ - containsGroup(groupID: string): boolean - } - interface Command { - /** - * AddGroup adds one or more command groups to this parent command. - */ - addGroup(...groups: (Group | undefined)[]): void - } - interface Command { - /** - * RemoveCommand removes one or more commands from a parent command. - */ - removeCommand(...cmds: (Command | undefined)[]): void - } - interface Command { - /** - * Print is a convenience method to Print to the defined output, fallback to Stderr if not set. - */ - print(...i: { - }[]): void - } - interface Command { - /** - * Println is a convenience method to Println to the defined output, fallback to Stderr if not set. - */ - println(...i: { - }[]): void - } - interface Command { - /** - * Printf is a convenience method to Printf to the defined output, fallback to Stderr if not set. - */ - printf(format: string, ...i: { - }[]): void - } - interface Command { - /** - * PrintErr is a convenience method to Print to the defined Err output, fallback to Stderr if not set. - */ - printErr(...i: { - }[]): void - } - interface Command { - /** - * PrintErrln is a convenience method to Println to the defined Err output, fallback to Stderr if not set. - */ - printErrln(...i: { - }[]): void - } - interface Command { - /** - * PrintErrf is a convenience method to Printf to the defined Err output, fallback to Stderr if not set. - */ - printErrf(format: string, ...i: { - }[]): void - } - interface Command { - /** - * CommandPath returns the full path to this command. - */ - commandPath(): string - } - interface Command { - /** - * DisplayName returns the name to display in help text. Returns command Name() - * If CommandDisplayNameAnnoation is not set - */ - displayName(): string - } - interface Command { - /** - * UseLine puts out the full usage for a given command (including parents). - */ - useLine(): string - } - interface Command { - /** - * DebugFlags used to determine which flags have been assigned to which commands - * and which persist. - */ - debugFlags(): void - } - interface Command { - /** - * Name returns the command's name: the first word in the use line. - */ - name(): string - } - interface Command { - /** - * HasAlias determines if a given string is an alias of the command. 
- */ - hasAlias(s: string): boolean - } - interface Command { - /** - * CalledAs returns the command name or alias that was used to invoke - * this command or an empty string if the command has not been called. - */ - calledAs(): string - } - interface Command { - /** - * NameAndAliases returns a list of the command name and all aliases - */ - nameAndAliases(): string - } - interface Command { - /** - * HasExample determines if the command has example. - */ - hasExample(): boolean - } - interface Command { - /** - * Runnable determines if the command is itself runnable. - */ - runnable(): boolean - } - interface Command { - /** - * HasSubCommands determines if the command has children commands. - */ - hasSubCommands(): boolean - } - interface Command { - /** - * IsAvailableCommand determines if a command is available as a non-help command - * (this includes all non deprecated/hidden commands). - */ - isAvailableCommand(): boolean - } - interface Command { - /** - * IsAdditionalHelpTopicCommand determines if a command is an additional - * help topic command; additional help topic command is determined by the - * fact that it is NOT runnable/hidden/deprecated, and has no sub commands that - * are runnable/hidden/deprecated. - * Concrete example: https://github.com/spf13/cobra/issues/393#issuecomment-282741924. - */ - isAdditionalHelpTopicCommand(): boolean - } - interface Command { - /** - * HasHelpSubCommands determines if a command has any available 'help' sub commands - * that need to be shown in the usage/help default template under 'additional help - * topics'. - */ - hasHelpSubCommands(): boolean - } - interface Command { - /** - * HasAvailableSubCommands determines if a command has available sub commands that - * need to be shown in the usage/help default template under 'available commands'. - */ - hasAvailableSubCommands(): boolean - } - interface Command { - /** - * HasParent determines if the command is a child command. - */ - hasParent(): boolean - } - interface Command { - /** - * GlobalNormalizationFunc returns the global normalization function or nil if it doesn't exist. - */ - globalNormalizationFunc(): (f: any, name: string) => any - } - interface Command { - /** - * Flags returns the complete FlagSet that applies - * to this command (local and persistent declared here and by all parents). - */ - flags(): (any) - } - interface Command { - /** - * LocalNonPersistentFlags are flags specific to this command which will NOT persist to subcommands. - * This function does not modify the flags of the current command, it's purpose is to return the current state. - */ - localNonPersistentFlags(): (any) - } - interface Command { - /** - * LocalFlags returns the local FlagSet specifically set in the current command. - * This function does not modify the flags of the current command, it's purpose is to return the current state. - */ - localFlags(): (any) - } - interface Command { - /** - * InheritedFlags returns all flags which were inherited from parent commands. - * This function does not modify the flags of the current command, it's purpose is to return the current state. - */ - inheritedFlags(): (any) - } - interface Command { - /** - * NonInheritedFlags returns all flags which were not inherited from parent commands. - * This function does not modify the flags of the current command, it's purpose is to return the current state. - */ - nonInheritedFlags(): (any) - } - interface Command { - /** - * PersistentFlags returns the persistent FlagSet specifically set in the current command. 
- */ - persistentFlags(): (any) - } - interface Command { - /** - * ResetFlags deletes all flags from command. - */ - resetFlags(): void - } - interface Command { - /** - * HasFlags checks if the command contains any flags (local plus persistent from the entire structure). - */ - hasFlags(): boolean - } - interface Command { - /** - * HasPersistentFlags checks if the command contains persistent flags. - */ - hasPersistentFlags(): boolean - } - interface Command { - /** - * HasLocalFlags checks if the command has flags specifically declared locally. - */ - hasLocalFlags(): boolean - } - interface Command { - /** - * HasInheritedFlags checks if the command has flags inherited from its parent command. - */ - hasInheritedFlags(): boolean - } - interface Command { - /** - * HasAvailableFlags checks if the command contains any flags (local plus persistent from the entire - * structure) which are not hidden or deprecated. - */ - hasAvailableFlags(): boolean - } - interface Command { - /** - * HasAvailablePersistentFlags checks if the command contains persistent flags which are not hidden or deprecated. - */ - hasAvailablePersistentFlags(): boolean - } - interface Command { - /** - * HasAvailableLocalFlags checks if the command has flags specifically declared locally which are not hidden - * or deprecated. - */ - hasAvailableLocalFlags(): boolean - } - interface Command { - /** - * HasAvailableInheritedFlags checks if the command has flags inherited from its parent command which are - * not hidden or deprecated. - */ - hasAvailableInheritedFlags(): boolean - } - interface Command { - /** - * Flag climbs up the command tree looking for matching flag. - */ - flag(name: string): (any) - } - interface Command { - /** - * ParseFlags parses persistent flag tree and local flags. - */ - parseFlags(args: Array): void - } - interface Command { - /** - * Parent returns a commands parent command. - */ - parent(): (Command) - } - interface Command { - /** - * RegisterFlagCompletionFunc should be called to register a function to provide completion for a flag. - * - * You can use pre-defined completion functions such as [FixedCompletions] or [NoFileCompletions], - * or you can define your own. - */ - registerFlagCompletionFunc(flagName: string, f: CompletionFunc): void - } - interface Command { - /** - * GetFlagCompletionFunc returns the completion function for the given flag of the command, if available. - */ - getFlagCompletionFunc(flagName: string): [CompletionFunc, boolean] - } - interface Command { - /** - * InitDefaultCompletionCmd adds a default 'completion' command to c. - * This function will do nothing if any of the following is true: - * 1- the feature has been explicitly disabled by the program, - * 2- c has no subcommands (to avoid creating one), - * 3- c already has a 'completion' command provided by the program. - */ - initDefaultCompletionCmd(...args: string[]): void - } - interface Command { - /** - * GenFishCompletion generates fish completion file and writes to the passed writer. - */ - genFishCompletion(w: io.Writer, includeDesc: boolean): void - } - interface Command { - /** - * GenFishCompletionFile generates fish completion file. - */ - genFishCompletionFile(filename: string, includeDesc: boolean): void - } - interface Command { - /** - * MarkFlagsRequiredTogether marks the given flags with annotations so that Cobra errors - * if the command is invoked with a subset (but not all) of the given flags. 
- */ - markFlagsRequiredTogether(...flagNames: string[]): void - } - interface Command { - /** - * MarkFlagsOneRequired marks the given flags with annotations so that Cobra errors - * if the command is invoked without at least one flag from the given set of flags. - */ - markFlagsOneRequired(...flagNames: string[]): void - } - interface Command { - /** - * MarkFlagsMutuallyExclusive marks the given flags with annotations so that Cobra errors - * if the command is invoked with more than one flag from the given set of flags. - */ - markFlagsMutuallyExclusive(...flagNames: string[]): void - } - interface Command { - /** - * ValidateFlagGroups validates the mutuallyExclusive/oneRequired/requiredAsGroup logic and returns the - * first error encountered. - */ - validateFlagGroups(): void - } - interface Command { - /** - * GenPowerShellCompletionFile generates powershell completion file without descriptions. - */ - genPowerShellCompletionFile(filename: string): void - } - interface Command { - /** - * GenPowerShellCompletion generates powershell completion file without descriptions - * and writes it to the passed writer. - */ - genPowerShellCompletion(w: io.Writer): void - } - interface Command { - /** - * GenPowerShellCompletionFileWithDesc generates powershell completion file with descriptions. - */ - genPowerShellCompletionFileWithDesc(filename: string): void - } - interface Command { - /** - * GenPowerShellCompletionWithDesc generates powershell completion file with descriptions - * and writes it to the passed writer. - */ - genPowerShellCompletionWithDesc(w: io.Writer): void - } - interface Command { - /** - * MarkFlagRequired instructs the various shell completion implementations to - * prioritize the named flag when performing completion, - * and causes your command to report an error if invoked without the flag. - */ - markFlagRequired(name: string): void - } - interface Command { - /** - * MarkPersistentFlagRequired instructs the various shell completion implementations to - * prioritize the named persistent flag when performing completion, - * and causes your command to report an error if invoked without the flag. - */ - markPersistentFlagRequired(name: string): void - } - interface Command { - /** - * MarkFlagFilename instructs the various shell completion implementations to - * limit completions for the named flag to the specified file extensions. - */ - markFlagFilename(name: string, ...extensions: string[]): void - } - interface Command { - /** - * MarkFlagCustom adds the BashCompCustom annotation to the named flag, if it exists. - * The bash completion script will call the bash function f for the flag. - * - * This will only work for bash completion. - * It is recommended to instead use c.RegisterFlagCompletionFunc(...) which allows - * to register a Go function which will work across all shells. - */ - markFlagCustom(name: string, f: string): void - } - interface Command { - /** - * MarkPersistentFlagFilename instructs the various shell completion - * implementations to limit completions for the named persistent flag to the - * specified file extensions. - */ - markPersistentFlagFilename(name: string, ...extensions: string[]): void - } - interface Command { - /** - * MarkFlagDirname instructs the various shell completion implementations to - * limit completions for the named flag to directory names. 
- */ - markFlagDirname(name: string): void - } - interface Command { - /** - * MarkPersistentFlagDirname instructs the various shell completion - * implementations to limit completions for the named persistent flag to - * directory names. - */ - markPersistentFlagDirname(name: string): void - } - interface Command { - /** - * GenZshCompletionFile generates zsh completion file including descriptions. - */ - genZshCompletionFile(filename: string): void - } - interface Command { - /** - * GenZshCompletion generates zsh completion file including descriptions - * and writes it to the passed writer. - */ - genZshCompletion(w: io.Writer): void - } - interface Command { - /** - * GenZshCompletionFileNoDesc generates zsh completion file without descriptions. - */ - genZshCompletionFileNoDesc(filename: string): void - } - interface Command { - /** - * GenZshCompletionNoDesc generates zsh completion file without descriptions - * and writes it to the passed writer. - */ - genZshCompletionNoDesc(w: io.Writer): void - } - interface Command { - /** - * MarkZshCompPositionalArgumentFile only worked for zsh and its behavior was - * not consistent with Bash completion. It has therefore been disabled. - * Instead, when no other completion is specified, file completion is done by - * default for every argument. One can disable file completion on a per-argument - * basis by using ValidArgsFunction and ShellCompDirectiveNoFileComp. - * To achieve file extension filtering, one can use ValidArgsFunction and - * ShellCompDirectiveFilterFileExt. - * - * Deprecated - */ - markZshCompPositionalArgumentFile(argPosition: number, ...patterns: string[]): void - } - interface Command { - /** - * MarkZshCompPositionalArgumentWords only worked for zsh. It has therefore - * been disabled. - * To achieve the same behavior across all shells, one can use - * ValidArgs (for the first argument only) or ValidArgsFunction for - * any argument (can include the first one also). - * - * Deprecated - */ - markZshCompPositionalArgumentWords(argPosition: number, ...words: string[]): void - } -} - -namespace auth { - /** - * Provider defines a common interface for an OAuth2 client. - */ - interface Provider { - [key:string]: any; - /** - * Context returns the context associated with the provider (if any). - */ - context(): context.Context - /** - * SetContext assigns the specified context to the current provider. - */ - setContext(ctx: context.Context): void - /** - * PKCE indicates whether the provider can use the PKCE flow. - */ - pkce(): boolean - /** - * SetPKCE toggles the state whether the provider can use the PKCE flow or not. - */ - setPKCE(enable: boolean): void - /** - * DisplayName usually returns provider name as it is officially written - * and it could be used directly in the UI. - */ - displayName(): string - /** - * SetDisplayName sets the provider's display name. - */ - setDisplayName(displayName: string): void - /** - * Scopes returns the provider access permissions that will be requested. - */ - scopes(): Array - /** - * SetScopes sets the provider access permissions that will be requested later. - */ - setScopes(scopes: Array): void - /** - * ClientId returns the provider client's app ID. - */ - clientId(): string - /** - * SetClientId sets the provider client's ID. - */ - setClientId(clientId: string): void - /** - * ClientSecret returns the provider client's app secret. - */ - clientSecret(): string - /** - * SetClientSecret sets the provider client's app secret. 
- */ - setClientSecret(secret: string): void - /** - * RedirectURL returns the end address to redirect the user - * going through the OAuth flow. - */ - redirectURL(): string - /** - * SetRedirectURL sets the provider's RedirectURL. - */ - setRedirectURL(url: string): void - /** - * AuthURL returns the provider's authorization service url. - */ - authURL(): string - /** - * SetAuthURL sets the provider's AuthURL. - */ - setAuthURL(url: string): void - /** - * TokenURL returns the provider's token exchange service url. - */ - tokenURL(): string - /** - * SetTokenURL sets the provider's TokenURL. - */ - setTokenURL(url: string): void - /** - * UserInfoURL returns the provider's user info api url. - */ - userInfoURL(): string - /** - * SetUserInfoURL sets the provider's UserInfoURL. - */ - setUserInfoURL(url: string): void - /** - * Extra returns a shallow copy of any custom config data - * that the provider may be need. - */ - extra(): _TygojaDict - /** - * SetExtra updates the provider's custom config data. - */ - setExtra(data: _TygojaDict): void - /** - * Client returns an http client using the provided token. - */ - client(token: oauth2.Token): (any) - /** - * BuildAuthURL returns a URL to the provider's consent page - * that asks for permissions for the required scopes explicitly. - */ - buildAuthURL(state: string, ...opts: oauth2.AuthCodeOption[]): string - /** - * FetchToken converts an authorization code to token. - */ - fetchToken(code: string, ...opts: oauth2.AuthCodeOption[]): (oauth2.Token) - /** - * FetchRawUserInfo requests and marshalizes into `result` the - * the OAuth user api response. - */ - fetchRawUserInfo(token: oauth2.Token): string|Array - /** - * FetchAuthUser is similar to FetchRawUserInfo, but normalizes and - * marshalizes the user api response into a standardized AuthUser struct. - */ - fetchAuthUser(token: oauth2.Token): (AuthUser) - } - /** - * AuthUser defines a standardized OAuth2 user data structure. - */ - interface AuthUser { - expiry: types.DateTime - rawUser: _TygojaDict - id: string - name: string - username: string - email: string - avatarURL: string - accessToken: string - refreshToken: string - /** - * @todo - * deprecated: use AvatarURL instead - * AvatarUrl will be removed after dropping v0.22 support - */ - avatarUrl: string - } - interface AuthUser { - /** - * MarshalJSON implements the [json.Marshaler] interface. - * - * @todo remove after dropping v0.22 support - */ - marshalJSON(): string|Array - } -} - -/** - * Package slog provides structured logging, - * in which log records include a message, - * a severity level, and various other attributes - * expressed as key-value pairs. - * - * It defines a type, [Logger], - * which provides several methods (such as [Logger.Info] and [Logger.Error]) - * for reporting events of interest. - * - * Each Logger is associated with a [Handler]. - * A Logger output method creates a [Record] from the method arguments - * and passes it to the Handler, which decides how to handle it. - * There is a default Logger accessible through top-level functions - * (such as [Info] and [Error]) that call the corresponding Logger methods. - * - * A log record consists of a time, a level, a message, and a set of key-value - * pairs, where the keys are strings and the values may be of any type. 
- * As an example, - * - * ``` - * slog.Info("hello", "count", 3) + * slog.Info("hello", "count", 3) * ``` * * creates a record containing the time of the call, @@ -20961,124 +19439,2096 @@ namespace auth { * * For a guide to writing a custom handler, see https://golang.org/s/slog-handler-guide. */ -namespace slog { +namespace slog { + // @ts-ignore + import loginternal = internal + /** + * A Logger records structured information about each call to its + * Log, Debug, Info, Warn, and Error methods. + * For each call, it creates a [Record] and passes it to a [Handler]. + * + * To create a new Logger, call [New] or a Logger method + * that begins "With". + */ + interface Logger { + } + interface Logger { + /** + * Handler returns l's Handler. + */ + handler(): Handler + } + interface Logger { + /** + * With returns a Logger that includes the given attributes + * in each output operation. Arguments are converted to + * attributes as if by [Logger.Log]. + */ + with(...args: any[]): (Logger) + } + interface Logger { + /** + * WithGroup returns a Logger that starts a group, if name is non-empty. + * The keys of all attributes added to the Logger will be qualified by the given + * name. (How that qualification happens depends on the [Handler.WithGroup] + * method of the Logger's Handler.) + * + * If name is empty, WithGroup returns the receiver. + */ + withGroup(name: string): (Logger) + } + interface Logger { + /** + * Enabled reports whether l emits log records at the given context and level. + */ + enabled(ctx: context.Context, level: Level): boolean + } + interface Logger { + /** + * Log emits a log record with the current time and the given level and message. + * The Record's Attrs consist of the Logger's attributes followed by + * the Attrs specified by args. + * + * The attribute arguments are processed as follows: + * ``` + * - If an argument is an Attr, it is used as is. + * - If an argument is a string and this is not the last argument, + * the following argument is treated as the value and the two are combined + * into an Attr. + * - Otherwise, the argument is treated as a value with key "!BADKEY". + * ``` + */ + log(ctx: context.Context, level: Level, msg: string, ...args: any[]): void + } + interface Logger { + /** + * LogAttrs is a more efficient version of [Logger.Log] that accepts only Attrs. + */ + logAttrs(ctx: context.Context, level: Level, msg: string, ...attrs: Attr[]): void + } + interface Logger { + /** + * Debug logs at [LevelDebug]. + */ + debug(msg: string, ...args: any[]): void + } + interface Logger { + /** + * DebugContext logs at [LevelDebug] with the given context. + */ + debugContext(ctx: context.Context, msg: string, ...args: any[]): void + } + interface Logger { + /** + * Info logs at [LevelInfo]. + */ + info(msg: string, ...args: any[]): void + } + interface Logger { + /** + * InfoContext logs at [LevelInfo] with the given context. + */ + infoContext(ctx: context.Context, msg: string, ...args: any[]): void + } + interface Logger { + /** + * Warn logs at [LevelWarn]. + */ + warn(msg: string, ...args: any[]): void + } + interface Logger { + /** + * WarnContext logs at [LevelWarn] with the given context. + */ + warnContext(ctx: context.Context, msg: string, ...args: any[]): void + } + interface Logger { + /** + * Error logs at [LevelError]. + */ + error(msg: string, ...args: any[]): void + } + interface Logger { + /** + * ErrorContext logs at [LevelError] with the given context. 
+ */ + errorContext(ctx: context.Context, msg: string, ...args: any[]): void + } +} + +namespace subscriptions { + /** + * Broker defines a struct for managing subscriptions clients. + */ + interface Broker { + } + interface Broker { + /** + * Clients returns a shallow copy of all registered clients indexed + * with their connection id. + */ + clients(): _TygojaDict + } + interface Broker { + /** + * ChunkedClients splits the current clients into a chunked slice. + */ + chunkedClients(chunkSize: number): Array> + } + interface Broker { + /** + * TotalClients returns the total number of registered clients. + */ + totalClients(): number + } + interface Broker { + /** + * ClientById finds a registered client by its id. + * + * Returns non-nil error when client with clientId is not registered. + */ + clientById(clientId: string): Client + } + interface Broker { + /** + * Register adds a new client to the broker instance. + */ + register(client: Client): void + } + interface Broker { + /** + * Unregister removes a single client by its id and marks it as discarded. + * + * If client with clientId doesn't exist, this method does nothing. + */ + unregister(clientId: string): void + } + /** + * Client is an interface for a generic subscription client. + */ + interface Client { + [key:string]: any; + /** + * Id Returns the unique id of the client. + */ + id(): string + /** + * Channel returns the client's communication channel. + * + * NB! The channel shouldn't be used after calling Discard(). + */ + channel(): undefined + /** + * Subscriptions returns a shallow copy of the client subscriptions matching the prefixes. + * If no prefix is specified, returns all subscriptions. + */ + subscriptions(...prefixes: string[]): _TygojaDict + /** + * Subscribe subscribes the client to the provided subscriptions list. + * + * Each subscription can also have "options" (json serialized SubscriptionOptions) as query parameter. + * + * Example: + * + * ``` + * Subscribe( + * "subscriptionA", + * `subscriptionB?options={"query":{"a":1},"headers":{"x_token":"abc"}}`, + * ) + * ``` + */ + subscribe(...subs: string[]): void + /** + * Unsubscribe unsubscribes the client from the provided subscriptions list. + */ + unsubscribe(...subs: string[]): void + /** + * HasSubscription checks if the client is subscribed to `sub`. + */ + hasSubscription(sub: string): boolean + /** + * Set stores any value to the client's context. + */ + set(key: string, value: any): void + /** + * Unset removes a single value from the client's context. + */ + unset(key: string): void + /** + * Get retrieves the key value from the client's context. + */ + get(key: string): any + /** + * Discard marks the client as "discarded" (and closes its channel), + * meaning that it shouldn't be used anymore for sending new messages. + * + * It is safe to call Discard() multiple times. + */ + discard(): void + /** + * IsDiscarded indicates whether the client has been "discarded" + * and should no longer be used. + */ + isDiscarded(): boolean + /** + * Send sends the specified message to the client's channel (if not discarded). + */ + send(m: Message): void + } + /** + * Message defines a client's channel data. + */ + interface Message { + name: string + data: string|Array + } + interface Message { + /** + * WriteSSE writes the current message in a SSE format into the provided writer. 
+ * + * For example, writing to a router.Event: + * + * ``` + * m := Message{Name: "users/create", Data: []byte{...}} + * m.Write(e.Response, "yourEventId") + * e.Flush() + * ``` + */ + writeSSE(w: io.Writer, eventId: string): void + } +} + +/** + * Package cobra is a commander providing a simple interface to create powerful modern CLI interfaces. + * In addition to providing an interface, Cobra simultaneously provides a controller to organize your application code. + */ +namespace cobra { + interface Command { + /** + * GenBashCompletion generates bash completion file and writes to the passed writer. + */ + genBashCompletion(w: io.Writer): void + } + interface Command { + /** + * GenBashCompletionFile generates bash completion file. + */ + genBashCompletionFile(filename: string): void + } + interface Command { + /** + * GenBashCompletionFileV2 generates Bash completion version 2. + */ + genBashCompletionFileV2(filename: string, includeDesc: boolean): void + } + interface Command { + /** + * GenBashCompletionV2 generates Bash completion file version 2 + * and writes it to the passed writer. + */ + genBashCompletionV2(w: io.Writer, includeDesc: boolean): void + } + // @ts-ignore + import flag = pflag + /** + * Command is just that, a command for your application. + * E.g. 'go run ...' - 'run' is the command. Cobra requires + * you to define the usage and description as part of your command + * definition to ensure usability. + */ + interface Command { + /** + * Use is the one-line usage message. + * Recommended syntax is as follows: + * ``` + * [ ] identifies an optional argument. Arguments that are not enclosed in brackets are required. + * ... indicates that you can specify multiple values for the previous argument. + * | indicates mutually exclusive information. You can use the argument to the left of the separator or the + * argument to the right of the separator. You cannot use both arguments in a single use of the command. + * { } delimits a set of mutually exclusive arguments when one of the arguments is required. If the arguments are + * optional, they are enclosed in brackets ([ ]). + * ``` + * Example: add [-F file | -D dir]... [-f format] profile + */ + use: string + /** + * Aliases is an array of aliases that can be used instead of the first word in Use. + */ + aliases: Array + /** + * SuggestFor is an array of command names for which this command will be suggested - + * similar to aliases but only suggests. + */ + suggestFor: Array + /** + * Short is the short description shown in the 'help' output. + */ + short: string + /** + * The group id under which this subcommand is grouped in the 'help' output of its parent. + */ + groupID: string + /** + * Long is the long message shown in the 'help ' output. + */ + long: string + /** + * Example is examples of how to use the command. + */ + example: string + /** + * ValidArgs is list of all valid non-flag arguments that are accepted in shell completions + */ + validArgs: Array + /** + * ValidArgsFunction is an optional function that provides valid non-flag arguments for shell completion. + * It is a dynamic version of using ValidArgs. + * Only one of ValidArgs and ValidArgsFunction can be used for a command. + */ + validArgsFunction: CompletionFunc + /** + * Expected arguments + */ + args: PositionalArgs + /** + * ArgAliases is List of aliases for ValidArgs. + * These are not suggested to the user in the shell completion, + * but accepted if entered manually. 
+ */ + argAliases: Array + /** + * BashCompletionFunction is custom bash functions used by the legacy bash autocompletion generator. + * For portability with other shells, it is recommended to instead use ValidArgsFunction + */ + bashCompletionFunction: string + /** + * Deprecated defines, if this command is deprecated and should print this string when used. + */ + deprecated: string + /** + * Annotations are key/value pairs that can be used by applications to identify or + * group commands or set special options. + */ + annotations: _TygojaDict + /** + * Version defines the version for this command. If this value is non-empty and the command does not + * define a "version" flag, a "version" boolean flag will be added to the command and, if specified, + * will print content of the "Version" variable. A shorthand "v" flag will also be added if the + * command does not define one. + */ + version: string + /** + * The *Run functions are executed in the following order: + * ``` + * * PersistentPreRun() + * * PreRun() + * * Run() + * * PostRun() + * * PersistentPostRun() + * ``` + * All functions get the same args, the arguments after the command name. + * The *PreRun and *PostRun functions will only be executed if the Run function of the current + * command has been declared. + * + * PersistentPreRun: children of this command will inherit and execute. + */ + persistentPreRun: (cmd: Command, args: Array) => void + /** + * PersistentPreRunE: PersistentPreRun but returns an error. + */ + persistentPreRunE: (cmd: Command, args: Array) => void + /** + * PreRun: children of this command will not inherit. + */ + preRun: (cmd: Command, args: Array) => void + /** + * PreRunE: PreRun but returns an error. + */ + preRunE: (cmd: Command, args: Array) => void + /** + * Run: Typically the actual work function. Most commands will only implement this. + */ + run: (cmd: Command, args: Array) => void + /** + * RunE: Run but returns an error. + */ + runE: (cmd: Command, args: Array) => void + /** + * PostRun: run after the Run command. + */ + postRun: (cmd: Command, args: Array) => void + /** + * PostRunE: PostRun but returns an error. + */ + postRunE: (cmd: Command, args: Array) => void + /** + * PersistentPostRun: children of this command will inherit and execute after PostRun. + */ + persistentPostRun: (cmd: Command, args: Array) => void + /** + * PersistentPostRunE: PersistentPostRun but returns an error. + */ + persistentPostRunE: (cmd: Command, args: Array) => void + /** + * FParseErrWhitelist flag parse errors to be ignored + */ + fParseErrWhitelist: FParseErrWhitelist + /** + * CompletionOptions is a set of options to control the handling of shell completion + */ + completionOptions: CompletionOptions + /** + * TraverseChildren parses flags on all parents before executing child command. + */ + traverseChildren: boolean + /** + * Hidden defines, if this command is hidden and should NOT show up in the list of available commands. + */ + hidden: boolean + /** + * SilenceErrors is an option to quiet errors down stream. + */ + silenceErrors: boolean + /** + * SilenceUsage is an option to silence usage when an error occurs. + */ + silenceUsage: boolean + /** + * DisableFlagParsing disables the flag parsing. + * If this is true all flags will be passed to the command as arguments. + */ + disableFlagParsing: boolean + /** + * DisableAutoGenTag defines, if gen tag ("Auto generated by spf13/cobra...") + * will be printed by generating docs for this command. 
+ */ + disableAutoGenTag: boolean + /** + * DisableFlagsInUseLine will disable the addition of [flags] to the usage + * line of a command when printing help or generating docs + */ + disableFlagsInUseLine: boolean + /** + * DisableSuggestions disables the suggestions based on Levenshtein distance + * that go along with 'unknown command' messages. + */ + disableSuggestions: boolean + /** + * SuggestionsMinimumDistance defines minimum levenshtein distance to display suggestions. + * Must be > 0. + */ + suggestionsMinimumDistance: number + } + interface Command { + /** + * Context returns underlying command context. If command was executed + * with ExecuteContext or the context was set with SetContext, the + * previously set context will be returned. Otherwise, nil is returned. + * + * Notice that a call to Execute and ExecuteC will replace a nil context of + * a command with a context.Background, so a background context will be + * returned by Context after one of these functions has been called. + */ + context(): context.Context + } + interface Command { + /** + * SetContext sets context for the command. This context will be overwritten by + * Command.ExecuteContext or Command.ExecuteContextC. + */ + setContext(ctx: context.Context): void + } + interface Command { + /** + * SetArgs sets arguments for the command. It is set to os.Args[1:] by default, if desired, can be overridden + * particularly useful when testing. + */ + setArgs(a: Array): void + } + interface Command { + /** + * SetOutput sets the destination for usage and error messages. + * If output is nil, os.Stderr is used. + * + * Deprecated: Use SetOut and/or SetErr instead + */ + setOutput(output: io.Writer): void + } + interface Command { + /** + * SetOut sets the destination for usage messages. + * If newOut is nil, os.Stdout is used. + */ + setOut(newOut: io.Writer): void + } + interface Command { + /** + * SetErr sets the destination for error messages. + * If newErr is nil, os.Stderr is used. + */ + setErr(newErr: io.Writer): void + } + interface Command { + /** + * SetIn sets the source for input data + * If newIn is nil, os.Stdin is used. + */ + setIn(newIn: io.Reader): void + } + interface Command { + /** + * SetUsageFunc sets usage function. Usage can be defined by application. + */ + setUsageFunc(f: (_arg0: Command) => void): void + } + interface Command { + /** + * SetUsageTemplate sets usage template. Can be defined by Application. + */ + setUsageTemplate(s: string): void + } + interface Command { + /** + * SetFlagErrorFunc sets a function to generate an error when flag parsing + * fails. + */ + setFlagErrorFunc(f: (_arg0: Command, _arg1: Error) => void): void + } + interface Command { + /** + * SetHelpFunc sets help function. Can be defined by Application. + */ + setHelpFunc(f: (_arg0: Command, _arg1: Array) => void): void + } + interface Command { + /** + * SetHelpCommand sets help command. + */ + setHelpCommand(cmd: Command): void + } + interface Command { + /** + * SetHelpCommandGroupID sets the group id of the help command. + */ + setHelpCommandGroupID(groupID: string): void + } + interface Command { + /** + * SetCompletionCommandGroupID sets the group id of the completion command. + */ + setCompletionCommandGroupID(groupID: string): void + } + interface Command { + /** + * SetHelpTemplate sets help template to be used. Application can use it to set custom template. + */ + setHelpTemplate(s: string): void + } + interface Command { + /** + * SetVersionTemplate sets version template to be used. 
Application can use it to set custom template. + */ + setVersionTemplate(s: string): void + } + interface Command { + /** + * SetErrPrefix sets error message prefix to be used. Application can use it to set custom prefix. + */ + setErrPrefix(s: string): void + } + interface Command { + /** + * SetGlobalNormalizationFunc sets a normalization function to all flag sets and also to child commands. + * The user should not have a cyclic dependency on commands. + */ + setGlobalNormalizationFunc(n: (f: any, name: string) => any): void + } + interface Command { + /** + * OutOrStdout returns output to stdout. + */ + outOrStdout(): io.Writer + } + interface Command { + /** + * OutOrStderr returns output to stderr + */ + outOrStderr(): io.Writer + } + interface Command { + /** + * ErrOrStderr returns output to stderr + */ + errOrStderr(): io.Writer + } + interface Command { + /** + * InOrStdin returns input to stdin + */ + inOrStdin(): io.Reader + } + interface Command { + /** + * UsageFunc returns either the function set by SetUsageFunc for this command + * or a parent, or it returns a default usage function. + */ + usageFunc(): (_arg0: Command) => void + } + interface Command { + /** + * Usage puts out the usage for the command. + * Used when a user provides invalid input. + * Can be defined by user by overriding UsageFunc. + */ + usage(): void + } + interface Command { + /** + * HelpFunc returns either the function set by SetHelpFunc for this command + * or a parent, or it returns a function with default help behavior. + */ + helpFunc(): (_arg0: Command, _arg1: Array) => void + } + interface Command { + /** + * Help puts out the help for the command. + * Used when a user calls help [command]. + * Can be defined by user by overriding HelpFunc. + */ + help(): void + } + interface Command { + /** + * UsageString returns usage string. + */ + usageString(): string + } + interface Command { + /** + * FlagErrorFunc returns either the function set by SetFlagErrorFunc for this + * command or a parent, or it returns a function which returns the original + * error. + */ + flagErrorFunc(): (_arg0: Command, _arg1: Error) => void + } + interface Command { + /** + * UsagePadding return padding for the usage. + */ + usagePadding(): number + } + interface Command { + /** + * CommandPathPadding return padding for the command path. + */ + commandPathPadding(): number + } + interface Command { + /** + * NamePadding returns padding for the name. + */ + namePadding(): number + } + interface Command { + /** + * UsageTemplate returns usage template for the command. + * This function is kept for backwards-compatibility reasons. + */ + usageTemplate(): string + } + interface Command { + /** + * HelpTemplate return help template for the command. + * This function is kept for backwards-compatibility reasons. + */ + helpTemplate(): string + } + interface Command { + /** + * VersionTemplate return version template for the command. + * This function is kept for backwards-compatibility reasons. + */ + versionTemplate(): string + } + interface Command { + /** + * ErrPrefix return error message prefix for the command + */ + errPrefix(): string + } + interface Command { + /** + * Find the target command given the args and command tree + * Meant to be run on the highest node. Only searches down. + */ + find(args: Array): [(Command), Array] + } + interface Command { + /** + * Traverse the command tree to find the command, and parse args for + * each parent. 
+ */ + traverse(args: Array): [(Command), Array] + } + interface Command { + /** + * SuggestionsFor provides suggestions for the typedName. + */ + suggestionsFor(typedName: string): Array + } + interface Command { + /** + * VisitParents visits all parents of the command and invokes fn on each parent. + */ + visitParents(fn: (_arg0: Command) => void): void + } + interface Command { + /** + * Root finds root command. + */ + root(): (Command) + } + interface Command { + /** + * ArgsLenAtDash will return the length of c.Flags().Args at the moment + * when a -- was found during args parsing. + */ + argsLenAtDash(): number + } + interface Command { + /** + * ExecuteContext is the same as Execute(), but sets the ctx on the command. + * Retrieve ctx by calling cmd.Context() inside your *Run lifecycle or ValidArgs + * functions. + */ + executeContext(ctx: context.Context): void + } + interface Command { + /** + * Execute uses the args (os.Args[1:] by default) + * and run through the command tree finding appropriate matches + * for commands and then corresponding flags. + */ + execute(): void + } + interface Command { + /** + * ExecuteContextC is the same as ExecuteC(), but sets the ctx on the command. + * Retrieve ctx by calling cmd.Context() inside your *Run lifecycle or ValidArgs + * functions. + */ + executeContextC(ctx: context.Context): (Command) + } + interface Command { + /** + * ExecuteC executes the command. + */ + executeC(): (Command) + } + interface Command { + validateArgs(args: Array): void + } + interface Command { + /** + * ValidateRequiredFlags validates all required flags are present and returns an error otherwise + */ + validateRequiredFlags(): void + } + interface Command { + /** + * InitDefaultHelpFlag adds default help flag to c. + * It is called automatically by executing the c or by calling help and usage. + * If c already has help flag, it will do nothing. + */ + initDefaultHelpFlag(): void + } + interface Command { + /** + * InitDefaultVersionFlag adds default version flag to c. + * It is called automatically by executing the c. + * If c already has a version flag, it will do nothing. + * If c.Version is empty, it will do nothing. + */ + initDefaultVersionFlag(): void + } + interface Command { + /** + * InitDefaultHelpCmd adds default help command to c. + * It is called automatically by executing the c or by calling help and usage. + * If c already has help command or c has no subcommands, it will do nothing. + */ + initDefaultHelpCmd(): void + } + interface Command { + /** + * ResetCommands delete parent, subcommand and help command from c. + */ + resetCommands(): void + } + interface Command { + /** + * Commands returns a sorted slice of child commands. + */ + commands(): Array<(Command | undefined)> + } + interface Command { + /** + * AddCommand adds one or more commands to this parent command. + */ + addCommand(...cmds: (Command | undefined)[]): void + } + interface Command { + /** + * Groups returns a slice of child command groups. + */ + groups(): Array<(Group | undefined)> + } + interface Command { + /** + * AllChildCommandsHaveGroup returns if all subcommands are assigned to a group + */ + allChildCommandsHaveGroup(): boolean + } + interface Command { + /** + * ContainsGroup return if groupID exists in the list of command groups. + */ + containsGroup(groupID: string): boolean + } + interface Command { + /** + * AddGroup adds one or more command groups to this parent command. 
+ */ + addGroup(...groups: (Group | undefined)[]): void + } + interface Command { + /** + * RemoveCommand removes one or more commands from a parent command. + */ + removeCommand(...cmds: (Command | undefined)[]): void + } + interface Command { + /** + * Print is a convenience method to Print to the defined output, fallback to Stderr if not set. + */ + print(...i: { + }[]): void + } + interface Command { + /** + * Println is a convenience method to Println to the defined output, fallback to Stderr if not set. + */ + println(...i: { + }[]): void + } + interface Command { + /** + * Printf is a convenience method to Printf to the defined output, fallback to Stderr if not set. + */ + printf(format: string, ...i: { + }[]): void + } + interface Command { + /** + * PrintErr is a convenience method to Print to the defined Err output, fallback to Stderr if not set. + */ + printErr(...i: { + }[]): void + } + interface Command { + /** + * PrintErrln is a convenience method to Println to the defined Err output, fallback to Stderr if not set. + */ + printErrln(...i: { + }[]): void + } + interface Command { + /** + * PrintErrf is a convenience method to Printf to the defined Err output, fallback to Stderr if not set. + */ + printErrf(format: string, ...i: { + }[]): void + } + interface Command { + /** + * CommandPath returns the full path to this command. + */ + commandPath(): string + } + interface Command { + /** + * DisplayName returns the name to display in help text. Returns command Name() + * If CommandDisplayNameAnnoation is not set + */ + displayName(): string + } + interface Command { + /** + * UseLine puts out the full usage for a given command (including parents). + */ + useLine(): string + } + interface Command { + /** + * DebugFlags used to determine which flags have been assigned to which commands + * and which persist. + */ + debugFlags(): void + } + interface Command { + /** + * Name returns the command's name: the first word in the use line. + */ + name(): string + } + interface Command { + /** + * HasAlias determines if a given string is an alias of the command. + */ + hasAlias(s: string): boolean + } + interface Command { + /** + * CalledAs returns the command name or alias that was used to invoke + * this command or an empty string if the command has not been called. + */ + calledAs(): string + } + interface Command { + /** + * NameAndAliases returns a list of the command name and all aliases + */ + nameAndAliases(): string + } + interface Command { + /** + * HasExample determines if the command has example. + */ + hasExample(): boolean + } + interface Command { + /** + * Runnable determines if the command is itself runnable. + */ + runnable(): boolean + } + interface Command { + /** + * HasSubCommands determines if the command has children commands. + */ + hasSubCommands(): boolean + } + interface Command { + /** + * IsAvailableCommand determines if a command is available as a non-help command + * (this includes all non deprecated/hidden commands). + */ + isAvailableCommand(): boolean + } + interface Command { + /** + * IsAdditionalHelpTopicCommand determines if a command is an additional + * help topic command; additional help topic command is determined by the + * fact that it is NOT runnable/hidden/deprecated, and has no sub commands that + * are runnable/hidden/deprecated. + * Concrete example: https://github.com/spf13/cobra/issues/393#issuecomment-282741924. 
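The Command surface declared above mirrors spf13/cobra, so a custom console command registered from a JSVM hook can call these methods directly. The sketch below is illustrative only: the `Command` constructor options, `$app.rootCmd` and `console.log` belong to the wider PocketBase JSVM surface and are not part of this excerpt (namespace qualifiers are omitted for brevity).

```ts
// pb_hooks sketch (assumption: `$app.rootCmd` and the `Command` constructor
// come from the rest of the generated declarations, not this excerpt).
$app.rootCmd.addCommand(new Command({
    use: "hello",
    run: (cmd, args) => {
        // name() and commandPath() are declared above
        console.log(`${cmd.commandPath()} (${cmd.name()}) called with:`, args.join(", "))
    },
}))
```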
+ */ + isAdditionalHelpTopicCommand(): boolean + } + interface Command { + /** + * HasHelpSubCommands determines if a command has any available 'help' sub commands + * that need to be shown in the usage/help default template under 'additional help + * topics'. + */ + hasHelpSubCommands(): boolean + } + interface Command { + /** + * HasAvailableSubCommands determines if a command has available sub commands that + * need to be shown in the usage/help default template under 'available commands'. + */ + hasAvailableSubCommands(): boolean + } + interface Command { + /** + * HasParent determines if the command is a child command. + */ + hasParent(): boolean + } + interface Command { + /** + * GlobalNormalizationFunc returns the global normalization function or nil if it doesn't exist. + */ + globalNormalizationFunc(): (f: any, name: string) => any + } + interface Command { + /** + * Flags returns the complete FlagSet that applies + * to this command (local and persistent declared here and by all parents). + */ + flags(): (any) + } + interface Command { + /** + * LocalNonPersistentFlags are flags specific to this command which will NOT persist to subcommands. + * This function does not modify the flags of the current command, it's purpose is to return the current state. + */ + localNonPersistentFlags(): (any) + } + interface Command { + /** + * LocalFlags returns the local FlagSet specifically set in the current command. + * This function does not modify the flags of the current command, it's purpose is to return the current state. + */ + localFlags(): (any) + } + interface Command { + /** + * InheritedFlags returns all flags which were inherited from parent commands. + * This function does not modify the flags of the current command, it's purpose is to return the current state. + */ + inheritedFlags(): (any) + } + interface Command { + /** + * NonInheritedFlags returns all flags which were not inherited from parent commands. + * This function does not modify the flags of the current command, it's purpose is to return the current state. + */ + nonInheritedFlags(): (any) + } + interface Command { + /** + * PersistentFlags returns the persistent FlagSet specifically set in the current command. + */ + persistentFlags(): (any) + } + interface Command { + /** + * ResetFlags deletes all flags from command. + */ + resetFlags(): void + } + interface Command { + /** + * HasFlags checks if the command contains any flags (local plus persistent from the entire structure). + */ + hasFlags(): boolean + } + interface Command { + /** + * HasPersistentFlags checks if the command contains persistent flags. + */ + hasPersistentFlags(): boolean + } + interface Command { + /** + * HasLocalFlags checks if the command has flags specifically declared locally. + */ + hasLocalFlags(): boolean + } + interface Command { + /** + * HasInheritedFlags checks if the command has flags inherited from its parent command. + */ + hasInheritedFlags(): boolean + } + interface Command { + /** + * HasAvailableFlags checks if the command contains any flags (local plus persistent from the entire + * structure) which are not hidden or deprecated. + */ + hasAvailableFlags(): boolean + } + interface Command { + /** + * HasAvailablePersistentFlags checks if the command contains persistent flags which are not hidden or deprecated. + */ + hasAvailablePersistentFlags(): boolean + } + interface Command { + /** + * HasAvailableLocalFlags checks if the command has flags specifically declared locally which are not hidden + * or deprecated. 
+ */ + hasAvailableLocalFlags(): boolean + } + interface Command { + /** + * HasAvailableInheritedFlags checks if the command has flags inherited from its parent command which are + * not hidden or deprecated. + */ + hasAvailableInheritedFlags(): boolean + } + interface Command { + /** + * Flag climbs up the command tree looking for matching flag. + */ + flag(name: string): (any) + } + interface Command { + /** + * ParseFlags parses persistent flag tree and local flags. + */ + parseFlags(args: Array): void + } + interface Command { + /** + * Parent returns a commands parent command. + */ + parent(): (Command) + } + interface Command { + /** + * RegisterFlagCompletionFunc should be called to register a function to provide completion for a flag. + * + * You can use pre-defined completion functions such as [FixedCompletions] or [NoFileCompletions], + * or you can define your own. + */ + registerFlagCompletionFunc(flagName: string, f: CompletionFunc): void + } + interface Command { + /** + * GetFlagCompletionFunc returns the completion function for the given flag of the command, if available. + */ + getFlagCompletionFunc(flagName: string): [CompletionFunc, boolean] + } + interface Command { + /** + * InitDefaultCompletionCmd adds a default 'completion' command to c. + * This function will do nothing if any of the following is true: + * 1- the feature has been explicitly disabled by the program, + * 2- c has no subcommands (to avoid creating one), + * 3- c already has a 'completion' command provided by the program. + */ + initDefaultCompletionCmd(...args: string[]): void + } + interface Command { + /** + * GenFishCompletion generates fish completion file and writes to the passed writer. + */ + genFishCompletion(w: io.Writer, includeDesc: boolean): void + } + interface Command { + /** + * GenFishCompletionFile generates fish completion file. + */ + genFishCompletionFile(filename: string, includeDesc: boolean): void + } + interface Command { + /** + * MarkFlagsRequiredTogether marks the given flags with annotations so that Cobra errors + * if the command is invoked with a subset (but not all) of the given flags. + */ + markFlagsRequiredTogether(...flagNames: string[]): void + } + interface Command { + /** + * MarkFlagsOneRequired marks the given flags with annotations so that Cobra errors + * if the command is invoked without at least one flag from the given set of flags. + */ + markFlagsOneRequired(...flagNames: string[]): void + } + interface Command { + /** + * MarkFlagsMutuallyExclusive marks the given flags with annotations so that Cobra errors + * if the command is invoked with more than one flag from the given set of flags. + */ + markFlagsMutuallyExclusive(...flagNames: string[]): void + } + interface Command { + /** + * ValidateFlagGroups validates the mutuallyExclusive/oneRequired/requiredAsGroup logic and returns the + * first error encountered. + */ + validateFlagGroups(): void + } + interface Command { + /** + * GenPowerShellCompletionFile generates powershell completion file without descriptions. + */ + genPowerShellCompletionFile(filename: string): void + } + interface Command { + /** + * GenPowerShellCompletion generates powershell completion file without descriptions + * and writes it to the passed writer. + */ + genPowerShellCompletion(w: io.Writer): void + } + interface Command { + /** + * GenPowerShellCompletionFileWithDesc generates powershell completion file with descriptions. 
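As a small illustration of the flag-group helpers declared above (MarkFlagsMutuallyExclusive / MarkFlagsOneRequired), here is a hedged sketch. It assumes a command whose "json" and "yaml" flags were already registered elsewhere via the FlagSet returned by `cmd.flags()` (typed as `any` in these declarations); the namespace qualifier on `Command` is omitted.

```ts
// Illustrative only: the "json" and "yaml" flags are assumed to have been
// defined elsewhere on `cmd` via its FlagSet.
function configureOutputFlags(cmd: Command): void {
    cmd.markFlagsMutuallyExclusive("json", "yaml") // at most one of the two may be passed
    cmd.markFlagsOneRequired("json", "yaml")       // and at least one of them is required
}
```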
+ */ + genPowerShellCompletionFileWithDesc(filename: string): void + } + interface Command { + /** + * GenPowerShellCompletionWithDesc generates powershell completion file with descriptions + * and writes it to the passed writer. + */ + genPowerShellCompletionWithDesc(w: io.Writer): void + } + interface Command { + /** + * MarkFlagRequired instructs the various shell completion implementations to + * prioritize the named flag when performing completion, + * and causes your command to report an error if invoked without the flag. + */ + markFlagRequired(name: string): void + } + interface Command { + /** + * MarkPersistentFlagRequired instructs the various shell completion implementations to + * prioritize the named persistent flag when performing completion, + * and causes your command to report an error if invoked without the flag. + */ + markPersistentFlagRequired(name: string): void + } + interface Command { + /** + * MarkFlagFilename instructs the various shell completion implementations to + * limit completions for the named flag to the specified file extensions. + */ + markFlagFilename(name: string, ...extensions: string[]): void + } + interface Command { + /** + * MarkFlagCustom adds the BashCompCustom annotation to the named flag, if it exists. + * The bash completion script will call the bash function f for the flag. + * + * This will only work for bash completion. + * It is recommended to instead use c.RegisterFlagCompletionFunc(...) which allows + * to register a Go function which will work across all shells. + */ + markFlagCustom(name: string, f: string): void + } + interface Command { + /** + * MarkPersistentFlagFilename instructs the various shell completion + * implementations to limit completions for the named persistent flag to the + * specified file extensions. + */ + markPersistentFlagFilename(name: string, ...extensions: string[]): void + } + interface Command { + /** + * MarkFlagDirname instructs the various shell completion implementations to + * limit completions for the named flag to directory names. + */ + markFlagDirname(name: string): void + } + interface Command { + /** + * MarkPersistentFlagDirname instructs the various shell completion + * implementations to limit completions for the named persistent flag to + * directory names. + */ + markPersistentFlagDirname(name: string): void + } + interface Command { + /** + * GenZshCompletionFile generates zsh completion file including descriptions. + */ + genZshCompletionFile(filename: string): void + } + interface Command { + /** + * GenZshCompletion generates zsh completion file including descriptions + * and writes it to the passed writer. + */ + genZshCompletion(w: io.Writer): void + } + interface Command { + /** + * GenZshCompletionFileNoDesc generates zsh completion file without descriptions. + */ + genZshCompletionFileNoDesc(filename: string): void + } + interface Command { + /** + * GenZshCompletionNoDesc generates zsh completion file without descriptions + * and writes it to the passed writer. + */ + genZshCompletionNoDesc(w: io.Writer): void + } + interface Command { + /** + * MarkZshCompPositionalArgumentFile only worked for zsh and its behavior was + * not consistent with Bash completion. It has therefore been disabled. + * Instead, when no other completion is specified, file completion is done by + * default for every argument. One can disable file completion on a per-argument + * basis by using ValidArgsFunction and ShellCompDirectiveNoFileComp. 
+ * To achieve file extension filtering, one can use ValidArgsFunction and + * ShellCompDirectiveFilterFileExt. + * + * Deprecated + */ + markZshCompPositionalArgumentFile(argPosition: number, ...patterns: string[]): void + } + interface Command { + /** + * MarkZshCompPositionalArgumentWords only worked for zsh. It has therefore + * been disabled. + * To achieve the same behavior across all shells, one can use + * ValidArgs (for the first argument only) or ValidArgsFunction for + * any argument (can include the first one also). + * + * Deprecated + */ + markZshCompPositionalArgumentWords(argPosition: number, ...words: string[]): void + } +} + +/** + * Package types implements some commonly used db serializable types + * like datetime, json, etc. + */ +namespace types { + /** + * DateTime represents a [time.Time] instance in UTC that is wrapped + * and serialized using the app default date layout. + */ + interface DateTime { + } + interface DateTime { + /** + * Time returns the internal [time.Time] instance. + */ + time(): time.Time + } + interface DateTime { + /** + * Add returns a new DateTime based on the current DateTime + the specified duration. + */ + add(duration: time.Duration): DateTime + } + interface DateTime { + /** + * Sub returns a [time.Duration] by subtracting the specified DateTime from the current one. + * + * If the result exceeds the maximum (or minimum) value that can be stored in a [time.Duration], + * the maximum (or minimum) duration will be returned. + */ + sub(u: DateTime): time.Duration + } + interface DateTime { + /** + * AddDate returns a new DateTime based on the current one + duration. + * + * It follows the same rules as [time.AddDate]. + */ + addDate(years: number, months: number, days: number): DateTime + } + interface DateTime { + /** + * After reports whether the current DateTime instance is after u. + */ + after(u: DateTime): boolean + } + interface DateTime { + /** + * Before reports whether the current DateTime instance is before u. + */ + before(u: DateTime): boolean + } + interface DateTime { + /** + * Compare compares the current DateTime instance with u. + * If the current instance is before u, it returns -1. + * If the current instance is after u, it returns +1. + * If they're the same, it returns 0. + */ + compare(u: DateTime): number + } + interface DateTime { + /** + * Equal reports whether the current DateTime and u represent the same time instant. + * Two DateTime can be equal even if they are in different locations. + * For example, 6:00 +0200 and 4:00 UTC are Equal. + */ + equal(u: DateTime): boolean + } + interface DateTime { + /** + * Unix returns the current DateTime as a Unix time, aka. + * the number of seconds elapsed since January 1, 1970 UTC. + */ + unix(): number + } + interface DateTime { + /** + * IsZero checks whether the current DateTime instance has zero time value. + */ + isZero(): boolean + } + interface DateTime { + /** + * String serializes the current DateTime instance into a formatted + * UTC date string. + * + * The zero value is serialized to an empty string. + */ + string(): string + } + interface DateTime { + /** + * MarshalJSON implements the [json.Marshaler] interface. + */ + marshalJSON(): string|Array + } + interface DateTime { + /** + * UnmarshalJSON implements the [json.Unmarshaler] interface. + */ + unmarshalJSON(b: string|Array): void + } + interface DateTime { + /** + * Value implements the [driver.Valuer] interface. 
+ */ + value(): any + } + interface DateTime { + /** + * Scan implements [sql.Scanner] interface to scan the provided value + * into the current DateTime instance. + */ + scan(value: any): void + } + /** + * GeoPoint defines a struct for storing geo coordinates as serialized json object + * (e.g. {lon:0,lat:0}). + * + * Note: using object notation and not a plain array to avoid the confusion + * as there doesn't seem to be a fixed standard for the coordinates order. + */ + interface GeoPoint { + lon: number + lat: number + } + interface GeoPoint { + /** + * String returns the string representation of the current GeoPoint instance. + */ + string(): string + } + interface GeoPoint { + /** + * AsMap implements [core.mapExtractor] and returns a value suitable + * to be used in an API rule expression. + */ + asMap(): _TygojaDict + } + interface GeoPoint { + /** + * Value implements the [driver.Valuer] interface. + */ + value(): any + } + interface GeoPoint { + /** + * Scan implements [sql.Scanner] interface to scan the provided value + * into the current GeoPoint instance. + * + * The value argument could be nil (no-op), another GeoPoint instance, + * map or serialized json object with lat-lon props. + */ + scan(value: any): void + } + /** + * JSONArray defines a slice that is safe for json and db read/write. + */ + interface JSONArray extends Array{} + interface JSONArray { + /** + * MarshalJSON implements the [json.Marshaler] interface. + */ + marshalJSON(): string|Array + } + interface JSONArray { + /** + * String returns the string representation of the current json array. + */ + string(): string + } + interface JSONArray { + /** + * Value implements the [driver.Valuer] interface. + */ + value(): any + } + interface JSONArray { + /** + * Scan implements [sql.Scanner] interface to scan the provided value + * into the current JSONArray[T] instance. + */ + scan(value: any): void + } + /** + * JSONMap defines a map that is safe for json and db read/write. + */ + interface JSONMap extends _TygojaDict{} + interface JSONMap { + /** + * MarshalJSON implements the [json.Marshaler] interface. + */ + marshalJSON(): string|Array + } + interface JSONMap { + /** + * String returns the string representation of the current json map. + */ + string(): string + } + interface JSONMap { + /** + * Get retrieves a single value from the current JSONMap[T]. + * + * This helper was added primarily to assist the goja integration since custom map types + * don't have direct access to the map keys (https://pkg.go.dev/github.com/dop251/goja#hdr-Maps_with_methods). + */ + get(key: string): T + } + interface JSONMap { + /** + * Set sets a single value in the current JSONMap[T]. + * + * This helper was added primarily to assist the goja integration since custom map types + * don't have direct access to the map keys (https://pkg.go.dev/github.com/dop251/goja#hdr-Maps_with_methods). + */ + set(key: string, value: T): void + } + interface JSONMap { + /** + * Value implements the [driver.Valuer] interface. + */ + value(): any + } + interface JSONMap { + /** + * Scan implements [sql.Scanner] interface to scan the provided value + * into the current JSONMap[T] instance. + */ + scan(value: any): void + } + /** + * JSONRaw defines a json value type that is safe for db read/write. + */ + interface JSONRaw extends Array{} + interface JSONRaw { + /** + * String returns the current JSONRaw instance as a json encoded string. 
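To make the DateTime and GeoPoint helpers above a bit more concrete, here is a minimal, purely illustrative sketch. It assumes existing `types.DateTime` and `types.GeoPoint` instances (e.g. taken from record fields), since constructors are outside this excerpt.

```ts
// Illustrative only: `created` and `point` are assumed to already exist.
function describe(created: types.DateTime, point: types.GeoPoint): string {
    if (created.isZero()) {
        return "no date set"
    }

    const nextWeek = created.addDate(0, 0, 7) // same semantics as Go's time.AddDate
    const ahead    = nextWeek.after(created)  // always true here

    return `${created.string()} -> ${nextWeek.string()} (${ahead}) @ ${point.string()}`
}
```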
+ */ + string(): string + } + interface JSONRaw { + /** + * MarshalJSON implements the [json.Marshaler] interface. + */ + marshalJSON(): string|Array + } + interface JSONRaw { + /** + * UnmarshalJSON implements the [json.Unmarshaler] interface. + */ + unmarshalJSON(b: string|Array): void + } + interface JSONRaw { + /** + * Value implements the [driver.Valuer] interface. + */ + value(): any + } + interface JSONRaw { + /** + * Scan implements [sql.Scanner] interface to scan the provided value + * into the current JSONRaw instance. + */ + scan(value: any): void + } +} + +namespace auth { + /** + * Provider defines a common interface for an OAuth2 client. + */ + interface Provider { + [key:string]: any; + /** + * Context returns the context associated with the provider (if any). + */ + context(): context.Context + /** + * SetContext assigns the specified context to the current provider. + */ + setContext(ctx: context.Context): void + /** + * PKCE indicates whether the provider can use the PKCE flow. + */ + pkce(): boolean + /** + * SetPKCE toggles the state whether the provider can use the PKCE flow or not. + */ + setPKCE(enable: boolean): void + /** + * DisplayName usually returns provider name as it is officially written + * and it could be used directly in the UI. + */ + displayName(): string + /** + * SetDisplayName sets the provider's display name. + */ + setDisplayName(displayName: string): void + /** + * Scopes returns the provider access permissions that will be requested. + */ + scopes(): Array + /** + * SetScopes sets the provider access permissions that will be requested later. + */ + setScopes(scopes: Array): void + /** + * ClientId returns the provider client's app ID. + */ + clientId(): string + /** + * SetClientId sets the provider client's ID. + */ + setClientId(clientId: string): void + /** + * ClientSecret returns the provider client's app secret. + */ + clientSecret(): string + /** + * SetClientSecret sets the provider client's app secret. + */ + setClientSecret(secret: string): void + /** + * RedirectURL returns the end address to redirect the user + * going through the OAuth flow. + */ + redirectURL(): string + /** + * SetRedirectURL sets the provider's RedirectURL. + */ + setRedirectURL(url: string): void + /** + * AuthURL returns the provider's authorization service url. + */ + authURL(): string + /** + * SetAuthURL sets the provider's AuthURL. + */ + setAuthURL(url: string): void + /** + * TokenURL returns the provider's token exchange service url. + */ + tokenURL(): string + /** + * SetTokenURL sets the provider's TokenURL. + */ + setTokenURL(url: string): void + /** + * UserInfoURL returns the provider's user info api url. + */ + userInfoURL(): string + /** + * SetUserInfoURL sets the provider's UserInfoURL. + */ + setUserInfoURL(url: string): void + /** + * Extra returns a shallow copy of any custom config data + * that the provider may be need. + */ + extra(): _TygojaDict + /** + * SetExtra updates the provider's custom config data. + */ + setExtra(data: _TygojaDict): void + /** + * Client returns an http client using the provided token. + */ + client(token: oauth2.Token): (any) + /** + * BuildAuthURL returns a URL to the provider's consent page + * that asks for permissions for the required scopes explicitly. + */ + buildAuthURL(state: string, ...opts: oauth2.AuthCodeOption[]): string + /** + * FetchToken converts an authorization code to token. 
+ */ + fetchToken(code: string, ...opts: oauth2.AuthCodeOption[]): (oauth2.Token) + /** + * FetchRawUserInfo requests and marshalizes into `result` the + * the OAuth user api response. + */ + fetchRawUserInfo(token: oauth2.Token): string|Array + /** + * FetchAuthUser is similar to FetchRawUserInfo, but normalizes and + * marshalizes the user api response into a standardized AuthUser struct. + */ + fetchAuthUser(token: oauth2.Token): (AuthUser) + } + /** + * AuthUser defines a standardized OAuth2 user data structure. + */ + interface AuthUser { + expiry: types.DateTime + rawUser: _TygojaDict + id: string + name: string + username: string + email: string + avatarURL: string + accessToken: string + refreshToken: string + /** + * @todo + * deprecated: use AvatarURL instead + * AvatarUrl will be removed after dropping v0.22 support + */ + avatarUrl: string + } + interface AuthUser { + /** + * MarshalJSON implements the [json.Marshaler] interface. + * + * @todo remove after dropping v0.22 support + */ + marshalJSON(): string|Array + } +} + +namespace search { + /** + * Result defines the returned search result structure. + */ + interface Result { + items: any + page: number + perPage: number + totalItems: number + totalPages: number + } + /** + * ResolverResult defines a single FieldResolver.Resolve() successfully parsed result. + */ + interface ResolverResult { + /** + * Identifier is the plain SQL identifier/column that will be used + * in the final db expression as left or right operand. + */ + identifier: string + /** + * NoCoalesce instructs to not use COALESCE or NULL fallbacks + * when building the identifier expression. + */ + noCoalesce: boolean + /** + * Params is a map with db placeholder->value pairs that will be added + * to the query when building both resolved operands/sides in a single expression. + */ + params: dbx.Params + /** + * MultiMatchSubQuery is an optional sub query expression that will be added + * in addition to the combined ResolverResult expression during build. + */ + multiMatchSubQuery: dbx.Expression + /** + * AfterBuild is an optional function that will be called after building + * and combining the result of both resolved operands/sides in a single expression. + */ + afterBuild: (expr: dbx.Expression) => dbx.Expression + } +} + +namespace router { // @ts-ignore - import loginternal = internal + import validation = ozzo_validation + /** + * ApiError defines the struct for a basic api error response. + */ + interface ApiError { + data: _TygojaDict + message: string + status: number + } + interface ApiError { + /** + * Error makes it compatible with the `error` interface. + */ + error(): string + } + interface ApiError { + /** + * RawData returns the unformatted error data (could be an internal error, text, etc.) + */ + rawData(): any + } + interface ApiError { + /** + * Is reports whether the current ApiError wraps the target. + */ + is(target: Error): boolean + } /** - * A Logger records structured information about each call to its - * Log, Debug, Info, Warn, and Error methods. - * For each call, it creates a [Record] and passes it to a [Handler]. + * Event specifies based Route handler event that is usually intended + * to be embedded as part of a custom event struct. * - * To create a new Logger, call [New] or a Logger method - * that begins "With". + * NB! It is expected that the Response and Request fields are always set. 
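Stepping back to the `auth.Provider` interface above, its setters can be exercised as in the sketch below. This is only an illustration: how a concrete Provider instance is obtained (and whether it should be mutated at that point) is outside this excerpt, and the credential values are placeholders.

```ts
// Illustrative only: `provider` is assumed to be an existing auth.Provider.
function configureProvider(provider: auth.Provider): void {
    provider.setClientId("CLIENT_ID")
    provider.setClientSecret("CLIENT_SECRET")
    provider.setRedirectURL("https://example.com/api/oauth2-redirect")
    provider.setScopes(["openid", "email", "profile"])
    provider.setPKCE(true)
}
```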
*/ - interface Logger { + type _syxNYVK = hook.Event + interface Event extends _syxNYVK { + response: http.ResponseWriter + request?: http.Request } - interface Logger { + interface Event { /** - * Handler returns l's Handler. + * Written reports whether the current response has already been written. + * + * This method always returns false if e.ResponseWritter doesn't implement the WriteTracker interface + * (all router package handlers receives a ResponseWritter that implements it unless explicitly replaced with a custom one). */ - handler(): Handler + written(): boolean } - interface Logger { + interface Event { /** - * With returns a Logger that includes the given attributes - * in each output operation. Arguments are converted to - * attributes as if by [Logger.Log]. + * Status reports the status code of the current response. + * + * This method always returns 0 if e.Response doesn't implement the StatusTracker interface + * (all router package handlers receives a ResponseWritter that implements it unless explicitly replaced with a custom one). */ - with(...args: any[]): (Logger) + status(): number } - interface Logger { + interface Event { /** - * WithGroup returns a Logger that starts a group, if name is non-empty. - * The keys of all attributes added to the Logger will be qualified by the given - * name. (How that qualification happens depends on the [Handler.WithGroup] - * method of the Logger's Handler.) + * Flush flushes buffered data to the current response. * - * If name is empty, WithGroup returns the receiver. + * Returns [http.ErrNotSupported] if e.Response doesn't implement the [http.Flusher] interface + * (all router package handlers receives a ResponseWritter that implements it unless explicitly replaced with a custom one). */ - withGroup(name: string): (Logger) + flush(): void } - interface Logger { + interface Event { /** - * Enabled reports whether l emits log records at the given context and level. + * IsTLS reports whether the connection on which the request was received is TLS. */ - enabled(ctx: context.Context, level: Level): boolean + isTLS(): boolean } - interface Logger { + interface Event { /** - * Log emits a log record with the current time and the given level and message. - * The Record's Attrs consist of the Logger's attributes followed by - * the Attrs specified by args. + * SetCookie is an alias for [http.SetCookie]. * - * The attribute arguments are processed as follows: - * ``` - * - If an argument is an Attr, it is used as is. - * - If an argument is a string and this is not the last argument, - * the following argument is treated as the value and the two are combined - * into an Attr. - * - Otherwise, the argument is treated as a value with key "!BADKEY". - * ``` + * SetCookie adds a Set-Cookie header to the current response's headers. + * The provided cookie must have a valid Name. + * Invalid cookies may be silently dropped. */ - log(ctx: context.Context, level: Level, msg: string, ...args: any[]): void + setCookie(cookie: http.Cookie): void } - interface Logger { + interface Event { /** - * LogAttrs is a more efficient version of [Logger.Log] that accepts only Attrs. + * RemoteIP returns the IP address of the client that sent the request. + * + * IPv6 addresses are returned expanded. + * For example, "2001:db8::1" becomes "2001:0db8:0000:0000:0000:0000:0000:0001". + * + * Note that if you are behind reverse proxy(ies), this method returns + * the IP of the last connecting proxy. 
*/ - logAttrs(ctx: context.Context, level: Level, msg: string, ...attrs: Attr[]): void + remoteIP(): string } - interface Logger { + interface Event { /** - * Debug logs at [LevelDebug]. + * FindUploadedFiles extracts all form files of "key" from a http request + * and returns a slice with filesystem.File instances (if any). */ - debug(msg: string, ...args: any[]): void + findUploadedFiles(key: string): Array<(filesystem.File | undefined)> } - interface Logger { + interface Event { /** - * DebugContext logs at [LevelDebug] with the given context. + * Get retrieves single value from the current event data store. */ - debugContext(ctx: context.Context, msg: string, ...args: any[]): void + get(key: string): any } - interface Logger { + interface Event { /** - * Info logs at [LevelInfo]. + * GetAll returns a copy of the current event data store. */ - info(msg: string, ...args: any[]): void + getAll(): _TygojaDict } - interface Logger { + interface Event { /** - * InfoContext logs at [LevelInfo] with the given context. + * Set saves single value into the current event data store. */ - infoContext(ctx: context.Context, msg: string, ...args: any[]): void + set(key: string, value: any): void } - interface Logger { + interface Event { /** - * Warn logs at [LevelWarn]. + * SetAll saves all items from m into the current event data store. */ - warn(msg: string, ...args: any[]): void + setAll(m: _TygojaDict): void } - interface Logger { + interface Event { /** - * WarnContext logs at [LevelWarn] with the given context. + * String writes a plain string response. */ - warnContext(ctx: context.Context, msg: string, ...args: any[]): void + string(status: number, data: string): void } - interface Logger { + interface Event { /** - * Error logs at [LevelError]. + * HTML writes an HTML response. */ - error(msg: string, ...args: any[]): void + html(status: number, data: string): void + } + interface Event { + /** + * JSON writes a JSON response. + * + * It also provides a generic response data fields picker if the "fields" query parameter is set. + * For example, if you are requesting `?fields=a,b` for `e.JSON(200, map[string]int{ "a":1, "b":2, "c":3 })`, + * it should result in a JSON response like: `{"a":1, "b": 2}`. + */ + json(status: number, data: any): void + } + interface Event { + /** + * XML writes an XML response. + * It automatically prepends the generic [xml.Header] string to the response. + */ + xml(status: number, data: any): void + } + interface Event { + /** + * Stream streams the specified reader into the response. + */ + stream(status: number, contentType: string, reader: io.Reader): void + } + interface Event { + /** + * Blob writes a blob (bytes slice) response. + */ + blob(status: number, contentType: string, b: string|Array): void + } + interface Event { + /** + * FileFS serves the specified filename from fsys. + * + * It is similar to [echo.FileFS] for consistency with earlier versions. + */ + fileFS(fsys: fs.FS, filename: string): void + } + interface Event { + /** + * NoContent writes a response with no body (ex. 204). + */ + noContent(status: number): void + } + interface Event { + /** + * Redirect writes a redirect response to the specified url. + * The status code must be in between 300 – 399 range. 
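The Event request/response helpers documented above are what JSVM route handlers interact with most. The sketch below is illustrative and assumes the `routerAdd` registration helper from the wider PocketBase JSVM surface, which is not shown in this excerpt.

```ts
// Illustrative only: `routerAdd` comes from the broader JSVM API surface.
routerAdd("GET", "/whoami", (e) => {
    // request/response helpers declared above
    return e.json(200, {
        ip:      e.remoteIP(),
        secure:  e.isTLS(),
        written: e.written(), // false until a response is written
    })
})
```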
+ */ + redirect(status: number, url: string): void + } + interface Event { + error(status: number, message: string, errData: any): (ApiError) + } + interface Event { + badRequestError(message: string, errData: any): (ApiError) + } + interface Event { + notFoundError(message: string, errData: any): (ApiError) + } + interface Event { + forbiddenError(message: string, errData: any): (ApiError) + } + interface Event { + unauthorizedError(message: string, errData: any): (ApiError) + } + interface Event { + tooManyRequestsError(message: string, errData: any): (ApiError) + } + interface Event { + internalServerError(message: string, errData: any): (ApiError) + } + interface Event { + /** + * BindBody unmarshal the request body into the provided dst. + * + * dst must be either a struct pointer or map[string]any. + * + * The rules how the body will be scanned depends on the request Content-Type. + * + * Currently the following Content-Types are supported: + * ``` + * - application/json + * - text/xml, application/xml + * - multipart/form-data, application/x-www-form-urlencoded + * ``` + * + * Respectively the following struct tags are supported (again, which one will be used depends on the Content-Type): + * ``` + * - "json" (json body)- uses the builtin Go json package for unmarshaling. + * - "xml" (xml body) - uses the builtin Go xml package for unmarshaling. + * - "form" (form data) - utilizes the custom [router.UnmarshalRequestData] method. + * ``` + * + * NB! When dst is a struct make sure that it doesn't have public fields + * that shouldn't be bindable and it is advisible such fields to be unexported + * or have a separate struct just for the binding. For example: + * + * ``` + * data := struct{ + * somethingPrivate string + * + * Title string `json:"title" form:"title"` + * Total int `json:"total" form:"total"` + * } + * err := e.BindBody(&data) + * ``` + */ + bindBody(dst: any): void + } + /** + * Router defines a thin wrapper around the standard Go [http.ServeMux] by + * adding support for routing sub-groups, middlewares and other common utils. + * + * Example: + * + * ``` + * r := NewRouter[*MyEvent](eventFactory) + * + * // middlewares + * r.BindFunc(m1, m2) + * + * // routes + * r.GET("/test", handler1) + * + * // sub-routers/groups + * api := r.Group("/api") + * api.GET("/admins", handler2) + * + * // generate a http.ServeMux instance based on the router configurations + * mux, _ := r.BuildMux() + * + * http.ListenAndServe("localhost:8090", mux) + * ``` + */ + type _sRtBcnV = RouterGroup + interface Router extends _sRtBcnV { } - interface Logger { + interface Router { /** - * ErrorContext logs at [LevelError] with the given context. + * BuildMux constructs a new mux [http.Handler] instance from the current router configurations. */ - errorContext(ctx: context.Context, msg: string, ...args: any[]): void + buildMux(): http.Handler } } namespace sync { + // @ts-ignore + import isync = sync /** * A Locker represents an object that can be locked and unlocked. */ @@ -21089,16 +21539,9 @@ namespace sync { } } -namespace io { - /** - * WriteCloser is the interface that groups the basic Write and Close methods. - */ - interface WriteCloser { - [key:string]: any; - } -} - namespace syscall { + // @ts-ignore + import errpkg = errors /** * SysProcIDMap holds Container ID to Host ID mappings used for User Namespaces in Linux. * See user_namespaces(7). 
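Similarly, `Event.bindBody` can be combined with the response helpers above in a handler. Again a hedged sketch: `routerAdd` and `DynamicModel` come from the wider JSVM surface (not this excerpt), and the `title`/`total` payload shape is made up for illustration.

```ts
// Illustrative only: `routerAdd`, `DynamicModel` and the payload shape are
// assumptions, not part of this excerpt.
routerAdd("POST", "/articles", (e) => {
    const data = new DynamicModel({ title: "", total: 0 })

    e.bindBody(data) // scans json/xml/form bodies per the Content-Type rules above

    return e.json(200, data)
})
```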
@@ -21184,6 +21627,15 @@ namespace time { namespace context { } +namespace io { + /** + * WriteCloser is the interface that groups the basic Write and Close methods. + */ + interface WriteCloser { + [key:string]: any; + } +} + namespace fs { } @@ -21200,27 +21652,6 @@ namespace net { network(): string // name of the network (for example, "tcp", "udp") string(): string // string form of address (for example, "192.0.2.1:25", "[2001:db8::1]:80") } - /** - * A Listener is a generic network listener for stream-oriented protocols. - * - * Multiple goroutines may invoke methods on a Listener simultaneously. - */ - interface Listener { - [key:string]: any; - /** - * Accept waits for and returns the next connection to the listener. - */ - accept(): Conn - /** - * Close closes the listener. - * Any blocked Accept operations will be unblocked and return errors. - */ - close(): void - /** - * Addr returns the listener's network address. - */ - addr(): Addr - } } /** @@ -21448,6 +21879,9 @@ namespace url { interface URL { marshalBinary(): string|Array } + interface URL { + appendBinary(b: string|Array): string|Array + } interface URL { unmarshalBinary(text: string|Array): void } @@ -21461,9 +21895,15 @@ namespace url { } } +namespace store { +} + namespace bufio { /** * Reader implements buffering for an io.Reader object. + * A new Reader is created by calling [NewReader] or [NewReaderSize]; + * alternatively the zero value of a Reader may be used after calling [Reset] + * on it. */ interface Reader { } @@ -21486,9 +21926,10 @@ namespace bufio { interface Reader { /** * Peek returns the next n bytes without advancing the reader. The bytes stop - * being valid at the next read call. If Peek returns fewer than n bytes, it - * also returns an error explaining why the read is short. The error is - * [ErrBufferFull] if n is larger than b's buffer size. + * being valid at the next read call. If necessary, Peek will read more bytes + * into the buffer in order to make n bytes available. If Peek returns fewer + * than n bytes, it also returns an error explaining why the read is short. + * The error is [ErrBufferFull] if n is larger than b's buffer size. * * Calling Peek prevents a [Reader.UnreadByte] or [Reader.UnreadRune] call from succeeding * until the next read operation. @@ -21666,58 +22107,261 @@ namespace bufio { } interface Writer { /** - * AvailableBuffer returns an empty buffer with b.Available() capacity. - * This buffer is intended to be appended to and - * passed to an immediately succeeding [Writer.Write] call. - * The buffer is only valid until the next write operation on b. + * AvailableBuffer returns an empty buffer with b.Available() capacity. + * This buffer is intended to be appended to and + * passed to an immediately succeeding [Writer.Write] call. + * The buffer is only valid until the next write operation on b. + */ + availableBuffer(): string|Array + } + interface Writer { + /** + * Buffered returns the number of bytes that have been written into the current buffer. + */ + buffered(): number + } + interface Writer { + /** + * Write writes the contents of p into the buffer. + * It returns the number of bytes written. + * If nn < len(p), it also returns an error explaining + * why the write is short. + */ + write(p: string|Array): number + } + interface Writer { + /** + * WriteByte writes a single byte. + */ + writeByte(c: number): void + } + interface Writer { + /** + * WriteRune writes a single Unicode code point, returning + * the number of bytes written and any error. 
+ */ + writeRune(r: number): number + } + interface Writer { + /** + * WriteString writes a string. + * It returns the number of bytes written. + * If the count is less than len(s), it also returns an error explaining + * why the write is short. + */ + writeString(s: string): number + } + interface Writer { + /** + * ReadFrom implements [io.ReaderFrom]. If the underlying writer + * supports the ReadFrom method, this calls the underlying ReadFrom. + * If there is buffered data and an underlying ReadFrom, this fills + * the buffer and writes it before calling ReadFrom. + */ + readFrom(r: io.Reader): number + } +} + +namespace sql { + /** + * IsolationLevel is the transaction isolation level used in [TxOptions]. + */ + interface IsolationLevel extends Number{} + interface IsolationLevel { + /** + * String returns the name of the transaction isolation level. + */ + string(): string + } + /** + * DBStats contains database statistics. + */ + interface DBStats { + maxOpenConnections: number // Maximum number of open connections to the database. + /** + * Pool Status + */ + openConnections: number // The number of established connections both in use and idle. + inUse: number // The number of connections currently in use. + idle: number // The number of idle connections. + /** + * Counters + */ + waitCount: number // The total number of connections waited for. + waitDuration: time.Duration // The total time blocked waiting for a new connection. + maxIdleClosed: number // The total number of connections closed due to SetMaxIdleConns. + maxIdleTimeClosed: number // The total number of connections closed due to SetConnMaxIdleTime. + maxLifetimeClosed: number // The total number of connections closed due to SetConnMaxLifetime. + } + /** + * Conn represents a single database connection rather than a pool of database + * connections. Prefer running queries from [DB] unless there is a specific + * need for a continuous single database connection. + * + * A Conn must call [Conn.Close] to return the connection to the database pool + * and may do so concurrently with a running query. + * + * After a call to [Conn.Close], all operations on the + * connection fail with [ErrConnDone]. + */ + interface Conn { + } + interface Conn { + /** + * PingContext verifies the connection to the database is still alive. + */ + pingContext(ctx: context.Context): void + } + interface Conn { + /** + * ExecContext executes a query without returning any rows. + * The args are for any placeholder parameters in the query. + */ + execContext(ctx: context.Context, query: string, ...args: any[]): Result + } + interface Conn { + /** + * QueryContext executes a query that returns rows, typically a SELECT. + * The args are for any placeholder parameters in the query. + */ + queryContext(ctx: context.Context, query: string, ...args: any[]): (Rows) + } + interface Conn { + /** + * QueryRowContext executes a query that is expected to return at most one row. + * QueryRowContext always returns a non-nil value. Errors are deferred until + * the [*Row.Scan] method is called. + * If the query selects no rows, the [*Row.Scan] will return [ErrNoRows]. + * Otherwise, the [*Row.Scan] scans the first selected row and discards + * the rest. + */ + queryRowContext(ctx: context.Context, query: string, ...args: any[]): (Row) + } + interface Conn { + /** + * PrepareContext creates a prepared statement for later queries or executions. + * Multiple queries or executions may be run concurrently from the + * returned statement. 
+ * The caller must call the statement's [*Stmt.Close] method + * when the statement is no longer needed. + * + * The provided context is used for the preparation of the statement, not for the + * execution of the statement. + */ + prepareContext(ctx: context.Context, query: string): (Stmt) + } + interface Conn { + /** + * Raw executes f exposing the underlying driver connection for the + * duration of f. The driverConn must not be used outside of f. + * + * Once f returns and err is not [driver.ErrBadConn], the [Conn] will continue to be usable + * until [Conn.Close] is called. + */ + raw(f: (driverConn: any) => void): void + } + interface Conn { + /** + * BeginTx starts a transaction. + * + * The provided context is used until the transaction is committed or rolled back. + * If the context is canceled, the sql package will roll back + * the transaction. [Tx.Commit] will return an error if the context provided to + * BeginTx is canceled. + * + * The provided [TxOptions] is optional and may be nil if defaults should be used. + * If a non-default isolation level is used that the driver doesn't support, + * an error will be returned. + */ + beginTx(ctx: context.Context, opts: TxOptions): (Tx) + } + interface Conn { + /** + * Close returns the connection to the connection pool. + * All operations after a Close will return with [ErrConnDone]. + * Close is safe to call concurrently with other operations and will + * block until all other operations finish. It may be useful to first + * cancel any used context and then call close directly after. + */ + close(): void + } + /** + * ColumnType contains the name and type of a column. + */ + interface ColumnType { + } + interface ColumnType { + /** + * Name returns the name or alias of the column. + */ + name(): string + } + interface ColumnType { + /** + * Length returns the column type length for variable length column types such + * as text and binary field types. If the type length is unbounded the value will + * be [math.MaxInt64] (any database limits will still apply). + * If the column type is not variable length, such as an int, or if not supported + * by the driver ok is false. */ - availableBuffer(): string|Array + length(): [number, boolean] } - interface Writer { + interface ColumnType { /** - * Buffered returns the number of bytes that have been written into the current buffer. + * DecimalSize returns the scale and precision of a decimal type. + * If not applicable or if not supported ok is false. */ - buffered(): number + decimalSize(): [number, number, boolean] } - interface Writer { + interface ColumnType { /** - * Write writes the contents of p into the buffer. - * It returns the number of bytes written. - * If nn < len(p), it also returns an error explaining - * why the write is short. + * ScanType returns a Go type suitable for scanning into using [Rows.Scan]. + * If a driver does not support this property ScanType will return + * the type of an empty interface. */ - write(p: string|Array): number + scanType(): any } - interface Writer { + interface ColumnType { /** - * WriteByte writes a single byte. + * Nullable reports whether the column may be null. + * If a driver does not support this property ok will be false. */ - writeByte(c: number): void + nullable(): [boolean, boolean] } - interface Writer { + interface ColumnType { /** - * WriteRune writes a single Unicode code point, returning - * the number of bytes written and any error. + * DatabaseTypeName returns the database system name of the column type. 
If an empty + * string is returned, then the driver type name is not supported. + * Consult your driver documentation for a list of driver data types. [ColumnType.Length] specifiers + * are not included. + * Common type names include "VARCHAR", "TEXT", "NVARCHAR", "DECIMAL", "BOOL", + * "INT", and "BIGINT". */ - writeRune(r: number): number + databaseTypeName(): string } - interface Writer { + /** + * Row is the result of calling [DB.QueryRow] to select a single row. + */ + interface Row { + } + interface Row { /** - * WriteString writes a string. - * It returns the number of bytes written. - * If the count is less than len(s), it also returns an error explaining - * why the write is short. + * Scan copies the columns from the matched row into the values + * pointed at by dest. See the documentation on [Rows.Scan] for details. + * If more than one row matches the query, + * Scan uses the first row and discards the rest. If no row matches + * the query, Scan returns [ErrNoRows]. */ - writeString(s: string): number + scan(...dest: any[]): void } - interface Writer { + interface Row { /** - * ReadFrom implements [io.ReaderFrom]. If the underlying writer - * supports the ReadFrom method, this calls the underlying ReadFrom. - * If there is buffered data and an underlying ReadFrom, this fills - * the buffer and writes it before calling ReadFrom. + * Err provides a way for wrapping packages to check for + * query errors without calling [Row.Scan]. + * Err returns the error, if any, that was encountered while running the query. + * If this error is not nil, this error will also be returned from [Row.Scan]. */ - readFrom(r: io.Reader): number + err(): void } } @@ -21980,6 +22624,140 @@ namespace http { */ writeSubset(w: io.Writer, exclude: _TygojaDict): void } + /** + * Protocols is a set of HTTP protocols. + * The zero value is an empty set of protocols. + * + * The supported protocols are: + * + * ``` + * - HTTP1 is the HTTP/1.0 and HTTP/1.1 protocols. + * HTTP1 is supported on both unsecured TCP and secured TLS connections. + * + * - HTTP2 is the HTTP/2 protcol over a TLS connection. + * + * - UnencryptedHTTP2 is the HTTP/2 protocol over an unsecured TCP connection. + * ``` + */ + interface Protocols { + } + interface Protocols { + /** + * HTTP1 reports whether p includes HTTP/1. + */ + http1(): boolean + } + interface Protocols { + /** + * SetHTTP1 adds or removes HTTP/1 from p. + */ + setHTTP1(ok: boolean): void + } + interface Protocols { + /** + * HTTP2 reports whether p includes HTTP/2. + */ + http2(): boolean + } + interface Protocols { + /** + * SetHTTP2 adds or removes HTTP/2 from p. + */ + setHTTP2(ok: boolean): void + } + interface Protocols { + /** + * UnencryptedHTTP2 reports whether p includes unencrypted HTTP/2. + */ + unencryptedHTTP2(): boolean + } + interface Protocols { + /** + * SetUnencryptedHTTP2 adds or removes unencrypted HTTP/2 from p. + */ + setUnencryptedHTTP2(ok: boolean): void + } + interface Protocols { + string(): string + } + /** + * HTTP2Config defines HTTP/2 configuration parameters common to + * both [Transport] and [Server]. + */ + interface HTTP2Config { + /** + * MaxConcurrentStreams optionally specifies the number of + * concurrent streams that a peer may have open at a time. + * If zero, MaxConcurrentStreams defaults to at least 100. + */ + maxConcurrentStreams: number + /** + * MaxDecoderHeaderTableSize optionally specifies an upper limit for the + * size of the header compression table used for decoding headers sent + * by the peer. 
+ * A valid value is less than 4MiB. + * If zero or invalid, a default value is used. + */ + maxDecoderHeaderTableSize: number + /** + * MaxEncoderHeaderTableSize optionally specifies an upper limit for the + * header compression table used for sending headers to the peer. + * A valid value is less than 4MiB. + * If zero or invalid, a default value is used. + */ + maxEncoderHeaderTableSize: number + /** + * MaxReadFrameSize optionally specifies the largest frame + * this endpoint is willing to read. + * A valid value is between 16KiB and 16MiB, inclusive. + * If zero or invalid, a default value is used. + */ + maxReadFrameSize: number + /** + * MaxReceiveBufferPerConnection is the maximum size of the + * flow control window for data received on a connection. + * A valid value is at least 64KiB and less than 4MiB. + * If invalid, a default value is used. + */ + maxReceiveBufferPerConnection: number + /** + * MaxReceiveBufferPerStream is the maximum size of + * the flow control window for data received on a stream (request). + * A valid value is less than 4MiB. + * If zero or invalid, a default value is used. + */ + maxReceiveBufferPerStream: number + /** + * SendPingTimeout is the timeout after which a health check using a ping + * frame will be carried out if no frame is received on a connection. + * If zero, no health check is performed. + */ + sendPingTimeout: time.Duration + /** + * PingTimeout is the timeout after which a connection will be closed + * if a response to a ping is not received. + * If zero, a default of 15 seconds is used. + */ + pingTimeout: time.Duration + /** + * WriteByteTimeout is the timeout after which a connection will be + * closed if no data can be written to it. The timeout begins when data is + * available to write, and is extended whenever any bytes are written. + */ + writeByteTimeout: time.Duration + /** + * PermitProhibitedCipherSuites, if true, permits the use of + * cipher suites prohibited by the HTTP/2 spec. + */ + permitProhibitedCipherSuites: boolean + /** + * CountError, if non-nil, is called on HTTP/2 errors. + * It is intended to increment a metric for monitoring. + * The errType contains only lowercase letters, digits, and underscores + * (a-z, 0-9, _). + */ + countError: (errType: string) => void + } // @ts-ignore import urlpkg = url /** @@ -22102,142 +22880,43 @@ namespace http { location(): (url.URL) } interface Response { - /** - * ProtoAtLeast reports whether the HTTP protocol used - * in the response is at least major.minor. - */ - protoAtLeast(major: number, minor: number): boolean - } - interface Response { - /** - * Write writes r to w in the HTTP/1.x server response format, - * including the status line, headers, body, and optional trailer. - * - * This method consults the following fields of the response r: - * - * ``` - * StatusCode - * ProtoMajor - * ProtoMinor - * Request.Method - * TransferEncoding - * Trailer - * Body - * ContentLength - * Header, values for non-canonical keys will have unpredictable behavior - * ``` - * - * The Response Body is closed after it is sent. - */ - write(w: io.Writer): void - } - /** - * A ConnState represents the state of a client connection to a server. - * It's used by the optional [Server.ConnState] hook. - */ - interface ConnState extends Number{} - interface ConnState { - string(): string - } -} - -/** - * Package oauth2 provides support for making - * OAuth2 authorized and authenticated HTTP requests, - * as specified in RFC 6749. - * It can additionally grant authorization with Bearer JWT. 
- */ -namespace oauth2 { - /** - * An AuthCodeOption is passed to Config.AuthCodeURL. - */ - interface AuthCodeOption { - [key:string]: any; - } - /** - * Token represents the credentials used to authorize - * the requests to access protected resources on the OAuth 2.0 - * provider's backend. - * - * Most users of this package should not access fields of Token - * directly. They're exported mostly for use by related packages - * implementing derivative OAuth2 flows. - */ - interface Token { - /** - * AccessToken is the token that authorizes and authenticates - * the requests. - */ - accessToken: string - /** - * TokenType is the type of token. - * The Type method returns either this or "Bearer", the default. - */ - tokenType: string - /** - * RefreshToken is a token that's used by the application - * (as opposed to the user) to refresh the access token - * if it expires. - */ - refreshToken: string - /** - * Expiry is the optional expiration time of the access token. - * - * If zero, [TokenSource] implementations will reuse the same - * token forever and RefreshToken or equivalent - * mechanisms for that TokenSource will not be used. - */ - expiry: time.Time - /** - * ExpiresIn is the OAuth2 wire format "expires_in" field, - * which specifies how many seconds later the token expires, - * relative to an unknown time base approximately around "now". - * It is the application's responsibility to populate - * `Expiry` from `ExpiresIn` when required. - */ - expiresIn: number - } - interface Token { - /** - * Type returns t.TokenType if non-empty, else "Bearer". - */ - type(): string - } - interface Token { - /** - * SetAuthHeader sets the Authorization header to r using the access - * token in t. - * - * This method is unnecessary when using [Transport] or an HTTP Client - * returned by this package. - */ - setAuthHeader(r: http.Request): void - } - interface Token { - /** - * WithExtra returns a new [Token] that's a clone of t, but using the - * provided raw extra map. This is only intended for use by packages - * implementing derivative OAuth2 flows. - */ - withExtra(extra: any): (Token) - } - interface Token { - /** - * Extra returns an extra field. - * Extra fields are key-value pairs returned by the server as a - * part of the token retrieval response. + /** + * ProtoAtLeast reports whether the HTTP protocol used + * in the response is at least major.minor. */ - extra(key: string): any + protoAtLeast(major: number, minor: number): boolean } - interface Token { + interface Response { /** - * Valid reports whether t is non-nil, has an AccessToken, and is not expired. + * Write writes r to w in the HTTP/1.x server response format, + * including the status line, headers, body, and optional trailer. + * + * This method consults the following fields of the response r: + * + * ``` + * StatusCode + * ProtoMajor + * ProtoMinor + * Request.Method + * TransferEncoding + * Trailer + * Body + * ContentLength + * Header, values for non-canonical keys will have unpredictable behavior + * ``` + * + * The Response Body is closed after it is sent. */ - valid(): boolean + write(w: io.Writer): void + } + /** + * A ConnState represents the state of a client connection to a server. + * It's used by the optional [Server.ConnState] hook. 
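+     * The named states in Go's net/http are StateNew, StateActive, StateIdle,
+     * StateHijacked and StateClosed; see the upstream net/http documentation
+     * for their exact semantics.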
+ */ + interface ConnState extends Number{} + interface ConnState { + string(): string } -} - -namespace store { } namespace jwt { @@ -22245,8 +22924,8 @@ namespace jwt { * NumericDate represents a JSON numeric date value, as referenced at * https://datatracker.ietf.org/doc/html/rfc7519#section-2. */ - type _soCkAUH = time.Time - interface NumericDate extends _soCkAUH { + type _sjrbocb = time.Time + interface NumericDate extends _sjrbocb { } interface NumericDate { /** @@ -22278,216 +22957,326 @@ namespace jwt { } } -namespace subscriptions { +namespace hook { + /** + * wrapped local Hook embedded struct to limit the public API surface. + */ + type _sSNkRNc = Hook + interface mainHook extends _sSNkRNc { + } } -namespace sql { +namespace types { +} + +namespace search { +} + +namespace router { + // @ts-ignore + import validation = ozzo_validation /** - * IsolationLevel is the transaction isolation level used in [TxOptions]. + * RouterGroup represents a collection of routes and other sub groups + * that share common pattern prefix and middlewares. */ - interface IsolationLevel extends Number{} - interface IsolationLevel { + interface RouterGroup { + prefix: string + middlewares: Array<(hook.Handler | undefined)> + } + interface RouterGroup { /** - * String returns the name of the transaction isolation level. + * Group creates and register a new child Group into the current one + * with the specified prefix. + * + * The prefix follows the standard Go net/http ServeMux pattern format ("[HOST]/[PATH]") + * and will be concatenated recursively into the final route path, meaning that + * only the root level group could have HOST as part of the prefix. + * + * Returns the newly created group to allow chaining and registering + * sub-routes and group specific middlewares. */ - string(): string + group(prefix: string): (RouterGroup) } - /** - * DBStats contains database statistics. - */ - interface DBStats { - maxOpenConnections: number // Maximum number of open connections to the database. + interface RouterGroup { /** - * Pool Status + * BindFunc registers one or multiple middleware functions to the current group. + * + * The registered middleware functions are "anonymous" and with default priority, + * aka. executes in the order they were registered. + * + * If you need to specify a named middleware (ex. so that it can be removed) + * or middleware with custom exec prirority, use [RouterGroup.Bind] method. */ - openConnections: number // The number of established connections both in use and idle. - inUse: number // The number of connections currently in use. - idle: number // The number of idle connections. + bindFunc(...middlewareFuncs: ((e: T) => void)[]): (RouterGroup) + } + interface RouterGroup { /** - * Counters + * Bind registers one or multiple middleware handlers to the current group. */ - waitCount: number // The total number of connections waited for. - waitDuration: time.Duration // The total time blocked waiting for a new connection. - maxIdleClosed: number // The total number of connections closed due to SetMaxIdleConns. - maxIdleTimeClosed: number // The total number of connections closed due to SetConnMaxIdleTime. - maxLifetimeClosed: number // The total number of connections closed due to SetConnMaxLifetime. + bind(...middlewares: (hook.Handler | undefined)[]): (RouterGroup) } - /** - * Conn represents a single database connection rather than a pool of database - * connections. 
Prefer running queries from [DB] unless there is a specific - * need for a continuous single database connection. - * - * A Conn must call [Conn.Close] to return the connection to the database pool - * and may do so concurrently with a running query. - * - * After a call to [Conn.Close], all operations on the - * connection fail with [ErrConnDone]. - */ - interface Conn { + interface RouterGroup { + /** + * Unbind removes one or more middlewares with the specified id(s) + * from the current group and its children (if any). + * + * Anonymous middlewares are not removable, aka. this method does nothing + * if the middleware id is an empty string. + */ + unbind(...middlewareIds: string[]): (RouterGroup) } - interface Conn { + interface RouterGroup { /** - * PingContext verifies the connection to the database is still alive. + * Route registers a single route into the current group. + * + * Note that the final route path will be the concatenation of all parent groups prefixes + the route path. + * The path follows the standard Go net/http ServeMux format ("[HOST]/[PATH]"), + * meaning that only a top level group route could have HOST as part of the prefix. + * + * Returns the newly created route to allow attaching route-only middlewares. */ - pingContext(ctx: context.Context): void + route(method: string, path: string, action: (e: T) => void): (Route) } - interface Conn { + interface RouterGroup { /** - * ExecContext executes a query without returning any rows. - * The args are for any placeholder parameters in the query. + * Any is a shorthand for [RouterGroup.AddRoute] with "" as route method (aka. matches any method). */ - execContext(ctx: context.Context, query: string, ...args: any[]): Result + any(path: string, action: (e: T) => void): (Route) } - interface Conn { + interface RouterGroup { /** - * QueryContext executes a query that returns rows, typically a SELECT. - * The args are for any placeholder parameters in the query. + * GET is a shorthand for [RouterGroup.AddRoute] with GET as route method. */ - queryContext(ctx: context.Context, query: string, ...args: any[]): (Rows) + get(path: string, action: (e: T) => void): (Route) } - interface Conn { + interface RouterGroup { /** - * QueryRowContext executes a query that is expected to return at most one row. - * QueryRowContext always returns a non-nil value. Errors are deferred until - * the [*Row.Scan] method is called. - * If the query selects no rows, the [*Row.Scan] will return [ErrNoRows]. - * Otherwise, the [*Row.Scan] scans the first selected row and discards - * the rest. + * SEARCH is a shorthand for [RouterGroup.AddRoute] with SEARCH as route method. */ - queryRowContext(ctx: context.Context, query: string, ...args: any[]): (Row) + search(path: string, action: (e: T) => void): (Route) } - interface Conn { + interface RouterGroup { /** - * PrepareContext creates a prepared statement for later queries or executions. - * Multiple queries or executions may be run concurrently from the - * returned statement. - * The caller must call the statement's [*Stmt.Close] method - * when the statement is no longer needed. + * POST is a shorthand for [RouterGroup.AddRoute] with POST as route method. + */ + post(path: string, action: (e: T) => void): (Route) + } + interface RouterGroup { + /** + * DELETE is a shorthand for [RouterGroup.AddRoute] with DELETE as route method. 
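+     *
+     * Illustrative sketch (not part of the generated declarations), assuming a
+     * group `g` previously created via [RouterGroup.Group] and an arbitrary path:
+     *
+     * ```
+     * g.delete("/items/{id}", (e) => {
+     *     // handle the DELETE request for the matched item id here
+     * })
+     * ```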
+ */ + delete(path: string, action: (e: T) => void): (Route) + } + interface RouterGroup { + /** + * PATCH is a shorthand for [RouterGroup.AddRoute] with PATCH as route method. + */ + patch(path: string, action: (e: T) => void): (Route) + } + interface RouterGroup { + /** + * PUT is a shorthand for [RouterGroup.AddRoute] with PUT as route method. + */ + put(path: string, action: (e: T) => void): (Route) + } + interface RouterGroup { + /** + * HEAD is a shorthand for [RouterGroup.AddRoute] with HEAD as route method. + */ + head(path: string, action: (e: T) => void): (Route) + } + interface RouterGroup { + /** + * OPTIONS is a shorthand for [RouterGroup.AddRoute] with OPTIONS as route method. + */ + options(path: string, action: (e: T) => void): (Route) + } + interface RouterGroup { + /** + * HasRoute checks whether the specified route pattern (method + path) + * is registered in the current group or its children. * - * The provided context is used for the preparation of the statement, not for the - * execution of the statement. + * This could be useful to conditionally register and checks for routes + * in order prevent panic on duplicated routes. + * + * Note that routes with anonymous and named wildcard placeholder are treated as equal, + * aka. "GET /abc/" is considered the same as "GET /abc/{something...}". */ - prepareContext(ctx: context.Context, query: string): (Stmt) + hasRoute(method: string, path: string): boolean } - interface Conn { +} + +namespace slog { + /** + * An Attr is a key-value pair. + */ + interface Attr { + key: string + value: Value + } + interface Attr { /** - * Raw executes f exposing the underlying driver connection for the - * duration of f. The driverConn must not be used outside of f. + * Equal reports whether a and b have equal keys and values. + */ + equal(b: Attr): boolean + } + interface Attr { + string(): string + } + /** + * A Handler handles log records produced by a Logger. + * + * A typical handler may print log records to standard error, + * or write them to a file or database, or perhaps augment them + * with additional attributes and pass them on to another handler. + * + * Any of the Handler's methods may be called concurrently with itself + * or with other methods. It is the responsibility of the Handler to + * manage this concurrency. + * + * Users of the slog package should not invoke Handler methods directly. + * They should use the methods of [Logger] instead. + */ + interface Handler { + [key:string]: any; + /** + * Enabled reports whether the handler handles records at the given level. + * The handler ignores records whose level is lower. + * It is called early, before any arguments are processed, + * to save effort if the log event should be discarded. + * If called from a Logger method, the first argument is the context + * passed to that method, or context.Background() if nil was passed + * or the method does not take a context. + * The context is passed so Enabled can use its values + * to make a decision. + */ + enabled(_arg0: context.Context, _arg1: Level): boolean + /** + * Handle handles the Record. + * It will only be called when Enabled returns true. + * The Context argument is as for Enabled. + * It is present solely to provide Handlers access to the context's values. + * Canceling the context should not affect record processing. + * (Among other things, log messages may be necessary to debug a + * cancellation-related problem.) 
+ * + * Handle methods that produce output should observe the following rules: + * ``` + * - If r.Time is the zero time, ignore the time. + * - If r.PC is zero, ignore it. + * - Attr's values should be resolved. + * - If an Attr's key and value are both the zero value, ignore the Attr. + * This can be tested with attr.Equal(Attr{}). + * - If a group's key is empty, inline the group's Attrs. + * - If a group has no Attrs (even if it has a non-empty key), + * ignore it. + * ``` + */ + handle(_arg0: context.Context, _arg1: Record): void + /** + * WithAttrs returns a new Handler whose attributes consist of + * both the receiver's attributes and the arguments. + * The Handler owns the slice: it may retain, modify or discard it. + */ + withAttrs(attrs: Array): Handler + /** + * WithGroup returns a new Handler with the given group appended to + * the receiver's existing groups. + * The keys of all subsequent attributes, whether added by With or in a + * Record, should be qualified by the sequence of group names. + * + * How this qualification happens is up to the Handler, so long as + * this Handler's attribute keys differ from those of another Handler + * with a different sequence of group names. + * + * A Handler should treat WithGroup as starting a Group of Attrs that ends + * at the end of the log event. That is, + * + * ``` + * logger.WithGroup("s").LogAttrs(ctx, level, msg, slog.Int("a", 1), slog.Int("b", 2)) + * ``` * - * Once f returns and err is not [driver.ErrBadConn], the [Conn] will continue to be usable - * until [Conn.Close] is called. - */ - raw(f: (driverConn: any) => void): void - } - interface Conn { - /** - * BeginTx starts a transaction. + * should behave like * - * The provided context is used until the transaction is committed or rolled back. - * If the context is canceled, the sql package will roll back - * the transaction. [Tx.Commit] will return an error if the context provided to - * BeginTx is canceled. + * ``` + * logger.LogAttrs(ctx, level, msg, slog.Group("s", slog.Int("a", 1), slog.Int("b", 2))) + * ``` * - * The provided [TxOptions] is optional and may be nil if defaults should be used. - * If a non-default isolation level is used that the driver doesn't support, - * an error will be returned. - */ - beginTx(ctx: context.Context, opts: TxOptions): (Tx) - } - interface Conn { - /** - * Close returns the connection to the connection pool. - * All operations after a Close will return with [ErrConnDone]. - * Close is safe to call concurrently with other operations and will - * block until all other operations finish. It may be useful to first - * cancel any used context and then call close directly after. + * If the name is empty, WithGroup returns the receiver. */ - close(): void + withGroup(name: string): Handler } /** - * ColumnType contains the name and type of a column. + * A Level is the importance or severity of a log event. + * The higher the level, the more important or severe the event. */ - interface ColumnType { - } - interface ColumnType { - /** - * Name returns the name or alias of the column. - */ - name(): string - } - interface ColumnType { + interface Level extends Number{} + interface Level { /** - * Length returns the column type length for variable length column types such - * as text and binary field types. If the type length is unbounded the value will - * be [math.MaxInt64] (any database limits will still apply). - * If the column type is not variable length, such as an int, or if not supported - * by the driver ok is false. 
+ * String returns a name for the level. + * If the level has a name, then that name + * in uppercase is returned. + * If the level is between named values, then + * an integer is appended to the uppercased name. + * Examples: + * + * ``` + * LevelWarn.String() => "WARN" + * (LevelInfo+2).String() => "INFO+2" + * ``` */ - length(): [number, boolean] + string(): string } - interface ColumnType { + interface Level { /** - * DecimalSize returns the scale and precision of a decimal type. - * If not applicable or if not supported ok is false. + * MarshalJSON implements [encoding/json.Marshaler] + * by quoting the output of [Level.String]. */ - decimalSize(): [number, number, boolean] + marshalJSON(): string|Array } - interface ColumnType { + interface Level { /** - * ScanType returns a Go type suitable for scanning into using [Rows.Scan]. - * If a driver does not support this property ScanType will return - * the type of an empty interface. + * UnmarshalJSON implements [encoding/json.Unmarshaler] + * It accepts any string produced by [Level.MarshalJSON], + * ignoring case. + * It also accepts numeric offsets that would result in a different string on + * output. For example, "Error-8" would marshal as "INFO". */ - scanType(): any + unmarshalJSON(data: string|Array): void } - interface ColumnType { + interface Level { /** - * Nullable reports whether the column may be null. - * If a driver does not support this property ok will be false. + * AppendText implements [encoding.TextAppender] + * by calling [Level.String]. */ - nullable(): [boolean, boolean] + appendText(b: string|Array): string|Array } - interface ColumnType { + interface Level { /** - * DatabaseTypeName returns the database system name of the column type. If an empty - * string is returned, then the driver type name is not supported. - * Consult your driver documentation for a list of driver data types. [ColumnType.Length] specifiers - * are not included. - * Common type names include "VARCHAR", "TEXT", "NVARCHAR", "DECIMAL", "BOOL", - * "INT", and "BIGINT". + * MarshalText implements [encoding.TextMarshaler] + * by calling [Level.AppendText]. */ - databaseTypeName(): string - } - /** - * Row is the result of calling [DB.QueryRow] to select a single row. - */ - interface Row { + marshalText(): string|Array } - interface Row { + interface Level { /** - * Scan copies the columns from the matched row into the values - * pointed at by dest. See the documentation on [Rows.Scan] for details. - * If more than one row matches the query, - * Scan uses the first row and discards the rest. If no row matches - * the query, Scan returns [ErrNoRows]. + * UnmarshalText implements [encoding.TextUnmarshaler]. + * It accepts any string produced by [Level.MarshalText], + * ignoring case. + * It also accepts numeric offsets that would result in a different string on + * output. For example, "Error-8" would marshal as "INFO". */ - scan(...dest: any[]): void + unmarshalText(data: string|Array): void } - interface Row { + interface Level { /** - * Err provides a way for wrapping packages to check for - * query errors without calling [Row.Scan]. - * Err returns the error, if any, that was encountered while running the query. - * If this error is not nil, this error will also be returned from [Row.Scan]. + * Level returns the receiver. + * It implements [Leveler]. 
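+     * (A Level value can therefore be used directly wherever a [Leveler]
+     * is expected.)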
*/ - err(): void + level(): Level } -} - -namespace types { -} - -namespace search { + // @ts-ignore + import loginternal = internal } namespace cobra { @@ -22527,6 +23316,14 @@ namespace cobra { * HiddenDefaultCmd makes the default 'completion' command hidden */ hiddenDefaultCmd: boolean + /** + * DefaultShellCompDirective sets the ShellCompDirective that is returned + * if no special directive can be determined + */ + defaultShellCompDirective?: ShellCompDirective + } + interface CompletionOptions { + setDefaultShellCompDirective(directive: ShellCompDirective): void } /** * Completion is a string that can be used for completions @@ -22548,254 +23345,135 @@ namespace cobra { interface CompletionFunc {(cmd: Command, args: Array, toComplete: string): [Array, ShellCompDirective] } } -namespace cron { - /** - * Job defines a single registered cron job. - */ - interface Job { - } - interface Job { - /** - * Id returns the cron job id. - */ - id(): string - } - interface Job { - /** - * Expression returns the plain cron job schedule expression. - */ - expression(): string - } - interface Job { - /** - * Run runs the cron job function. - */ - run(): void - } - interface Job { - /** - * MarshalJSON implements [json.Marshaler] and export the current - * jobs data into valid JSON. - */ - marshalJSON(): string|Array - } -} - -namespace hook { - /** - * wrapped local Hook embedded struct to limit the public API surface. - */ - type _sAnQlNo = Hook - interface mainHook extends _sAnQlNo { - } -} - -namespace slog { - /** - * An Attr is a key-value pair. - */ - interface Attr { - key: string - value: Value - } - interface Attr { - /** - * Equal reports whether a and b have equal keys and values. - */ - equal(b: Attr): boolean - } - interface Attr { - string(): string - } +/** + * Package oauth2 provides support for making + * OAuth2 authorized and authenticated HTTP requests, + * as specified in RFC 6749. + * It can additionally grant authorization with Bearer JWT. + */ +namespace oauth2 { /** - * A Handler handles log records produced by a Logger. - * - * A typical handler may print log records to standard error, - * or write them to a file or database, or perhaps augment them - * with additional attributes and pass them on to another handler. - * - * Any of the Handler's methods may be called concurrently with itself - * or with other methods. It is the responsibility of the Handler to - * manage this concurrency. - * - * Users of the slog package should not invoke Handler methods directly. - * They should use the methods of [Logger] instead. + * An AuthCodeOption is passed to Config.AuthCodeURL. */ - interface Handler { + interface AuthCodeOption { [key:string]: any; - /** - * Enabled reports whether the handler handles records at the given level. - * The handler ignores records whose level is lower. - * It is called early, before any arguments are processed, - * to save effort if the log event should be discarded. - * If called from a Logger method, the first argument is the context - * passed to that method, or context.Background() if nil was passed - * or the method does not take a context. - * The context is passed so Enabled can use its values - * to make a decision. - */ - enabled(_arg0: context.Context, _arg1: Level): boolean - /** - * Handle handles the Record. - * It will only be called when Enabled returns true. - * The Context argument is as for Enabled. - * It is present solely to provide Handlers access to the context's values. 
- * Canceling the context should not affect record processing. - * (Among other things, log messages may be necessary to debug a - * cancellation-related problem.) - * - * Handle methods that produce output should observe the following rules: - * ``` - * - If r.Time is the zero time, ignore the time. - * - If r.PC is zero, ignore it. - * - Attr's values should be resolved. - * - If an Attr's key and value are both the zero value, ignore the Attr. - * This can be tested with attr.Equal(Attr{}). - * - If a group's key is empty, inline the group's Attrs. - * - If a group has no Attrs (even if it has a non-empty key), - * ignore it. - * ``` + } + /** + * Token represents the credentials used to authorize + * the requests to access protected resources on the OAuth 2.0 + * provider's backend. + * + * Most users of this package should not access fields of Token + * directly. They're exported mostly for use by related packages + * implementing derivative OAuth2 flows. + */ + interface Token { + /** + * AccessToken is the token that authorizes and authenticates + * the requests. */ - handle(_arg0: context.Context, _arg1: Record): void + accessToken: string /** - * WithAttrs returns a new Handler whose attributes consist of - * both the receiver's attributes and the arguments. - * The Handler owns the slice: it may retain, modify or discard it. + * TokenType is the type of token. + * The Type method returns either this or "Bearer", the default. */ - withAttrs(attrs: Array): Handler + tokenType: string /** - * WithGroup returns a new Handler with the given group appended to - * the receiver's existing groups. - * The keys of all subsequent attributes, whether added by With or in a - * Record, should be qualified by the sequence of group names. - * - * How this qualification happens is up to the Handler, so long as - * this Handler's attribute keys differ from those of another Handler - * with a different sequence of group names. - * - * A Handler should treat WithGroup as starting a Group of Attrs that ends - * at the end of the log event. That is, - * - * ``` - * logger.WithGroup("s").LogAttrs(ctx, level, msg, slog.Int("a", 1), slog.Int("b", 2)) - * ``` - * - * should behave like - * - * ``` - * logger.LogAttrs(ctx, level, msg, slog.Group("s", slog.Int("a", 1), slog.Int("b", 2))) - * ``` - * - * If the name is empty, WithGroup returns the receiver. + * RefreshToken is a token that's used by the application + * (as opposed to the user) to refresh the access token + * if it expires. */ - withGroup(name: string): Handler - } - /** - * A Level is the importance or severity of a log event. - * The higher the level, the more important or severe the event. - */ - interface Level extends Number{} - interface Level { + refreshToken: string /** - * String returns a name for the level. - * If the level has a name, then that name - * in uppercase is returned. - * If the level is between named values, then - * an integer is appended to the uppercased name. - * Examples: + * Expiry is the optional expiration time of the access token. * - * ``` - * LevelWarn.String() => "WARN" - * (LevelInfo+2).String() => "INFO+2" - * ``` + * If zero, [TokenSource] implementations will reuse the same + * token forever and RefreshToken or equivalent + * mechanisms for that TokenSource will not be used. 
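+     * Note that a zero Expiry together with a non-empty AccessToken makes
+     * [Token.Valid] report true indefinitely.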
*/ - string(): string + expiry: time.Time + /** + * ExpiresIn is the OAuth2 wire format "expires_in" field, + * which specifies how many seconds later the token expires, + * relative to an unknown time base approximately around "now". + * It is the application's responsibility to populate + * `Expiry` from `ExpiresIn` when required. + */ + expiresIn: number } - interface Level { + interface Token { /** - * MarshalJSON implements [encoding/json.Marshaler] - * by quoting the output of [Level.String]. + * Type returns t.TokenType if non-empty, else "Bearer". */ - marshalJSON(): string|Array + type(): string } - interface Level { + interface Token { /** - * UnmarshalJSON implements [encoding/json.Unmarshaler] - * It accepts any string produced by [Level.MarshalJSON], - * ignoring case. - * It also accepts numeric offsets that would result in a different string on - * output. For example, "Error-8" would marshal as "INFO". + * SetAuthHeader sets the Authorization header to r using the access + * token in t. + * + * This method is unnecessary when using [Transport] or an HTTP Client + * returned by this package. */ - unmarshalJSON(data: string|Array): void + setAuthHeader(r: http.Request): void } - interface Level { + interface Token { /** - * MarshalText implements [encoding.TextMarshaler] - * by calling [Level.String]. + * WithExtra returns a new [Token] that's a clone of t, but using the + * provided raw extra map. This is only intended for use by packages + * implementing derivative OAuth2 flows. */ - marshalText(): string|Array + withExtra(extra: any): (Token) } - interface Level { + interface Token { /** - * UnmarshalText implements [encoding.TextUnmarshaler]. - * It accepts any string produced by [Level.MarshalText], - * ignoring case. - * It also accepts numeric offsets that would result in a different string on - * output. For example, "Error-8" would marshal as "INFO". + * Extra returns an extra field. + * Extra fields are key-value pairs returned by the server as a + * part of the token retrieval response. */ - unmarshalText(data: string|Array): void + extra(key: string): any } - interface Level { + interface Token { /** - * Level returns the receiver. - * It implements [Leveler]. + * Valid reports whether t is non-nil, has an AccessToken, and is not expired. */ - level(): Level + valid(): boolean } - // @ts-ignore - import loginternal = internal } -namespace router { - // @ts-ignore - import validation = ozzo_validation - /** - * RouterGroup represents a collection of routes and other sub groups - * that share common pattern prefix and middlewares. - */ - interface RouterGroup { - prefix: string - middlewares: Array<(hook.Handler | undefined)> - } +namespace subscriptions { } -namespace url { +namespace cron { /** - * The Userinfo type is an immutable encapsulation of username and - * password details for a [URL]. An existing Userinfo value is guaranteed - * to have a username set (potentially empty, as allowed by RFC 2396), - * and optionally a password. + * Job defines a single registered cron job. */ - interface Userinfo { + interface Job { } - interface Userinfo { + interface Job { /** - * Username returns the username. + * Id returns the cron job id. */ - username(): string + id(): string } - interface Userinfo { + interface Job { /** - * Password returns the password in case it is set, and whether it is set. + * Expression returns the plain cron job schedule expression. 
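+     * For example, a job registered with the standard 5-field syntax
+     * "0 0 * * *" (or, if the scheduler supports them, a macro such as
+     * "@daily") returns that exact string here.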
*/ - password(): [string, boolean] + expression(): string } - interface Userinfo { + interface Job { /** - * String returns the encoded userinfo information in the standard form - * of "username[:password]". + * Run runs the cron job function. */ - string(): string + run(): void + } + interface Job { + /** + * MarshalJSON implements [json.Marshaler] and export the current + * jobs data into valid JSON. + */ + marshalJSON(): string|Array } } @@ -22838,6 +23516,36 @@ namespace multipart { } } +namespace url { + /** + * The Userinfo type is an immutable encapsulation of username and + * password details for a [URL]. An existing Userinfo value is guaranteed + * to have a username set (potentially empty, as allowed by RFC 2396), + * and optionally a password. + */ + interface Userinfo { + } + interface Userinfo { + /** + * Username returns the username. + */ + username(): string + } + interface Userinfo { + /** + * Password returns the password in case it is set, and whether it is set. + */ + password(): [string, boolean] + } + interface Userinfo { + /** + * String returns the encoded userinfo information in the standard form + * of "username[:password]". + */ + string(): string + } +} + namespace http { /** * SameSite allows a server to define a cookie attribute making it impossible for @@ -22854,17 +23562,45 @@ namespace http { import urlpkg = url } -namespace oauth2 { -} - -namespace cobra { +namespace router { // @ts-ignore - import flag = pflag - /** - * ShellCompDirective is a bit map representing the different behaviors the shell - * can be instructed to have once completions have been provided. - */ - interface ShellCompDirective extends Number{} + import validation = ozzo_validation + interface Route { + action: (e: T) => void + method: string + path: string + middlewares: Array<(hook.Handler | undefined)> + } + interface Route { + /** + * BindFunc registers one or multiple middleware functions to the current route. + * + * The registered middleware functions are "anonymous" and with default priority, + * aka. executes in the order they were registered. + * + * If you need to specify a named middleware (ex. so that it can be removed) + * or middleware with custom exec prirority, use the [Route.Bind] method. + */ + bindFunc(...middlewareFuncs: ((e: T) => void)[]): (Route) + } + interface Route { + /** + * Bind registers one or multiple middleware handlers to the current route. + */ + bind(...middlewares: (hook.Handler | undefined)[]): (Route) + } + interface Route { + /** + * Unbind removes one or more middlewares with the specified id(s) from the current route. + * + * It also adds the removed middleware ids to an exclude list so that they could be skipped from + * the execution chain in case the middleware is registered in a parent group. + * + * Anonymous middlewares are considered non-removable, aka. this method + * does nothing if the middleware id is an empty string. + */ + unbind(...middlewareIds: string[]): (Route) + } } namespace slog { @@ -23039,6 +23775,24 @@ namespace slog { } } +namespace cobra { + // @ts-ignore + import flag = pflag + /** + * ShellCompDirective is a bit map representing the different behaviors the shell + * can be instructed to have once completions have been provided. 
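+     * Individual directives are bit flags and are meant to be combined
+     * with a bitwise OR.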
+ */ + interface ShellCompDirective extends Number{} +} + +namespace oauth2 { +} + +namespace router { + // @ts-ignore + import validation = ozzo_validation +} + namespace slog { // @ts-ignore import loginternal = internal diff --git a/PocketBase/pocketbase.exe b/PocketBase/pocketbase.exe index ca59a46..9b9c507 100644 Binary files a/PocketBase/pocketbase.exe and b/PocketBase/pocketbase.exe differ diff --git a/PocketBaseSharp .Tests/PocketBaseSharp.Tests.csproj b/PocketBaseSharp .Tests/PocketBaseSharp.Tests.csproj index 6276a94..e813575 100644 --- a/PocketBaseSharp .Tests/PocketBaseSharp.Tests.csproj +++ b/PocketBaseSharp .Tests/PocketBaseSharp.Tests.csproj @@ -1,7 +1,7 @@ - + - net9.0 + net10.0 PocketBaseSharp.Tests enable enable @@ -10,11 +10,11 @@ - - + + - - + + all runtime; build; native; contentfiles; analyzers; buildtransitive diff --git a/PocketBaseSharp.sln b/PocketBaseSharp.sln index b2af8e4..572fa7a 100644 --- a/PocketBaseSharp.sln +++ b/PocketBaseSharp.sln @@ -1,7 +1,7 @@  Microsoft Visual Studio Solution File, Format Version 12.00 -# Visual Studio Version 17 -VisualStudioVersion = 17.4.32912.340 +# Visual Studio Version 18 +VisualStudioVersion = 18.0.11205.157 d18.0 MinimumVisualStudioVersion = 10.0.40219.1 Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "Licenses", "Licenses", "{18252485-A231-4ADF-9266-E5F88BC09B96}" ProjectSection(SolutionItems) = preProject diff --git a/PocketBaseSharp/PocketBaseSharp.csproj b/PocketBaseSharp/PocketBaseSharp.csproj index 74dc7e5..d2c9d02 100644 --- a/PocketBaseSharp/PocketBaseSharp.csproj +++ b/PocketBaseSharp/PocketBaseSharp.csproj @@ -17,7 +17,7 @@ snupkg LICENSE.txt True - net9.0 + net10.0 latest diff --git a/README.md b/README.md index 1e40edd..d721679 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,3 @@ -
PBSharp PocketBase @@ -19,7 +18,7 @@ Community-developed, open-source C# SDK for [PocketBase](https://pocketbase.io/) - Real-time subscriptions - Batch operations (create/update/delete) **(NEW)** - File uploads/downloads -- Blazor & .NET 9 compatible **(NEW)** +- Blazor & .NET 10 compatible **(NEW)** - Mudblazor demo Blazor WASM app **(NEW)** - Pocketbase v0.28.4 **(NEW)** @@ -85,7 +84,7 @@ Note: Each CRUD action requires a data type which inherits from the base class ' ## Development ### Requirements -- .NET 9 +- .NET 10 ## Contributing