Datly has been designed as a modern, flexible ORM for rapid development. Datly can operate in managed, autonomous, and custom modes. In managed mode, datly is used as a regular Go ORM: you operate programmatically on Go structs with the datly reader or executor service.
In autonomous mode, datly uses DQL-based rules with a single gateway entry point that handles all incoming requests by matching them against the defined rules.
In custom mode, datly also operates as a single gateway entry point handling all incoming requests, while allowing you to customize the behaviour of the Go struct methods/receivers associated with a rule; this is achieved with a custom data type registry.
In both autonomous and custom modes, datly can be deployed as a standalone app, as a Docker or Kubernetes workload, or on cloud serverless runtimes (Lambda, GCF, Cloud Run).
The reader service allows reading and transforming data from multiple database vendors at once; datly is responsible for assembling the final data view. Datly can dynamically generate SQL queries with the velty template language based on input parameters. In addition, view data can be dynamically adjusted with pagination, field/column projection, criteria selection, or even case formatting, all controlled by the client. On top of that, any individual view's data can be cached or pre-cached at the SQL level, substantially improving service response time and reducing data access cost.
The executor service is used to validate, transform, and modify data in a database programmatically. Post (insert), Put (update), and Patch (insert/update) operations are supported.
The executor service uses the velty template engine to operate on Go structs.
You can use the datly gen command to generate initial DQL with the corresponding Go struct(s) for single- or multi-relation data mutation. Datly uses a transaction to modify data in the database.
For patch/update operations, datly supports input state distinction with a Has marker, allowing user input to be handled effectively, ensuring data integrity, and improving application security. This approach simplifies input validation and tracking of the actual changes supplied by a client.
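The Has-marker pattern can be illustrated in plain Go (a minimal sketch with assumed names, not datly's generated code): each entity carries a companion Has struct recording which fields the client actually supplied, so a patch touches only those fields.

```go
package main

import "fmt"

// ProductHas records which Product fields were present in the client input.
type ProductHas struct {
	Name  bool
	Price bool
}

// Product is a hypothetical entity with a Has marker.
type Product struct {
	Id    int
	Name  string
	Price float64
	Has   *ProductHas
}

// ApplyPatch copies only the fields the client actually set,
// leaving the others untouched.
func ApplyPatch(dst, src *Product) {
	if src.Has == nil {
		return
	}
	if src.Has.Name {
		dst.Name = src.Name
	}
	if src.Has.Price {
		dst.Price = src.Price
	}
}

func main() {
	stored := &Product{Id: 1, Name: "old", Price: 9.99}
	patch := &Product{Name: "new", Has: &ProductHas{Name: true}}
	ApplyPatch(stored, patch)
	fmt.Println(stored.Name, stored.Price) // Price stays untouched
}
```

This is why the marker simplifies change tracking: absence of a field in the request is distinguishable from a zero value.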
[RouteConfig]
import(
... // go struct import
)
#set( $_ = ...) //Parameter declaration
SELECT mainViewAlias.* [EXCEPT COLUMN]
[, secondViewAlias.* ]
[, NviewAlias.* ]
[, DQL configuration function ]
FROM (
SELECT
ID,
...,
other_column
FROM table1
) mainViewAlias,
[
JOIN (
SELECT OTHER_ID,
...,
other_column
FROM table2
) secondViewAlias ON mainViewAlias.ID = secondViewAlias.OTHER_ID [ AND 1=1]
]
Datly can infer parameters from DQL, but explicit parameter declarations are recommended.
Datly parameters define the component contract and data flow. They are converted into input, output, and asynchronous Go structs.
If the destination of a contract is not explicitly defined, the parameter specifies the component input.
Syntax Example:
#set( $_ = $PARAMETER_NAME<InputType[,OptionalOutputCodecType]>(KIND/LOCATION)[.Options]) //
Input type supports Golang syntax, e.g.,
#set( $_ = $AllocationByID<map[int]int>(transient/))
The KIND/LOCATION segment indicates the source of the parameter. The following kinds are supported:
- query: Access HTTP request query string.
- path: Access HTTP request matched URI parameters.
- body: Access HTTP request body derived type.
- const: Access defined constants.
- env: Access defined environment variables.
- param: Access other defined parameters.
- component: Access other Datly components.
- state: Access fields from defined state.
- object: Access composite struct types.
- repeated: Access composite slice types.
- transient: Access transient data.
- output: Access view response, status, performance metrics, view summary.
- Output(): Define output parameter.
- Async(): Define asynchronous parameters such as UserId, UserEmail, JobMatchKey.
- WithTag('tag'): Define a parameter tag.
- WithCodec('name' [,args...]): Define input transformer codec.
- WithPredicate('name', group [,args...]).
- WithHandler('name' [,args...]).
- Value(value): Define a default value.
- Required(): Set the required flag.
- Optional(): Set the optional flag.
- Of('parent'): Define composite parent holder/owner.
- WithStatusCode(code): Define error status code.
- Cacheable(): Set cacheable flag (true by default).
- QuerySelector(viewName): Defines the source view for a parameter selector such as Fields, Criteria, OrderBy, Limit, Offset.
- Scope('scope name'): Define parameter scope; controls state population by scope. Used by handler Stater().Into().
- When: Conditional parameter declaration.
- tag: Defines column-derived go struct field tag.
- cast: Type cast.
- use_connector: Sets connector reference.
- use_cache: Sets cache reference.
- set_limit: Sets default query limit.
- order_by: Sets default order by clause.
- cardinality: Sets view cardinality.
- allow_nulls: Allows nulls in output.
- match_strategy: Sets relation fetch match strategy (options: read_all, read_matched).
See function registry
- RouteConfig is a JSON representation of Route settings, e.g. {"URI":"app1/view1/{Id}"}
- OutputConfig is a JSON representation of Output settings, e.g. {"Style":"Comprehensive"}
- ColumnConfig is a JSON representation of Column settings, e.g. {"DataType":"bool"}
- ViewConfig is a JSON representation of View settings, e.g. {"Cache":{"Ref":"aerospike"}}
Datly uses a specific dialect of SQL to define rules for view(s) and the relations between them.
DSQL is transformed into datly's internal view representation with the following command:
datly translate -c='myDB|driver|dsn|secretURL|secretKey' -s=myRule.sql -d=autogen
where -d persists the rule with the datly config to the specified project location.
Once the datly rules are stored, you can start datly with
datly run -c=myProjectLocation/Datly/config.json
In managed mode you use reader.Service directly, with a provided view and the underlying Go struct.
package mypkg

import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"reflect"
	"time"

	// import paths may vary across datly versions
	"github.com/viant/datly/reader"
	"github.com/viant/datly/view"
)
type Invoice struct {
Id int32 `sqlx:"id"`
CustomerName *string `sqlx:"customer_name"`
InvoiceDate *time.Time `sqlx:"invoice_date"`
DueDate *time.Time `sqlx:"due_date"`
TotalAmount *string `sqlx:"total_amount"`
Items []*Item
}
type Item struct {
Id int32 `sqlx:"id"`
InvoiceId *int64 `sqlx:"invoice_id"`
ProductName *string `sqlx:"product_name"`
Quantity *int64 `sqlx:"quantity"`
Price *string `sqlx:"price"`
Total *string `sqlx:"total"`
}
func ExampleService_ReadDataView() {
aReader := reader.New()
conn := aReader.Resource.AddConnector("dbName", "database/sql driverName", "database/sql dsn")
invoiceView := view.NewView("invoice", "INVOICE",
view.WithConnector(conn),
view.WithCriteria("id"),
view.WithViewType(reflect.TypeOf(&Invoice{})),
view.WithOneToMany("Items", "id",
view.NewReferenceView("", "invoice_id",
view.NewView("items", "invoice_list_item", view.WithConnector(conn)))),
)
aReader.Resource.AddViews(invoiceView)
if err := aReader.Resource.Init(context.Background()); err != nil {
log.Fatal(err)
}
var invoices = make([]*Invoice, 0)
if err := aReader.ReadInto(context.Background(), "invoice", &invoices, reader.WithCriteria("status = ?", 1)); err != nil {
log.Fatal(err)
}
invoicesJSON, _ := json.Marshal(invoices)
fmt.Printf("invoices: %s\n", invoicesJSON)
}
See Reader Service for more details
See e2e testcase for more examples
rule.sql
SELECT
dept.*,
employee.*
FROM DEPARMENT dept
JOIN EMP employee ON dept.ID = employee.DEPT_ID
datly -N=dept -X=rule.sql -C='mydb|mydb_driver|mydb_driver_dsn'
rule.sql
SELECT
dept.*,
employee.*,
organization.*
FROM DEPARMENT dept
JOIN EMP employee ON dept.ID = employee.DEPT_ID
JOIN ORG organization ON organization.ID = dept.ORG_ID AND 1=1
datly -N=dept -X=rule.sql -C='mydb|mydb_driver|mydb_driver_dsn'
rule.sql
SELECT
dept.* EXCEPT ORG_ID,
employee.* EXCEPT DEPT_ID,
organization.*
FROM DEPARMENT dept
JOIN EMP employee ON dept.ID = employee.DEPT_ID
JOIN ORG organization ON organization.ID = dept.ORG_ID AND 1=1
datly -N=dept -X=rule.sql -C='mydb|mydb_driver|mydb_driver_dsn'
rule.sql
SELECT
dept.* EXCEPT ORG_ID,
employee.* EXCEPT DEPT_ID,
organization.*
FROM (SELECT * FROM DEPARMENT t) dept
JOIN (SELECT ID, NAME, DEPT_ID FROM EMP t) employee ON dept.ID = employee.DEPT_ID
JOIN ORG organization ON organization.ID = dept.ORG_ID AND 1=1
datly -N=dept -X=rule.sql -C='mydb|mydb_driver|mydb_driver_dsn'
SELECT
dept.* EXCEPT ORG_ID,
employee.* EXCEPT DEPT_ID,
organization.*
FROM (SELECT * FROM DEPARMENT t) dept
JOIN (SELECT ID, NAME, DEPT_ID FROM EMP t) employee ON dept.ID = employee.DEPT_ID
JOIN ORG organization ON organization.ID = dept.ORG_ID AND 1=1
WHERE 1=1
#if ($Has.Id)
AND ID = $Id
#end
SELECT
dept.* EXCEPT ORG_ID,
employee.* EXCEPT DEPT_ID,
organization.*
FROM (SELECT * FROM DEPARMENT t) dept
JOIN (SELECT ID, NAME, DEPT_ID FROM EMP t) employee ON dept.ID = employee.DEPT_ID
JOIN ORG organization ON organization.ID = dept.ORG_ID AND 1=1
WHERE ID = $Id
SELECT
dept.* EXCEPT ORG_ID,
employee.* EXCEPT DEPT_ID,
organization.*
FROM (SELECT * FROM DEPARMENT t) dept
JOIN (SELECT ID, NAME, DEPT_ID,
(CASE WHEN COLUMN_X = 1 THEN
'x1,x2'
WHEN COLUMN_X = 2 THEN
'x3,x4'
END) AS SLICE /* {"Codec":{"Ref":"AsStrings"}, "DataType": "string"} */
FROM EMP t) employee ON dept.ID = employee.DEPT_ID
JOIN ORG organization ON organization.ID = dept.ORG_ID AND 1=1
WHERE ID = $Id
- AsStrings: converts a comma-separated value into []string
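Conceptually, the AsStrings codec behaves like a comma split (a plain-Go sketch, not the registered codec's actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// asStrings mimics the AsStrings codec: a comma-separated column value
// becomes a []string field on the output struct.
func asStrings(raw string) []string {
	if raw == "" {
		return nil
	}
	return strings.Split(raw, ",")
}

func main() {
	fmt.Println(asStrings("x1,x2")) // [x1 x2]
}
```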
Datly's default data assembly method uses an IN-operator join with the parent view data.
SELECT vendor.*,
products.* EXCEPT VENDOR_ID
FROM (SELECT * FROM VENDOR t ) vendor
JOIN (
SELECT * FROM (
SELECT ID, NAME, VENDOR_ID FROM PRODUCT t
UNION ALL
SELECT ID, NAME, VENDOR_ID FROM PRODUCT_ARCHIVE t
) t
) products WHERE products.VENDOR_ID = vendor.ID
In the scenario above, datly is unable to adjust the product SQL with WHERE products.VENDOR_ID IN(?,..,?) due to its complexity, and would filter product data only after reading all the UNION-ed data. To address this potential data-fetch performance issue, you can use the following expression: $View.ParentJoinOn("AND","VENDOR_ID")
SELECT vendor.*,
products.* EXCEPT VENDOR_ID
FROM (SELECT * FROM VENDOR t ) vendor
JOIN (
SELECT * FROM (
SELECT ID, NAME, VENDOR_ID FROM PRODUCT t WHERE 1 = 1 $View.ParentJoinOn("AND","VENDOR_ID")
UNION ALL
SELECT ID, NAME, VENDOR_ID FROM PRODUCT_ARCHIVE t
WHERE 1 = 1 $View.ParentJoinOn("AND","VENDOR_ID")
) t
) products WHERE products.VENDOR_ID = vendor.ID
/* {"URI":"dept/"} */
SELECT
dept.* EXCEPT ORG_ID,
employee.* EXCEPT DEPT_ID
FROM (SELECT * FROM DEPARMENT t) dept
JOIN (SELECT ID, NAME, DEPT_ID FROM EMP t) employee
ON dept.ID = employee.DEPT_ID
/* {"URI":"dept/",
"Cache":{
"Name": "aerospike",
"Provider": "aerospike://127.0.0.1:3000/test",
"Location": "${view.Name}",
"TimeToLiveMs": 360000
}
} */
SELECT
dept.* EXCEPT ORG_ID,
employee.* EXCEPT DEPT_ID
FROM (SELECT * FROM DEPARMENT t) dept /* {"Cache":{"Ref":"aerospike"}} */
JOIN (SELECT ID, NAME, DEPT_ID FROM EMP t) employee /* {"Cache":{"Ref":"aerospike"}} */
ON dept.ID = employee.DEPT_ID
SELECT
dept.* EXCEPT ORG_ID,
employee.* EXCEPT DEPT_ID
FROM (SELECT * FROM DEPARMENT t) dept /* {"Selector":{"Limit": 40, "Constraints":{"Criteria": false}}} */
JOIN (SELECT ID, NAME, DEPT_ID FROM EMP t) employee /* {"Selector":{"Limit": 80, "Constraints":{"Criteria": false, "Limit": false, "Offset": false}}} */
ON dept.ID = employee.DEPT_ID
The executor service is used to validate, transform, and modify data in a database programmatically. Post (insert), Put (update), and Patch (insert/update) operations are supported.
Executor DSQL uses the following structure:
/* ROUTE OPTION */
import ...
#set( $_ = ...) //input parameter initialization
DML | velocity expr (#set|#if|#foreach)
Where
- View Parameter Hints define SQL-based data view parameters
#set($_ = $Records /*
SELECT * FROM MY_TABLE /* {"Selector":{}} */ WHERE ID = $Entity.ID
*/)
#set($_ = $PARAM_NAME<PARAM_TYPE>(PARAM_KIND/SOURCE) /*
optional SQL hint
*/)
To generate the initial executor DQL, use the datly gen command with a reader DQL defining one or more views with the corresponding relations and additional input hints.
All DML operations are executed in a single transaction; any error, whether raised by the database or programmatically ($logger.Fatalf), causes a transaction rollback.
An executor DQL rule can be generated from a regular reader DQL:
- simple object ({})
SELECT myTable.* /* { "Cardinality": "One" } */
FROM (SELECT * FROM MY_TABLE) myTable
- simple namespaced object i.e. {"Data": {}}
SELECT myTable.* /* { "Cardinality": "One", "Field":"Data" } */
FROM (SELECT * FROM MY_TABLE) myTable
- array objects ([])
SELECT myTable.* /* { "Cardinality": "Many" } */
FROM (SELECT * FROM MY_TABLE) myTable
- array namespace objects {"Data": [{}]}
SELECT myTable.* /* { "Cardinality": "Many" , "Field":"Data"} */
FROM (SELECT * FROM MY_TABLE) myTable
- nested relation
SELECT
dept.* /* { "Cardinality": "One", "Field":"Data" } */,
employee.*,
organization.*
FROM (SELECT * FROM DEPARMENT) dept
JOIN (SELECT * FROM EMP) employee ON dept.ID = employee.DEPT_ID
JOIN (SELECT * FROM ORG) organization ON organization.ID = dept.ORG_ID AND 1=1
datly gen -h
datly gen -o=patch|post|put|delete -s=myRule.sql -c='myDB|driver|dsn[|secretURL|secretKey]' -p=$myProjectLocation
As a result, the following files are generated:
- dql/.sql - initial logic for patch|post|put|delete operations
- dql/Post.json - example of JSON for testing a service
- pkg/.go - initial go struct(s)
Generated Go struct(s) can be modified with additional tags.
Datly uses 'validate' and 'sqlx' tags to control input validation.
Datly generates a basic template with the following parameter expressions:
- #set($_ = $Entity<*Entity>(body/)) for simple object ({})
- #set($_ = $Entities<[]*Entity>(body/)) for simple array ([])
- #set($_ = $Entity<*Entity>(body/data)) for namespaced object ({"data":{}})
- #set($_ = $Entities<[]*Entity>(body/data)) for namespaced array ({"data":[]})
After adjusting the logic in the executor DQL, run:
datly translate -c='myDB|driver|dsn' -s=executor_dql_rule.sql -p=$myProjectLocation
For "complex" validation logic, it is recommended to use datly in custom mode, where all custom logic is implemented and unit-tested in pure Go, while datly intermediates in data retrieval and the actual data modification.
- $logger.FatalF
- $logger.LogF
- $logger.PrintF
- $fmt.Sprintf
- $sequencer.Allocate(tableName string, dest interface{}, selector string)
- $sqlx.Validate
Universal message bus: provides the ability to send/publish an asynchronous message to a message bus (e.g. SQS/SNS, Pub/Sub, Kafka).
- $messageBus.Message: creates a message
- $messageBus.Push: pushes a message
- #set($msg = $messageBus.Message("aws/topic/us-west-1/mytopic", $data))
#set($confirmation = $messageBus.Push($msg))
$logger.Printf("confirmation:%v", $confirmation.MessageID)
- $sqlx.Validate - validates a struct with sqlx tags
- $validator.Validate - validates struct with validate tag
- $http.Do
- $http.Get
- $response.Failf
- $response.FailfWithStatusCode
- $response.StatusCode
- $differ.Diff
TODO add all supported and update/add example
Any database constraint validation can be customized with the sqlx validator service:
#set($validation = $sqlx.Validate($Entity))
#if($validation.Failed)
$logger.Fatal($validation)
#end
with service
$sequencer.Allocate("MY_TABLE", $Entity, "Id")
#if($Unsafe.Entity)
$sql.Insert($Entity, "MY_TABLE");
#end
with DML
$sequencer.Allocate("MyTable", $Entity, "Id")
INSERT INTO MY_TABLE(ID, NAME) VALUES($Entity.Id, $Entity.Name)
$sequencer.Allocate("MyTable", $Entity, "Id")
#if($Unsafe.Entity)
$sql.Update($Entity, "MyTable");
#end
with DML
UPDATE MY_TABLE SET
NAME = $Entity.Name
#if($Entity.Has.Description)
, DESCRIPTION = $Entity.Description
#end
WHERE ID = $Entity.Id
/* {"Method":"PATCH","ResponseBody":{"From":"Product"}} */
import (
"product.Product"
)
#set($_ = $Jwt<string>(Header/Authorization).WithCodec(JwtClaim).WithStatusCode(401))
#set($_ = $Campaign<*[]Product>(body/Entity))
/* {"Method":"PATCH","ResponseBody":{"From":"Product"}} */
import (
"./product.Product"
)
#set($_ = $Jwt<string>(Header/Authorization).WithCodec(JwtClaim).WithStatusCode(401))
#set($_ = $Campaign<*[]Product>(body/Entity))
#set($validation = $New("*Validation"))
#set($hasError = $Product.Init($validation))
....
#set($hasError = $Product.Validate($validation))
By default, all parameters are required; adding a '?' character before the SELECT keyword makes a parameter optional, and !ErrorCode marks it as required with a specific error code.
Note that all examples below use the '#set( $_ = ...)' syntax, which defines datly parameters; all these parameters are resolved before the template code runs.
Data view parameters use regular reader DSQL and can return one or more records.
#set($_ = $Records /* !401
SELECT * FROM MY_TABLE WHERE ID = $Entity.ID
*/)
#set($_ = $Records /* ?
SELECT * FROM MY_TABLE WHERE ID = $Entity.ID
*/)
In addition, records can be fetched into an imported struct:
import (
"./product.Product"
)
...
#set($_ = $Records<[]*Product>(data_view/Product) /*
? SELECT * FROM Product WHERE STATUS = $status
*/)
A datly parameter can also be of the 'param' kind, to transform any other existing parameter with structQL.
import (
"./product.Product"
)
#set($_ = $Products<*[]Product>(body/Data))
#set($_ = $ProductsIds<?>(param/Products) /* ?
SELECT ARRAY_AGG(Id) AS Values FROM `/`
*/)
#set($_ = $prevProducts<*[]Product>()/*
SELECT * FROM Products WHERE $criteria.In("ID", $ProductsIds.Values)
*/)
In the example above, the first step defines a collection of products from the POST body Data field. The second parameter extracts all product IDs with structQL. In the final step, prevProducts fetches all products whose ID is listed in the ProductsIds parameter. Note that the $criteria.In function automatically generates an IN statement when the parameter length is greater than zero; otherwise it returns false, ensuring correct SQL generation and the expected behaviour.
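The guard that $criteria.In provides can be sketched in plain Go (an illustration, not datly's implementation): for an empty id list a neutral predicate is emitted instead of the invalid SQL `IN ()`.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// inClause builds "col IN (...)" only for a non-empty id list; for an empty
// list it falls back to a predicate that matches nothing.
func inClause(column string, ids []int) string {
	if len(ids) == 0 {
		return "1 = 0"
	}
	parts := make([]string, len(ids))
	for i, id := range ids {
		parts[i] = strconv.Itoa(id)
	}
	return fmt.Sprintf("%s IN (%s)", column, strings.Join(parts, ", "))
}

func main() {
	fmt.Println(inClause("ID", []int{1, 2, 3})) // ID IN (1, 2, 3)
	fmt.Println(inClause("ID", nil))            // 1 = 0
}
```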
Any Go collection can be indexed with the IndexBy DQL method:
#set($_ = $Records /*
SELECT * FROM MY_TABLE
*/)
#set($ById = $Records.IndexBy("Id"))
#foreach($rec in $Unsafe.$Entities)
#if($ById.HasKey($rec.Id) == false)
$logger.Fatal("not found record with %v id", $rec.Id)
#end
#set($prev = $ById[$rec.Id])
#end
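In plain Go, IndexBy corresponds to building a lookup map keyed by the given field (a sketch; the record type and field names are assumptions):

```go
package main

import "fmt"

type Record struct {
	Id   int
	Name string
}

// indexBy mirrors $Records.IndexBy("Id"): records become addressable by Id
// in O(1), which is what makes the HasKey check above cheap.
func indexBy(records []Record) map[int]Record {
	byId := make(map[int]Record, len(records))
	for _, r := range records {
		byId[r.Id] = r
	}
	return byId
}

func main() {
	byId := indexBy([]Record{{1, "a"}, {2, "b"}})
	prev, ok := byId[2]
	fmt.Println(ok, prev.Name) // true b
}
```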
#set($_ = $Jwt<string>(Header/Authorization).WithCodec(JwtClaim).WithStatusCode(401))
#set($_ = $Authorization /*
!401 SELECT Authorized /* {"DataType":"bool"} */
FROM (SELECT IS_VENDOR_AUTHORIZED($Jwt.UserID, $vendorID) AS Authorized) t
WHERE Authorized
*/)
#set($_ = $Records /* {"Required":false}
#set($Ids = $Entities.QueryFirst("SELECT ARRAY_AGG(Id) AS Vals FROM `/`"))
SELECT * FROM MY_TABLE /* {"Selector":{}} */
WHERE #if($Ids.Vals.Length() > 0 ) ID IN ( $Ids.Vals ) #else 1 = 0 #end */
)
#set($ById = $Records.IndexBy("Id"))
#foreach($rec in $Unsafe.$Entities)
#if($ById.HasKey($rec.Id) == false)
$logger.Fatal("not found record with %v id", $rec.Id)
#end
#set($prev = $ById[$rec.Id])
#set($recDiff = $differ.Diff($prev, $rec))
#if($recDiff.Changed())
INSERT INTO DIFF_JN(DIFF) VALUES ($recDiff.String());
#end
#end
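The $differ.Diff step above can be approximated in plain Go (a toy field-level diff over maps, not the actual differ service):

```go
package main

import "fmt"

// diffFields reports fields whose values changed between prev and cur,
// mapping each changed field to its [old, new] pair.
func diffFields(prev, cur map[string]any) map[string][2]any {
	changed := map[string][2]any{}
	for k, v := range cur {
		if pv, ok := prev[k]; !ok || pv != v {
			changed[k] = [2]any{prev[k], v}
		}
	}
	return changed
}

func main() {
	prev := map[string]any{"Name": "old", "Qty": 2}
	cur := map[string]any{"Name": "new", "Qty": 2}
	d := diffFields(prev, cur)
	fmt.Println(len(d), d["Name"]) // 1 [old new]
}
```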
Datly is runtime agnostic and can be deployed as a standalone app, an AWS Lambda, a Google Cloud Function, or on Google Cloud Run. Entry points with deployment examples are defined under Runtime.
Datly deployment entails deploying the datly binary with an initial rule set, followed by rule synchronization only.
In both autonomous and custom modes, datly uses a set of rules and plugins. In a cloud deployment these assets are stored on cloud storage, so to reduce cold start time and the cost of rule-change detection and reload, it is recommended to set the "UseCacheFS" flag in the datly config. This setting instructs datly to use a datly.pkg.gz cache file for all underlying assets. The cache is recreated whenever the cache file is deleted from the file storage.
With hundreds of rules and assets, the cache file provides both cost and performance optimization on cloud storage. To prepackage datly rules ahead of time, run the following command:
datly -P DATLY_ROOT_CONFIG -R CLOUD_STORAGE_DATLY_CONFIG_URL
e.g. datly -P /opt/ws/Datly -R s3://myog-serverless-config/Datly
The above command creates a datly.pkg.gz file containing all assets from the DATLY_ROOT_CONFIG location, where each asset URL is rewritten with CLOUD_STORAGE_DATLY_CONFIG_URL.
The following layout organizes datly specific resources
ProjectRoot
| - dql
| - business Unit 1 (appName)
| - entity_X_get.sql
| - entity_X_put.sql
| - entity_X_post.sql
| - entity_X_patch.sql
....
| - entity_N_get.sql
| - routerY.rt
| - entity_N
- other_asset.xsd
| - business Unit N (appName)
| - entityM_get.sql
...
| - routerY.rt
- e2e (end to end testing workflows)
- pkg
| - mypackage1(business Unit 1)
| | - entityX.go
| - mypackageN(business Unit Y)
| - ...
- deployment
- prod
| - Datly
| - dependencies
| - plugins
| - routes
| config.json
- stage
| - Datly
| - dependencies
| - plugins
| - routes
| config.json
To build standalone binary:
git clone https://github.com/viant/datly.git
cd datly/cmd/datly
go build
datly -h
To build datly for Docker or cloud-specific runtimes, check the deploy.yaml endly deployment workflows.
package doc
import (
"context"
"encoding/json"
"fmt"
"github.com/viant/datly"
"github.com/viant/datly/repository"
"github.com/viant/datly/view"
"github.com/viant/scy/auth/jwt"
"io"
"log"
"net/http"
"reflect"
"strings"
)
type Product struct {
Id int
Name string
VendorId int
}
func (p *Product) OnFetch(ctx context.Context) error {
fmt.Println("breakpoint here")
return nil
}
func (p *Product) Init() {
fmt.Println("breakpoint here")
}
func (p *Product) Validate() bool {
fmt.Println("breakpoint here")
return true
}
type Validation struct {
IsValid bool
}
// Example_ComponentDebugging shows how to programmatically execute an executor rule
func Example_ComponentDebugging() {
//Uncomment for additional debugging and troubleshooting
// expand.SetPanicOnError(false)
// read.ShowSQL(true)
// update.ShowSQL(true)
// insert.ShowSQL(true)
ctx := context.Background()
service, _ := datly.New(ctx)
ruleURL := "yyyyyyy/Datly/routes/dev/product.yaml"
components, err := service.LoadComponents(ctx, ruleURL, repository.WithPackageTypes(
view.NewPackagedType("domain", "Product", reflect.TypeOf(Product{})),
view.NewPackagedType("domain", "Validation", reflect.TypeOf(Validation{}))),
)
if err != nil {
log.Fatal(err)
}
httpRequest, err := http.NewRequest(http.MethodPut, "http://127.0.0.1:8080/v1/api/dev", io.NopCloser(strings.NewReader(`{"Name":"IPad"}`)))
if err != nil {
log.Fatal(err)
}
err = service.SignRequest(httpRequest, &jwt.Claims{
Email: "dev@viantinc.com",
UserID: 111,
})
if err != nil {
log.Fatal(err)
}
aComponent := components.Components[0]
aSession := service.NewComponentSession(aComponent, httpRequest)
response, err := service.Operate(ctx, aComponent, aSession)
if err != nil {
log.Fatal(err)
}
data, _ := json.Marshal(response)
fmt.Printf("%T, %s\n", response, data)
}
Datly is written purely in Go, so it is possible to take any rule, load it, and run it as if it were defined in managed mode; you can set a breakpoint on any method call made from a template. In addition, you can implement one of the following interfaces to be invoked on the actual insert or update.
type Insertable interface {
OnInsert(ctx context.Context) error
}
type Updatable interface {
OnUpdate(ctx context.Context) error
}
See Component debugging section
To debug the reader, add a Go struct import statement at the top of the rule; you can get the struct definition from
open http://127.0.0.1:8080/v1/api/meta/struct/dev/products
- product.yaml
/* {"URI":"dev/products"} */
import (
"product.Product"
)
SELECT product.*
FROM (SELECT * FROM PRODUCT) product /* {"DataType":"*Product"} */
You can implement one of the following to set a debugger breakpoint:
- OnFetch(ctx context.Context) error: invoked by the reader once a record is fetched from the database
- OnRelation(ctx context.Context): invoked by the reader once all relations are assembled
See Component debugging section