index.json
[{"content":"Creating Tekton tasks is really fun! Its similarity to pod containers, and the way it works, makes developing and testing super easy. Still, sometimes it is necessary to reuse steps in multiple tasks, like when using reviewdog.\nIn this post, we will learn how to do this very easily using kustomize, and the repo danielfbm/tekton-tasks-kustomize has a ready-to-use example.\nLet\u0026rsquo;s say two different tasks, golang-test and golangci-lint, need to add a reviewdog-report step as seen below:\nThe most obvious way would be copying and pasting the steps, but for more complex scenarios, where there are n tasks, this becomes error-prone. Using a template engine like helm could help, but learning another templating engine, plus having to change the contents of said tasks, also becomes a burden. Instead, kustomize has a set of tools to make this job easier, while still reusing tasks from the tektoncd/catalog.\nFolder structure ├── overlays └── tasks tasks will host your task files; overlays will host your patches, like adding a shared step; Tasks Prepare your tasks, for this example, we will use golang-test and golangci-lint, please adjust accordingly.\nSnippet With the tasks ready, it is time to prepare a snippet, which should consist of all the necessary additions, like params, workspaces, results, steps, etc:\nspec: params: # since we are splitting the steps, the previous step needs to save the output to a file - name: report-file default: reportfile description: Report file with errors # format of the report file - name: format default: golint description: Format of error input from the task # reviewdog supports many reporter types - name: reporter default: local description: Reporter type for reviewdog https://github.com/reviewdog/reviewdog#reporters # reviewdog needs a diff for precise pull request comments - name: diff default: git diff FETCH_HEAD description: Diff command 
https://github.com/reviewdog/reviewdog#reporters workspaces: - name: token description: | Workspace which contains a token file for Github Pull Request comments. Must have a token file with the Github API access token steps: - name: reviewdog-report image: golangci/golangci-lint:v1.31-alpine # both have the same workspace name workingDir: $(workspaces.source.path) script: | #!/bin/sh set -ue wget -O - -q https://raw.githubusercontent.com/reviewdog/reviewdog/master/install.sh | sh -s -- -b $(go env GOPATH)/bin export REVIEWDOG_GITHUB_API_TOKEN=$(cat $(workspaces.token.path)/token) cat $(params.report-file) | reviewdog -f=$(params.format) -diff=\u0026#34;$(params.diff)\u0026#34; Patch With the above snippet, it is time to create our patch. I\u0026rsquo;ve tried using the snippet directly as a patch but was not successful, so I decided to create a JSON6902 patch:\n# parameters - op: add path: /spec/params/- value: name: report-file default: reportfile description: Report file with errors - op: add path: /spec/params/- value: name: format default: golint description: Format of error input from the task - op: add path: /spec/params/- value: name: reporter default: local description: Reporter type for reviewdog https://github.com/reviewdog/reviewdog#reporters - op: add path: /spec/params/- value: name: diff default: git diff FETCH_HEAD description: Diff command https://github.com/reviewdog/reviewdog#reporters # workspaces - op: add path: /spec/workspaces/- value: name: token description: | Workspace which contains a token file for Github Pull Request comments. 
Must have a token file with the Github API access token # steps - op: add path: /spec/steps/- value: name: reviewdog-report image: golangci/golangci-lint:v1.31-alpine # both have the same workspace name workingDir: $(workspaces.source.path) script: | #!/bin/sh set -ue wget -O - -q https://raw.githubusercontent.com/reviewdog/reviewdog/master/install.sh | sh -s -- -b $(go env GOPATH)/bin export REVIEWDOG_GITHUB_API_TOKEN=$(cat $(workspaces.token.path)/token) cat $(params.report-file) | reviewdog -f=$(params.format) -diff=\u0026#34;$(params.diff)\u0026#34; Save as reviewdog-step-patch.yaml and create a kustomization.yaml with the following content:\nbases: - ../tasks patches: - path: ./reviewdog-step-patch.yaml target: kind: Task More patching Make sure all kustomization.yaml files are set up correctly and try running kustomize build overlays. You should see the params, workspaces and steps added.\nBut wait! We still need to connect the dots. Without modifying the imported tasks this would still not work.\nFor the golangci-lint task, it is necessary to save the result to a file as given in the parameter $(params.report-file), and change the default format for the $(params.format) parameter to golangci-lint:\ngolangci-lint - op: replace path: /spec/params/11/default value: golangci-lint - op: replace path: /spec/steps/0/script value: | golangci-lint run $(params.flags) \u0026gt; $(params.report-file) Save the file as overlays/golangci-lint-patch.yaml and add it to the overlays/kustomization.yaml.\n[...] - path: ./golangci-lint-patch.yaml target: kind: Task name: golangci-lint golang-test Here is the change for the golang-test task, switching it to use golint:\n- op: replace path: /spec/steps/0/script value: | if [ ! 
-e $GOPATH/src/$(params.package)/go.mod ];then SRC_PATH=\u0026#34;$GOPATH/src/$(params.package)\u0026#34; mkdir -p $SRC_PATH cp -R \u0026#34;$(workspaces.source.path)/$(params.context)\u0026#34;/* $SRC_PATH cd $SRC_PATH fi golint $(params.packages) \u0026gt; $(params.report-file) Add the entry to overlays/kustomization.yaml:\n- path: ./golint-patch.yaml target: kind: Task name: golang-test Suffix If you want to preserve the original tasks and only add new tasks, just change the overlays/kustomization.yaml by adding a suffix to the files:\nnameSuffix: -review Testing Apply your tasks to your Kubernetes cluster using kubectl apply -k overlays or kustomize build overlays | kubectl apply -f -\nThe resulting file tree should be something like:\n├── overlays │ ├── golangci-lint-patch.yaml │ ├── golint-patch.yaml │ ├── kustomization.yaml │ └── reviewdog-step-patch.yaml └── tasks ├── golang-test.yaml ├── golangci-lint.yaml └── kustomization.yaml Enjoy your new golang-test-review and golangci-lint-review.\n","permalink":"https://danielfbm.github.io/post/kustomize-tekton-task/","summary":"Creating Tekton tasks is really fun! Its similarity to pod containers, and the way it works, makes developing and testing super easy. Still, sometimes it is necessary to reuse steps in multiple tasks, like when using reviewdog.\nIn this post, we will learn how to do this very easily using kustomize, and the repo danielfbm/tekton-tasks-kustomize has a ready-to-use example.\nLet\u0026rsquo;s say two different tasks, golang-test and golangci-lint, need to add a reviewdog-report step as seen below:","title":"How to reuse steps in Tekton tasks with kustomize"},{"content":"While using Sonobuoy as our main test runner engine at Alauda we noticed a small issue and decided to contribute the patch:\nSonobuoy will create Kubernetes RBAC ClusterRole and ClusterRoleBinding resources but will also destroy them together with all the test data. 
This is ideal, but the current implementation deletes them globally, affecting the permissions of other parallel test runs and resulting in lots of permission failures. This Pull-request solves the issue by adding a test-run namespace label to all Cluster resources, and makes these resource names unique according to the namespace.\nTry it out and let us know. Happy testing!\n","permalink":"https://danielfbm.github.io/post/sonobuoy-contribution/","summary":"While using Sonobuoy as our main test runner engine at Alauda we noticed a small issue and decided to contribute the patch:\nSonobuoy will create Kubernetes RBAC ClusterRole and ClusterRoleBinding resources but will also destroy them together with all the test data. This is ideal, but the current implementation deletes them globally, affecting the permissions of other parallel test runs and resulting in lots of permission failures. This Pull-request solves the issue by adding a test-run namespace label to all Cluster resources, and makes these resource names unique according to the namespace.","title":"My First Sonobuoy Contribution"},{"content":"Here is the Official website link and the GitHub repo.\nUpdate: version v0.17.0 is available\nWhat is Sonobuoy? According to the website:\nSonobuoy is a diagnostic tool that makes it easier to understand the state of a Kubernetes cluster by running a set of plugins (including Kubernetes conformance tests) in an accessible and non-destructive manner. 
It is a customizable, extendable, and cluster-agnostic way to generate clear, informative reports about your cluster.\nIn general I see sonobuoy as a Test Framework to be executed inside Kubernetes.\nQuickStart Run a small set of kubernetes conformance tests:\nInstall the CLI from GitHub Releases; Have a kubernetes environment ready; I will be using Docker for Mac\u0026rsquo;s kubernetes; Run the quick tests command sonobuoy run --wait --mode quick In another Terminal window: kubectl get pods -n sonobuoy -w to watch all pods in the sonobuoy namespace sonobuoy status can also provide some helpful information regarding the run Once it is done:\nresults=$(sonobuoy retrieve) sonobuoy results $results To cleanup: sonobuoy delete --wait\nFor more details on the contents of the result file, please check Snapshot Layout\nCustomizing OK, so sonobuoy provides useful tools to run tests inside Kubernetes and includes its official Conformance Tests, but it is actually a very powerful tool to create custom tests for our own applications, mostly because it offers a functionality called plugins\nPlugins Run sonobuoy gen plugin --show-default-podspec -n hello-world -i hello-world:latest to generate our first sonobuoy plugin. This command will output a yaml file, save it as necessary.\nIn order to try it out, the docs provide very neat examples, the cmd-runner is a great place to start.\nConclusion When looking for a customizable and ready-to-go tool to run tests using Kubernetes, Sonobuoy is a great option to try.\n","permalink":"https://danielfbm.github.io/post/sonobuoy-0.16.3/","summary":"Here is the Official website link and the GitHub repo.\nUpdate: version v0.17.0 is available\nWhat is Sonobuoy? According to the website:\nSonobuoy is a diagnostic tool that makes it easier to understand the state of a Kubernetes cluster by running a set of plugins (including Kubernetes conformance tests) in an accessible and non-destructive manner. 
It is a customizable, extendable, and cluster-agnostic way to generate clear, informative reports about your cluster.","title":"Sonobuoy v0.16.3"},{"content":"This is for those interested in the go programming language. This is my take on an introduction to this amazing language\nHonestly, there are plenty of resources on the web about golang. This post is merely a summary of the things I find most useful and what attracts me the most while programming in go\nTL; DR To install simply download the package for your OS in the golang official download page\nIf you are on macOS and have brew installed: brew update \u0026amp;\u0026amp; brew install golang\nCheck the examples on this git repo to play:\n# using the terminal of your choice navigate to the repo folder go run \u0026lt;filename\u0026gt;.go In general golang is a very simple, modern and pleasant to use programming language. It is worth giving it a shot especially because of its impressive performance.\nDownload and Setup If you don\u0026rsquo;t have it installed head right to the official webpage and download it.\nAfter the setup, try the following command in your favorite terminal\ngo version If everything was set up correctly you will be able to see the installed version and the platform in which you installed it.\nAs opposed to many languages, go uses one main workspace, meaning that all your code and the downloaded third-party libraries will coexist in the same folder structure. This sounds kind of weird in the beginning and most people would prefer to organize their projects in different paths. With time this will prove to be very useful. Besides, there are ways to overcome this limitation, as some would prefer in an enterprise environment.\nTo set a workspace just set the GOPATH environment variable. 
An example would be to set this environment in a go folder in the user\u0026rsquo;s home folder:\nexport GOPATH=~/go Inside the workspace you can create three folders:\nbin stores all the compiled golang projects src stores all the source code pkg stores compiled package objects for your OS and architecture If you are using a UNIX-like system, just add the export GOPATH command in your .profile or .bash_profile to be loaded as soon as the user logs in to the terminal.\nYou can find more information regarding the Go installation and workspace\nOfficial documentation Post from William Kennedy IDE Although there are many idealists that follow the vim philosophy of coding, I prefer more complex and easier-to-use IDEs. Golang doesn\u0026rsquo;t have any official IDE and probably never will, mostly because all the necessary tools either come with the go installer, or can be easily installed.\nHere I will explain the one I use and how to set it up for go.\nMicrosoft\u0026rsquo;s Visual Studio Code This lightweight IDE works perfectly for me and I have used it for several months. I really enjoy how you can integrate with the golang standard tooling. You can download it here.\nBy itself this IDE\u0026rsquo;s features are not very impressive, but they cover the basics of any IDE. Its power comes from its Extensions. Besides browsing and searching for extensions from inside the IDE (Shift + CMD + X on macOS), VS Code\u0026rsquo;s Extension Marketplace website offers the same feature as well.\nTo support go we will first install the vscode-go extension. Its source code is hosted on this GitHub repo. After installation it will require installing multiple go tools. 
Some of them will require a VPN if you are in a restricted access network:\ngocode: go get -u -v github.com/nsf/gocode godef: go get -u -v github.com/rogpeppe/godef gogetdoc: go get -u -v github.com/zmb3/gogetdoc golint: go get -u -v github.com/golang/lint/golint go-outline: go get -u -v github.com/lukehoban/go-outline goreturns: go get -u -v sourcegraph.com/sqs/goreturns gorename: go get -u -v golang.org/x/tools/cmd/gorename gopkgs: go get -u -v github.com/tpng/gopkgs go-symbols: go get -u -v github.com/newhook/go-symbols guru: go get -u -v golang.org/x/tools/cmd/guru gotests: go get -u -v github.com/cweill/gotests/... With all the tools installed your VS Code will be ready. By default a few features will be enabled:\nAutocomplete Once you download some library, or even using the standard library, the code will start to display auto-complete options as the code is typed.\nAuto format Go has a very comprehensive, and sometimes hated, set of code writing rules. This is where gofmt plays an important role. In this Extension gofmt will be automatically executed every time a file is saved. By default it will:\nRemove unused imported libraries; Spacing code formatting (replacing spaces with tabs when necessary); Removing unnecessary blank lines; Auto-build and auto-lint Together with formatting, the Extension also provides auto-building and linting on save; this makes it very quick and simple to see all the possible errors directly inside the IDE and simplifies the coding experience.\nThere are more functionalities supported by this Extension. Visit its code repo for more info.\nSyntax Those familiar with the C family of programming languages will feel very comfortable writing go code. First, most of it is very similar, and those annoying details, like adding a semicolon to the end of every code line, are gone. Let\u0026rsquo;s go directly to the point. 
Nothing better than a Hello World for a quick glance:\npackage main // a package name is expected in every go file as the first line import \u0026#34;fmt\u0026#34; // importing the fmt standard library // for the main package a main function is necessary // just like C, this is the entry point for our program func main() { // Print one line to stdout fmt.Println(\u0026#34;Hello, world\u0026#34;) } Basics There are a few key points that, in short, can define the go language:\nCompiled language Strongly typed Opinionated language No Classes No Generics Concurrency by design Variables Below are listed the basic variable types:\nbool string // int is 32 bits wide on 32-bit systems and 64 bits wide on 64-bit systems int int8 int16 int32 int64 uint uint8 uint16 uint32 uint64 uintptr // uint are unsigned integers byte // alias for uint8 rune // alias for int32 // represents a Unicode code point float32 float64 complex64 complex128 To create custom types just declare a new type as:\ntype MyType int // Declare a list of valid values as constants const ( FirstValue MyType = 1 SecondValue MyType = 2 LastValue MyType = 99 ) There are multiple ways to declare variables in go:\npackage main import ( \u0026#34;fmt\u0026#34; ) func main() { // single variable. 
Variable type comes after the name var i int = 0 // multiple variables, single line, j == 1, k == 2 var j, k int = 1, 2 // the same as the line above but with implicit type // note the colon z, x, c := 3, 4, 5 // multi-line declaration var ( one string two string three = \u0026#34;three\u0026#34; ) } By default each simple type has a \u0026ldquo;zero-value\u0026rdquo;, which are:\n0 for numeric types \u0026quot;\u0026quot; for string false for bool nil for any pointer As all the code is written inside packages, the first letter of a variable name will define it to be public or private:\npackage some_package var PublicVariable string var privateVariable string This concept can be confusing at the beginning, but it is very useful to avoid extra typing. Global public variables can be accessed by other packages using the following syntax:\npackage other_package import \u0026#34;some_package\u0026#34; import \u0026#34;fmt\u0026#34; func SomeFunction() { fmt.Println(some_package.PublicVariable) } Another aspect very specific to go is that the compiler will not compile code that declares unused variables. 
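The export rule and the zero values described above can be seen together in one small program (the file layout and identifier names here are my own, for illustration only):

```go
package main

import "fmt"

// Exported starts with an uppercase letter, so other packages could access it.
var Exported = "visible"

// unexported starts with a lowercase letter and stays private to this package.
var unexported = "hidden"

// zeroValues returns the defaults Go assigns to declared-but-unassigned variables.
func zeroValues() (int, string, bool) {
	var n int    // numeric types default to 0
	var s string // strings default to ""
	var b bool   // bool defaults to false
	return n, s, b
}

func main() {
	n, s, b := zeroValues()
	fmt.Println(n, s, b, Exported, unexported)

	// A declared-but-unused local variable would stop compilation:
	// unused := 1 // error: declared and not used
}
```

Uncommenting the last line reproduces the unused-variable error, which in go is a hard compile error, not a warning.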
Although this feature can seem uncomfortable for some, in general it makes the code clearer overall.\nFunctions Public and private functions follow the same naming rules as variables for public and private access\n// this function can be accessed by other packages func PublicFunction (param string, other int) { } func privateFunc() { } Go also supports returning multiple values, and each return value type should be declared as well\nfunc SingleReturn() bool { return false } func MultipleReturn() (bool, string) { return true, \u0026#34;hello\u0026#34; } // it is also possible to declare return types as named variables without extra declaration func MultipleReturnExplicit() (correct bool, name string) { correct = true name = \u0026#34;a name\u0026#34; return correct, name } Structs Besides the basic types, struct is another very useful type declaration that can be used to compose structure abstractions.\n// declaring the struct type MyStruct struct { PublicVariable string privateVariable string } func Usage() { // constructing a variable of type MyStruct myStruct := MyStruct{ PublicVariable: \u0026#34;public\u0026#34;, privateVariable: \u0026#34;private\u0026#34;, } fmt.Println(myStruct.PublicVariable) } Structs can also have methods\n// my is a receiver variable that carries the values of the struct instance func (my MyStruct) MyMethod() string { return my.PublicVariable } In the example above the my variable is passed to the function as a copy. Using a pointer declaration will enable changing the instance\u0026rsquo;s values\nfunc (my *MyStruct) UpdatePrivate(privateValue string) { my.privateVariable = privateValue } Next For a simple introduction this post is already long. 
Next we will learn more about the following topics\nPointers Collections Flow-control Defer Error handling Concurrency, the cool stuff Further reading For a more complete understanding of the go language, syntax, and usage please refer to the following sources:\nEffective go for tips on writing clear, idiomatic Go code. An introduction to programming in go for a free online e-book. A full list of books about go For the example code in this post please refer to this code repo.\n","permalink":"https://danielfbm.github.io/post/golang-intro/","summary":"This is for those interested in the go programming language. This is my take on an introduction to this amazing language\nHonestly, there are plenty of resources on the web about golang. This post is merely a summary of the things I find most useful and what attracts me the most while programming in go\nTL; DR To install simply download the package for your OS in the golang official download page","title":"Golang introduction"},{"content":"In this post we will explore some basic concepts while working with a microservices architecture in a distributed application. If you are not very familiar with microservices and the motivation behind it, you should take a look at this series of posts from NGINX that gives a great introduction to the subject.\nAll the concepts are quite simple and although the author has suggested a few things to get started, I will take you through the construction of a simple microservices app that you could deploy anywhere.\nWarning This is a simple introduction and will help you get started with the microservices architecture, and hopefully, with Docker. 
This series of posts is aimed at beginners and enthusiasts\nArchitecture We will split this simple app into three parts:\nClient - WebUI that will be responsible for providing an interface for the end user WebServer - API Gateway - Will be responsible for authentication and authorisation, simple data validation and forwarding the requests to other services, if necessary. Tasks - A service responsible only for storing and managing tasks In this series we will start from the Tasks service. After we have it ready we will work on our API Gateway and later on we will build an interface for the user.\nTasks service This is a service that will not be reachable from the outside of our network. It is only responsible for storing and managing Task-related data. One of the biggest advantages of the microservices architecture is that each service can be implemented with different programming languages. Although, in your team, you will want to keep it to the languages that everybody knows, you could still take advantage of using different languages to solve different things.\nFor this service I picked Python and Thrift as the RPC. I will not explain the details of Thrift and why I picked it, and focus more on the implementation, so I suggest you take your time to check some articles about it, or other related RPC technologies, like gRPC.\nIf you prefer to use another language to develop this service, feel free to convert the python syntax to your language of choice. As long as Thrift supports it, everything should work almost exactly the same. If you run into trouble because of the differences, I suggest checking the Thrift tutorial for more examples in your language. 
Keep in mind that the thrift examples are not very up to date and you may need to try slight differences in the code to get it working.\nIf you are on a Mac, you can use brew install thrift to download and install the thrift code generator.\nThrift defines its service-client contract API through a thrift file. For our service we will start from there:\n// Basic struct that we will use struct Task { 1: optional string id, 2: string userId, 3: optional string name = \u0026#34;\u0026#34;, 4: optional string createdOn, 5: optional bool done } // A base exception class exception BaseException { 1: i32 code, 2: string message } // Here is our Tasks service definition. Just like an interface definition // it will give us the signature of the service once Thrift processes the file service Tasks { //List Tasks list\u0026lt;Task\u0026gt; all(1:string userId), //Add Task Task add(1:string userId, 2:string name), //Update Task update(1:string taskId, 2:string name, 3:bool done, 4:string userId) throws (1:BaseException ouch) //Upsert Task upsert(1:Task task) throws (1:BaseException ouch) } Save this content to a tasks.thrift file in the root of your repo and use the thrift cli to generate your server stub\nthrift --gen py tasks.thrift This command will generate a gen-py folder with many files in it\n. ├── __init__.py └── tasks ├── Tasks-remote ├── Tasks.py ├── __init__.py ├── constants.py └── ttypes.py 1 directory, 6 files When I was developing this service I faced many issues while figuring out how to work with Thrift and use it for real. Here I will make it all simple for you.\nI suggest that you start by creating a virtualenv and installing the dependencies there. This way you will not pollute your computer\u0026rsquo;s global scope with unnecessary dependencies. 
Here is the requirements.txt\nthriftpy pymongo In order to have flexible configuration parameters, we will use a configuration file called config.py\nimport os MONGO_HOST = os.environ.get(\u0026#39;MONGO_HOST\u0026#39;, os.environ.get(\u0026#39;MONGO_PORT_27017_TCP_ADDR\u0026#39;, os.environ.get(\u0026#39;MONGODB_PORT_27017_TCP_ADDR\u0026#39;, \u0026#39;localhost\u0026#39;) ) ) MONGO_PORT = os.environ.get(\u0026#39;MONGO_PORT\u0026#39;, os.environ.get(\u0026#39;MONGO_PORT_27017_TCP_PORT\u0026#39;, os.environ.get(\u0026#39;MONGODB_PORT_27017_TCP_PORT\u0026#39;, \u0026#39;27017\u0026#39;) ) ) MONGO_DB = os.environ.get(\u0026#39;MONGODB_DATABASE\u0026#39;, \u0026#39;tasks-db\u0026#39;) print(MONGO_HOST) print(MONGO_PORT) class Config: @staticmethod def getTaskDBConfig(): return { \u0026#39;host\u0026#39;: MONGO_HOST, \u0026#39;port\u0026#39;: MONGO_PORT, \u0026#39;db\u0026#39;: MONGO_DB } @staticmethod def getTaskServiceConfig(): return { \u0026#39;host\u0026#39;: \u0026#39;0.0.0.0\u0026#39;, \u0026#39;port\u0026#39;: 6001 } If you took time to read the code you probably noticed the following:\nMONGO_HOST = os.environ.get(\u0026#39;MONGO_HOST\u0026#39;, os.environ.get(\u0026#39;MONGO_PORT_27017_TCP_ADDR\u0026#39;, os.environ.get(\u0026#39;MONGODB_PORT_27017_TCP_ADDR\u0026#39;, \u0026#39;localhost\u0026#39;) ) ) MONGO_PORT = os.environ.get(\u0026#39;MONGO_PORT\u0026#39;, os.environ.get(\u0026#39;MONGO_PORT_27017_TCP_PORT\u0026#39;, os.environ.get(\u0026#39;MONGODB_PORT_27017_TCP_PORT\u0026#39;, \u0026#39;27017\u0026#39;) ) ) This will be used to set up our connection with the database. If you are familiar with python you have probably noticed that there are a few defaults in this file, and also a few options available for each variable. Once we get to the Docker part this will be much easier to understand.\nWe will use Pymongo as our Database interface. Here is the code to implement the basics for the database. 
Save it as db.py:\nfrom pymongo import MongoClient from bson.objectid import ObjectId import pymongo import datetime from config import Config import logging logger = logging.getLogger(__name__) mongoConfig = Config.getTaskDBConfig() baseUrl = \u0026#39;mongodb://{}:{}\u0026#39;.format(mongoConfig[\u0026#39;host\u0026#39;], mongoConfig[\u0026#39;port\u0026#39;]) logger.debug(\u0026#39;--- MONGO URL: {}\u0026#39;.format(baseUrl)) database = mongoConfig[\u0026#39;db\u0026#39;] db = MongoClient(baseUrl) client = db[database] class TaskDB: @staticmethod def all(userId): return client.tasks.find({\u0026#34;userId\u0026#34;: userId}).sort(\u0026#34;createdOn\u0026#34;, pymongo.DESCENDING) @staticmethod def addOne(userId, name): instance = {\u0026#34;userId\u0026#34;: userId, \u0026#34;name\u0026#34;: name, \u0026#34;createdOn\u0026#34;: datetime.datetime.utcnow(), \u0026#34;done\u0026#34;: False} instance_id = client.tasks.insert_one(instance).inserted_id instance[\u0026#34;_id\u0026#34;] = instance_id return instance @staticmethod def updateOne(id, userId, name=None, done=None): print(\u0026#39;updateOne(%s,%s,%s,%s)\u0026#39; % (id, name, done, userId)) criteria = {\u0026#34;userId\u0026#34;: userId, \u0026#34;_id\u0026#34;: ObjectId(id)} update = {} if (name is not None): print(\u0026#39;setting name as %s\u0026#39; % name) update[\u0026#39;name\u0026#39;] = name if (done is not None): print(\u0026#39;setting done as %s\u0026#39; % done) update[\u0026#39;done\u0026#39;] = done result = client.tasks.update_one(criteria, {\u0026#39;$set\u0026#39;: update}) instance = None if (result.matched_count \u0026gt; 0): instance = client.tasks.find_one(criteria) return instance The two files described above will create a simple database manager and a configuration file that you can customize according to your needs. Take note that the DB manager only connects at startup and does not handle any connection exceptions or connection failures. 
This is not good for a production environment, and you should strive to handle failure and reconnect whenever your database connection fails, but this is out of the scope of this post so I will leave this for you to change.\nWith these two classes already implemented, what is missing is just a Thrift interface to start our server and connect with the database. Save this as server.py\nimport thriftpy tasks = thriftpy.load(\u0026#34;./tasks.thrift\u0026#34;, module_name=\u0026#34;tasks_thrift\u0026#34;) from thriftpy.rpc import make_server from thriftpy.protocol import TJSONProtocolFactory import logging logging.basicConfig() from db import TaskDB from config import Config class TaskHandler(object): def check(self): hc = tasks.Healthcheck() hc.ok = True hc.message = \u0026#34;OK\u0026#34; return hc def all(self, userId): print(\u0026#39;getting all tasks for user: %s\u0026#39; % userId) cursor = TaskDB.all(userId) result = [] task = None for t in cursor: task = tasks.Task() task.id = str(t[\u0026#39;_id\u0026#39;]) task.name = t[\u0026#39;name\u0026#39;] task.createdOn = t[\u0026#39;createdOn\u0026#39;].isoformat() task.userId = t[\u0026#39;userId\u0026#39;] task.done = t[\u0026#39;done\u0026#39;] result.append(task) return result def add(self, userId, name): print(\u0026#39;add(%s,%s)\u0026#39; % (userId, name)) instance = TaskDB.addOne(userId, name) task = TaskHandler.convertInstance(instance) return task def update(self, id, name, done, userId): print(\u0026#39;update(%s, %s, %s, %s)\u0026#39; % (id, name, done, userId)) instance = TaskDB.updateOne(id, userId, name, done) if (instance == None): exception = tasks.BaseException() exception.code = 404 exception.message = \u0026#39;Task not found\u0026#39; raise exception task = TaskHandler.convertInstance(instance) return task def upsert(self, task): print(\u0026#39;upsert(%s)\u0026#39; % (task)) if (task is None): exception = tasks.BaseException() exception.code = 400 exception.message = \u0026#39;Task data is 
invalid\u0026#39; raise exception try: if (task.id is not None): instance = TaskDB.updateOne(task.id, task.userId, task.name, task.done) else: instance = TaskDB.addOne(task.userId, task.name) except (Exception): exception = tasks.BaseException() exception.code = 400 exception.message = \u0026#39;Unknown error\u0026#39; raise exception print(instance) if (instance is None): exception = tasks.BaseException() exception.code = 404 exception.message = \u0026#39;Task not found\u0026#39; raise exception task = TaskHandler.convertInstance(instance) return task @staticmethod def convertInstance(instance): task = tasks.Task() task.id = str(instance[\u0026#39;_id\u0026#39;]) task.userId = instance[\u0026#39;userId\u0026#39;] task.name = instance[\u0026#39;name\u0026#39;] task.createdOn = instance[\u0026#39;createdOn\u0026#39;].isoformat() task.done = instance[\u0026#39;done\u0026#39;] return task host = Config.getTaskServiceConfig()[\u0026#39;host\u0026#39;] port = Config.getTaskServiceConfig()[\u0026#39;port\u0026#39;] print(\u0026#39;Server is running on %s port %d\u0026#39; % (host, port)) server = make_server(tasks.Tasks, TaskHandler(), host, port) server.serve() You will notice that we finally started to use our thrift-generated server skeleton in the first few lines. The TaskHandler class will be the one responsible for receiving thrift-transported data and implementing all the methods declared in our service interface inside our thrift file.\nWith these files you have a very simple service that will handle our Task-related data. To start your server you will first need to install MongoDB and then type python server.py in your terminal. As this is a Thrift service, you will need to implement your client. 
This is an example client that you can use for testing.\nimport thriftpy tasks_thrift = thriftpy.load(\u0026#34;tasks.thrift\u0026#34;, module_name=\u0026#34;tasks_thrift\u0026#34;) from thriftpy.rpc import make_client client = make_client(tasks_thrift.Tasks, \u0026#39;127.0.0.1\u0026#39;, 6000) # Add a task client.add(\u0026#39;userid\u0026#39;,\u0026#39;name\u0026#39;) task_list = client.all(\u0026#39;userid\u0026#39;) for t in task_list: print(t) # Add other functions you want to test # client.update() Save it as client.py and start it in another terminal session: python client.py\nNow, having all this stuff installed on our computers is not really practical. With time you end up with a computer full of dependencies that you don\u0026rsquo;t really use after a while. This is one of the many benefits of using Docker. Go ahead and install it on your machine.\nFor the purpose of this tutorial I created a simple Dockerfile that bundles our Task service in a Docker container. Here it is:\n# For now, we can use an onbuild kind of image; this is not advised for production FROM python:2-onbuild CMD [\u0026#34;python\u0026#34;, \u0026#34;./server.py\u0026#34;] EXPOSE 6001 This is a simple file useful for development. Don\u0026rsquo;t use it for production. As you learn more and more about Docker, you will understand that each Docker container should be treated as a remote server, with all the production requirements that come with it, like:\nHandling failure Logging Monitoring Handling database connections If you are familiar with Linux servers and deploying applications in production, you already know how to do all this. Explaining these topics is out of the scope of this post, and we might explore them in a later post.\nSave the Dockerfile as Dockerfile in the root of your repo and type docker build -t tasks . 
to build your Docker image.\nNow, you will also want to run your database as a Docker container, and we will run both of them using Docker Compose:\nversion: \u0026#39;2\u0026#39; services: tasks: image: tasks ports: - 6001 environment: MONGO_HOST: \u0026#39;mongo\u0026#39; depends_on: - mongo links: - mongo mongo: image: mongo Save this file as docker-compose.yml in the root of the repo, and now you can launch your service, together with a mongo database, using the following command:\ndocker-compose up It should now be running in the foreground. If you want it running in the background:\ndocker-compose up -d You might notice that your Docker container has its own IP address and ports, so your client file will not work unless changed. To easily test it you can enter your Docker container using the following commands:\n# you will need to list your containers first docker ps # grab the name of your tasks container, should be something like tasks_tasks_1 docker exec -it tasks_tasks_1 bash You are now inside the container, and you can still run your client file from within using:\npython client.py This project\u0026rsquo;s code is available in this GitHub repo:\nhttps://github.com/danielfbm/tasks-py\nSummary Although we didn\u0026rsquo;t explore much detail, in this post we talked about a few topics:\nMicroservices architecture Thrift Tasks service using MongoDB Docker basics In the next post we will explore:\nAPI gateway REST API User authentication User signup and login Connecting to our Tasks service Stay tuned.\n","permalink":"https://danielfbm.github.io/post/post-task-manager/","summary":"In this post we will explore some basic concepts while working with a microservices architecture in a distributed application. 
If you are not very familiar with microservices and the motivation behind them, you should take a look at this series of posts from NGINX that gives a great introduction to the subject.\nAll the concepts are quite simple, and although the author had suggested a few things to get started, I will take you through the construction of a simple microservices app that you could deploy anywhere.","title":"Tasks app and microservices architecture"},{"content":"Introduction Well, let me first introduce myself. My name is Daniel, and I am a rather happy Brazilian currently living in Beijing, China.\nI am a technology fan, and from childhood I became very interested in computers and other electronic gadgets. My working experience is mostly in Software Development. In the beginning it was mostly focused on the .NET platform (from 2004 until 2011) on the Windows environment; later on I started to learn Unix systems and software testing. The programming languages I use in my daily work are mostly Python and Node.js, and recently I started to learn Go as well.\nCurrently I am also working daily with Docker, so I will publish the things I learn about this awesome tool and how to make development easier.\nI created this blog with the intent to share things as I learn them, and there is nothing better to start with than sharing how I built this blog and the things I am planning to write about in the next few weeks.\nHow I built this blog In the past, I had many different blogs, but I was never really interested in sharing anything in particular. Now that I am more mature, almost 30, and more involved with interesting things, I believe it is the right time to start a new blog.\nTL;DR Hugo, Markdown and GitHub Pages are amazing tools to get started fast and build your own blog without any maintenance cost. Deploy online by building with Wercker and we are done.\nEngine My first struggle before I even started was to find a nice platform to host the blog. 
I could use other options available like Blogger, WordPress, and the like, but there are a few problems:\nMost of these platforms are not available in China due to the Great Firewall of China\nThey are not geeky enough. I had some experience helping friends build their websites/blogs using those CMS tools, and the general experience is not that pleasant.\nMy second option was to start from scratch: pick a language, like Python, and a framework, like Django or Flask, and code everything myself. I took some time to consider, and in the end I decided to use something different, something I had not tried before. With some googling I discovered Jekyll and GitHub Pages, and I immediately started to find ways to get it working and to start developing.\nI basically had a good start, but after an incident, I lost my computer before I could commit and push to GitHub :(\nA few more things happened, and I finally have a computer again to play with, but a demanding project at my current job kept me busy enough. We just had some holidays here in China the last few days, so I decided to get started all over again. While looking for Jekyll and some other material, I met Hugo.\nIt was a good match. First because its performance seemed very impressive and all the comments around the tool are good, but mostly because it achieves the same as Jekyll while being developed with Go, which I am very interested in.\nWith one line I downloaded the CLI with Homebrew, as you can check here.\nbrew update \u0026amp;\u0026amp; brew install hugo You can also check their Quickstart to get a few more details on the process.\nAfter that I just started a new Hugo site with hugo new site blog in a folder, and the basic structure was already there: v0.16 at the time of writing\n$ tree . ├── archetypes ├── config.toml ├── content ├── data ├── layouts ├── static └── themes 6 directories, 1 file All the folders are empty, but the basic structure was already there.\nNow let\u0026rsquo;s get a theme to start off. 
I currently use HugoMDL, but honestly this is not what I really desire, so I might change it later on. Here you can find some nice options to get you started.\nI will pick Robust to get started. Head to your blog folder using the Terminal, and type the following:\ngit clone https://github.com/dim0627/hugo_theme_robust themes/hugo_theme_robust This will download the theme into the themes folder inside the blog folder.\nNow, to avoid having the .git files of the theme mixed with your own blog files, you can do the following:\nrm -rf themes/hugo_theme_robust/.git This basically removes all the git files referencing the theme. You could also fork the theme repo, customize it and then push it to GitHub.\nIt is very important to create a config.toml. Some themes will provide a template for you; others will give you an example in the theme\u0026rsquo;s GitHub repo. This is the case with Robust. Open the Robust repo and check the example:\nbaseurl = \u0026#34;http://hugo.spf13.com/\u0026#34; title = \u0026#34;Hugo Themes\u0026#34; author = \u0026#34;Steve Francia\u0026#34; copyright = \u0026#34;Copyright (c) 2008 - 2014, Steve Francia; all rights reserved.\u0026#34; canonifyurls = true paginate = 3 [params] disqusShortname = \u0026#34;your disqus id.\u0026#34; # optional Replace all the data with your own and save it as config.toml in your blog\u0026rsquo;s root folder.\nThe example configuration is missing just one important detail: the theme we are using. Change the configuration file to the following:\nbaseurl = \u0026#34;http://hugo.spf13.com/\u0026#34; title = \u0026#34;My Blog\u0026#34; author = \u0026#34;The author\u0026#34; copyright = \u0026#34;Copyright (c) 2016, The author; all rights reserved.\u0026#34; canonifyurls = true paginate = 3 theme = \u0026#34;hugo_theme_robust\u0026#34; [params] disqusShortname = \u0026#34;your disqus id.\u0026#34; # optional As you can see, I already changed a few things and added the theme = \u0026quot;hugo_theme_robust\u0026quot;. 
If you chose another theme, just remember to set theme to the theme\u0026rsquo;s folder name and you are good.\nYou can check your website/blog by typing in your terminal:\nhugo server You can specify the port you want to use, and there are other nice options you can use with the hugo server command, so check them with hugo server --help\nNavigate to localhost:1313 in your browser and you should see something like this:\nNow, before we create our first post, let\u0026rsquo;s explore how Hugo works with content. I highly suggest that you take some time to read through the documentation, at least the content part.\nIn summary, you can create a few folders in the content folder to split your website\u0026rsquo;s content into different sections. Those sections will be used in the URL to find the content. Some themes will list the posts inside a folder when you access the folder name in the URL; others will just give you a 404 error. In this sense Robust seems to do a good job. So with a folder structure like the one below inside the content folder:\n$ tree . ├── blog │ └── new.md └── test.md 1 directory, 2 files You should get the following URLs to see each one:\n- baseurl/blog/new - baseurl/test Before we start, notice that the archetypes folder is really important and will save you a lot of time. Our current folder is empty, and our theme doesn\u0026rsquo;t even have one to start from. From the Robust GitHub page, let\u0026rsquo;s see their example:\n+++ title = \u0026#34;Getting Started with Hugo\u0026#34; description = \u0026#34;\u0026#34; tags = [ \u0026#34;go\u0026#34;, \u0026#34;golang\u0026#34;, \u0026#34;hugo\u0026#34;, \u0026#34;development\u0026#34;, ] date = \u0026#34;2014-04-02\u0026#34; categories = [ \u0026#34;Development\u0026#34;, \u0026#34;golang\u0026#34;, ] image = \u0026#34;image.jpg\u0026#34; # optional toc = false # optional. +++ Contents here Now let\u0026rsquo;s create an archetype for this example. 
Create a file archetypes/default.md, changing the content a little bit:\n+++ title = \u0026#34;New post\u0026#34; description = \u0026#34;\u0026#34; tags = [ \u0026#34;tag1\u0026#34;, \u0026#34;tag2\u0026#34;, ] date = \u0026#34;2014-04-02\u0026#34; categories = [ \u0026#34;defaultcategory\u0026#34;, \u0026#34;category1\u0026#34; ] draft = true image = \u0026#34;image.png\u0026#34; # optional toc = false # optional +++ Contents here Head back to the terminal and type the following to create a post:\nhugo new post.md This will create a new file content/post.md, and if you run your server now you will not be able to see it. I added a draft = true flag to the archetype, and Hugo will only build the file once you set this flag to false. If you want to preview your draft, you can use the flag --buildDrafts with the server command.\nNow open your favorite text editor or Markdown editor and edit the content/post.md file. You should see something like this:\n+++ categories = [\u0026#34;defaultcategory\u0026#34;, \u0026#34;category1\u0026#34;] date = \u0026#34;2016-06-19T12:22:46+08:00\u0026#34; description = \u0026#34;\u0026#34; draft = true image = \u0026#34;image.png\u0026#34; tags = [\u0026#34;tag1\u0026#34;, \u0026#34;tag2\u0026#34;] title = \u0026#34;post\u0026#34; toc = false +++ Contents here I use two Markdown editors that I like a lot: one is Typora, which is a WYSIWYG editor (OSX and Windows); the other one is MacDown (OSX). Both are great, and if you are just starting with Markdown, I would recommend using MacDown. 
Its help file is all written in Markdown, so you can get a good grasp of how it works.\nEdit the contents as you like, and once you are ready just change the draft flag to false and you should be able to see your blog post.\nI downloaded an image from Wikimedia Commons and saved it in the folder static/images/image.png, and the image.png reference in your post.md file will display it as a header.\nTake some time to explore more and more of the functionalities of Hugo and its content management, and once we are ready, it is time to deploy to GitHub Pages.\nDeploying to GitHub Pages Go ahead and set up your GitHub account if you don\u0026rsquo;t have one. After that, head to GitHub Pages and follow the steps to set up your repository to keep the website files.\nInitialize your blog folder with git init if you haven\u0026rsquo;t done that. Head to the Wercker website and sign up with your GitHub account.\nAfter signup, choose Create your first application or the Create/Application menu. Select the account and the repository you want to set up.\nIn the next step select the option wercker will checkout the code without using an SSH key and click on Next step. Click Finish and we can start creating our build script.\nBack in your project root folder, create a wercker.yml file using the following template, but remember to change part of the content if necessary:\nbox: debian build: steps: - arjen/hugo-build: version: \u0026#34;0.16\u0026#34; theme: hugo_theme_robust flags: --buildDrafts=true Commit and push. You should see the result in the page very quickly; give it one or two minutes to complete your build:\nThere are a few more steps in order to fully publish the project. 
Change your wercker.yml build script to:\nbox: debian build: steps: - arjen/hugo-build: version: \u0026#34;0.16\u0026#34; theme: hugo_theme_robust flags: --buildDrafts=true deploy: steps: - install-packages: packages: git ssh-client - halberom/gh-pages@0.2.3: token: $GIT_TOKEN domain: example.com #optional branch: master #optional branch to deploy after building. Will override files basedir: public Note: If you are publishing into the useraccount.github.io repository, then you should push to some other branch, like dev, and change the wercker.yml adding branch: master. If you are using Hugo for a project page, then the HTML should be published to the gh-pages branch.\nChange the domain variable to a domain that you own, or remove the key to use the github.io one. The basedir here tells Wercker to publish the built files from this subfolder. Now head to GitHub and let\u0026rsquo;s create a token to be able to deploy. Follow these steps to create it, and select only repo in the scopes section.\nAfter generating the key, go back to the Wercker page and tap on the settings button.\nTap on Environment Variables and add your GitHub token to the GIT_TOKEN key. You can select Protected.\nThe first time we pushed our project, Wercker already created our build pipeline, so now we need to change it a little bit in order to deploy. Still in the Settings page, click on Workflows. Create a new Pipeline by clicking on Add new pipeline in the box below\nThen add as follows:\nAdd the pipeline to your current workflow:\nCommit and push, and your project should be built automatically. From now on, every time you push changes to master for a project page, or any other branch for a username.github.io page, Wercker will take care of building and deploying everything for you.\nNext steps For my next post I will introduce a sample project, which will actually be split into a series of posts. We will build a Tasks application. In this series we will learn a few things, including:\nA Simple Microservices Architecture: Building for Scale; How to build a Task backend service using MongoDB as a datastore; How to build an API Gateway for our Microservices App; How to build a Web UI using Angular.js and Angular Material; How to optimize our development routine using Docker and other tools; How to apply some basic Continuous Integration in a different way; How to set up and automatically deploy all of our services independently; Note: I might change the content and subjects, and I am also open to any suggestions. If you would like to know more about some topic, please leave a comment below.\n","permalink":"https://danielfbm.github.io/post/new-start/","summary":"Introduction Well, let me first introduce myself. My name is Daniel, and I am a rather happy Brazilian currently living in Beijing, China.\nI am a technology fan, and from childhood I became very interested in computers and other electronic gadgets. My working experience is mostly in Software Development. In the beginning it was mostly focused on the .NET platform (from 2004 until 2011) on the Windows environment; later on I started to learn Unix systems and software testing.","title":"How I setup this blog"}]