- I built a sandbox to test integration platforms. -
-September 5, 2025
diff --git a/.gitignore b/.gitignore
index 0ac3294..94652ff 100644
--- a/.gitignore
+++ b/.gitignore
@@ -89,4 +89,6 @@ lerna-debug.log
 .DS_Store
 Thumbs.db
-*.obsidian*
\ No newline at end of file
+*.obsidian*
+_site/
+_site/*
\ No newline at end of file
diff --git a/.vscode/settings.json b/.vscode/settings.json
index 62aec04..ba4d87a 100644
--- a/.vscode/settings.json
+++ b/.vscode/settings.json
@@ -5,5 +5,14 @@
   // this file breaks if it's formatted
   "[sitemap]": {
     "editor.formatOnSave": false
-  }
+  },
+  "spellright.language": [
+    "en"
+  ],
+  "spellright.documentTypes": [
+    "latex",
+    "plaintext",
+    "markdown"
+  ],
+  "djlint.showInstallError": false
 }
\ No newline at end of file
diff --git a/_site/approach/index.html b/_site/approach/index.html
deleted file mode 100644
index f6328d2..0000000
--- a/_site/approach/index.html
+++ /dev/null
@@ -1,168 +0,0 @@
- Every problem is unique, so it's important to get a clear picture from the
- start. That way, we can adapt our approach to fit your situation.
-
-
- Note: You can expand every step below the image to see more details.
-
-

- We'll begin with an intro meeting. This is an open conversation about your goals, the challenges you're facing, and what you're looking for. We'll explore whether and how we can work together.
-
- If the next step isn't clear yet, we can dive deeper with a Discovery Hackathon.* During this hands-on session we'll further explore your goals, challenges, systems & processes together. This also means sketching out solutions or testing assumptions to create a shared understanding. Whether or not we end up working together, you'll walk away with new insights. And I'll be able to make a realistic proposal.
-
-
- *Timeboxed between 2 and 4 hours
-
-
- I'll create a proposal outlining the scope, deliverables, timeline, and budget. We'll go through it together to make sure we're on the same page. - -
-- Once we agree on the proposal, we'll finalise the formal things with a simple agreement. - -
-- We schedule the start date, set up communication channels, and get started. You'll know exactly what to expect and when. - -
-- I'll deliver work in close collaboration with you. You'll get regular updates where I'll ask for feedback. - -
-- Once I've delivered what we have agreed upon, we'll do a retrospective. From there, we can discuss what next step best fits your needs. We either wrap things up, start a new phase, or move into a support/maintenance phase. - -
-- With over 14 years in IT, I've held various roles, from sysadmin and application management to consultancy. Along the way, I completed part-time degrees in IT service management and IT & business. I enjoy the challenge of a good problem: analysing symptoms, identifying what's wrong and finding a solution.
-- Over time, I naturally moved toward data integration work (which I find most - enjoyable). I believe that the most transformative tech solutions begin with a - deep understanding of the problem. Thanks to my broad experience, I bring both - technical and business perspectives to every project. This allows me to - understand your challenges from multiple angles and deliver solutions that - stick. -
- -September 5, 2025
- -May 28, 2025
- -May 26, 2025
- -I'd love to hear from you!
- Whether you have a project in mind, have feedback on my content, want to learn more about my services or just feel like chatting about data integration.

-I help businesses run more efficiently by bridging the gap between their data and the actions they take.
-I achieve this in two key ways:
-First, I automate processes that involve multiple systems, eliminating manual errors, delays and repetitive tasks. This directly reduces your operating costs and minimises risks.
-Second, I help gather clear and actionable insights derived from data across different systems. This empowers you to make smarter decisions that improve your organisation's performance.
-🌍 Adapting to La Vie Française
-Having only just emigrated to France in October 2024, I'm adapting to life in Burgundy and currently learning French (A2 and counting!).
-⛰️ Outdoors
-I love the outdoors! Whether it's hiking, cycling, motoring or just going for a walk (une promenade).
-Let's talk about how we can improve your business with data-driven solutions!
- -September 5, 2025
- -Say you're in the market for a new integration solution and you want to try a few out before committing. Nearly every platform offers demos or trials. But what then? How are you going to decide whether to fully invest (time, money, training) based on a limited trial experience that may not reflect real-world usage?
-In my experience working with clients, demos are polished to look good, but nothing beats hands-on experience. For trials to succeed, you need something meaningful to test. Setting up proper test environments often requires at least VPN access, permissions for other environments or cloud services, and IT approvals. This can be challenging and time-consuming. So it's tempting to fall back on 'foo'/'bar' examples or the Pokémon API. But will this paint a clear enough picture?
-This challenge has led me to build an integration sandbox. The sandbox provides the mock endpoints to test against, so I can test integration flows immediately. My goal was to evaluate how platforms handle common integration patterns:
-By testing these features I expect to gain insight into a platform's general usability:
-Note: This leaves out performance and scalability. Any serious performance testing would require enterprise-scale infrastructure and realistic data volumes beyond this evaluation's scope.
-To test these features in a real-world (but somewhat simplified) example, I thought of a use case in Transport and Logistics. Specifically, the integration between a Shipper and a Broker.
-Imagine you are a Shipper with a TMS that needs to send orders to a Carrier. The Carrier requires all communication to go through their preferred Broker (visibility platform). -The integration platform sits in the middle, translating the TMS data to the Broker and vice versa.
sequenceDiagram
    participant TMS as TMS / Shipper
    participant IP as Integration platform
    participant VP as Broker / Visibility platform

    box transparent Sandbox
    participant TMS
    end
    box transparent Sandbox
    participant VP
    end

    TMS->>IP: New shipment
    IP->>VP: Create order
    VP->>IP: New event
    IP->>TMS: Create event
The sandbox mocks both the TMS and Broker ends of the integration use case and has REST API endpoints to authenticate, seed, trigger, get and create either TMS shipments or Broker events. It's the job of the integrator to make both mock systems work together. Here's an example of a process flow that you can integrate:
-
-flowchart TD
-A@{ shape: circle, label: "start" } --> B
-B@{ shape: rect, label: "get new shipments" } --> C
-subgraph for each shipment
- C@{shape: lean-r, label: "transform to order"} --> D
- D@{shape: rect, label: "post order"} --> E
- E@{shape: rect, label: "log result"}
-end
-E --> F@{shape: diam, label: "success?"}
- F --> |Yes| G@{shape: framed-circle, label: "End"}
- F --> |No| H@{shape: rect, label: "Handle errors"}
-
-
-I designed the sandbox with simplicity in mind. It should be easy for a single developer to maintain and test. I wanted it to run in a container so it can be deployed and used anywhere. At this stage I'm not really concerned about high performance.
-The mock APIs are built with Python and FastAPI. I chose FastAPI because it goes hand in hand with Pydantic dataclasses and has a complete set of features, like security, easy serialisation and deserialisation of JSON, and automatic generation of Swagger docs. The TMS and Broker endpoints use different JSON payloads that are generated using the Faker library. The generated data is saved in a SQLite database so that I can later validate the incoming transformations against a set of business rules. Users get an HTTP response code corresponding to the result of their requests, and if something fails they get detailed error messages.
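To give an idea of what such a mock endpoint looks like, here is a simplified sketch; the model fields, the response shape and the `SHP-` reference format are made up for illustration and are not the sandbox's actual models:

```python
# Simplified sketch of a seed endpoint; field names and the response shape
# are illustrative, not the sandbox's real models.
from faker import Faker
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
fake = Faker()


class SeedRequest(BaseModel):
    count: int = 10


class Shipment(BaseModel):
    id: int
    reference: str
    pickup_city: str
    delivery_city: str


@app.post("/api/v1/tms/shipments/seed")
def seed_shipments(req: SeedRequest) -> list[Shipment]:
    # Faker generates realistic-looking data; the real sandbox also persists
    # it to SQLite so incoming transformations can be validated later.
    return [
        Shipment(
            id=i,
            reference=fake.bothify("SHP-#####"),
            pickup_city=fake.city(),
            delivery_city=fake.city(),
        )
        for i in range(req.count)
    ]
```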
-Want to try it yourself? The sandbox is available as a Docker image:
-docker run -d -p 8000:8000 atetz/integration-sandbox:latest
Once running, you can access the API documentation at http://localhost:8000/docs and start building your integration flows immediately. The mapping specifications can be found in the repo!
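If you'd rather script against it than click through the Swagger UI, the calls look roughly like this. This is a sketch with the `requests` library; the credentials are placeholders and the exact endpoint paths should be checked against the Swagger docs:

```python
# Rough usage sketch; replace the credentials with the ones configured for
# your instance and verify the endpoint paths in the Swagger docs.
import requests

BASE = "http://localhost:8000"

# OAuth password grant: form-encoded POST that returns a short-lived JWT.
token = requests.post(
    f"{BASE}/token",
    data={"username": "your-username", "password": "your-password"},
).json()["access_token"]
headers = {"Authorization": f"Bearer {token}"}

# Seed some fake TMS shipments to integrate against...
requests.post(f"{BASE}/api/v1/tms/shipments/seed", json={"count": 100}, headers=headers)

# ...and pull a small batch, like a scheduled integration flow would.
shipments = requests.get(
    f"{BASE}/api/v1/tms/shipments/new", params={"limit": 10}, headers=headers
)
print(shipments.json())
```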
-I also have it running on AWS Lightsail with minimal effort.
In the coming weeks I'm going to put it to the test with Fluxygen, Azure Logic Apps and n8n.
-What do you think? I'd love to hear your thoughts, experiences, or even just a quick hello!
- -May 28, 2025
-While testing out my new Beeline Moto II motorbike navigation, I ran into some compatibility issues with my routes created in MyRouteApp. Namely, losing the turn-by-turn navigation while going off track. My short roadside frustration turned into a deep dive into GPX files and how to integrate the MyRouteApp format with my Beeline. Upon inspecting both files, I noticed a difference in the GPX file structure. Since GPX is defined in XML, I decided to make a tool in vanilla JavaScript and XSL that will transform the file for me. And since there are other users with the same issue, I thought it would be nice to share my solution and make it available to anyone who can benefit from it. You can find the tool here
-
But I wanted a more minimalist device in my cockpit, and I liked the idea of having my phone in my pocket instead of on my bike in case of an emergency.
So after some "very deep research" on YouTube and Google, I naturally found (or was influenced towards...) the Beeline Moto II.
-Fast forward to unboxing and using the Beeline. I was super hyped. I exported my GPX file from MyRouteApp and hit the road. And all went surprisingly well.
-
What on earth? Why is my newest piece of navigation technology not navigating? So I stopped, grabbed my phone and opened the Beeline app. Skipping a waypoint wasn't an option because I only had one waypoint. Hmm... I opened my maps app and, after memorising some villages, I got back on track and my turn-by-turn navigation was restored. Sweet!
-On my way home my inner problem solver was already working. Did I export my file wrong? Did I forget to check a box on importing?
-Soon I learned that other users had similar issues combining MyRouteApp and Beeline, and that their forum topics hit a dead end. They noted that their route seemed to be converted to a track and only had a start and end point. I also learned that it was impossible to import a Beeline GPX into MyRouteApp. I tried a couple of things on the MyRouteApp end without success. Then on the Beeline support page I found an article on importing and exporting a GPX with this note:
-Please note: you can only edit GPX-imported routes within the Beeline app if you are using the "Waypoints only" import mode. You can learn more about that mode in the article listed above. - --
Waypoints only import mode? I didn't see that option at all! But surely my GPX had waypoints? During the creation of my route I added 30 or so...
-You might be thinking: Why aren't you using the Beeline app anyway? While the Beeline app comes with route creation functionality, I find the feature set of MyRouteApp superior. I want to skip dirt roads, maximise twisty roads, maximise elevation, toggle different points of interest along the way like petrol stations, etc.
-Carrying on with my problem, I created a small test route in the Beeline app and exported a GPX file. Since the GPX file is actually an XML file, I shouldn't have any trouble figuring out what the differences are.
-This is a snippet of the Beeline GPX export without the XML declaration and namespaces:
-.....
-<!-- The route waypoints -->
- <wpt lat="47.765573749816014" lon="4.5727777081385605"/>
- <wpt lat="47.76747611439015" lon="4.570651287011771"/>
- <wpt lat="47.84947693295322" lon="4.568749244689883"/>
- .....
- <!-- The route -->
- <rte>
- <rtept lat="47.76591" lon="4.57288"/>
- <rtept lat="47.76595" lon="4.5726"/>
- <rtept lat="47.76597" lon="4.57253"/>
- <rtept lat="47.766" lon="4.57248"/>
- <rtept lat="47.76614" lon="4.57222"/>
- .....
-
-Alright, now let's have a look at the MyRouteApp GPX that I'm importing:
-.....
-<rte>
- <name>test</name>
- <rtept lat="47.767945375567" lon="4.5705699920654">
- <name>12 Route de la Jonction, 21400 Nod-sur-Seine, Frankrijk</name>
- <extensions>
- <trp:ViaPoint />
- </extensions>
- </rtept>
- ......
- </rte>
- <trk>
- <name>Track-test</name>
- <trkseg>
- <trkpt lon="4.570600" lat="47.767940" />
- <trkpt lon="4.570710" lat="47.768370" />
- <trkpt lon="4.570720" lat="47.768510" />
- <trkpt lon="4.570710" lat="47.768550" />
- <trkpt lon="4.570710" lat="47.768600" />
-.....
-
-A few things are going on here:
- The Beeline GPX has <wpt> nodes, while the MyRouteApp GPX has none.
- Both files contain a <rte>.
- Only the MyRouteApp GPX contains a <trk>.
- The Beeline <rte> segment suspiciously looks a lot like the MyRouteApp <trkseg>, because the coordinates are very close to each other.
-wptType wpt represents a waypoint, point of interest, or named feature on a map.rteType rte represents route - an ordered list of waypoints representing a series of turn points leading to a destination.trkType trk represents a track - an ordered list of points describing a path.Given the definitions and examples above. I find that the Beeline app should be using the rteType instead of individual waypoints for a route. Because that is what it's designed for. Also, the trkType is meant for tracks and it seems Beeline is using rteType for that.
As a good user, I obviously raised a ticket with Beeline, providing as much detail as possible. I was then graciously thanked for my suggestions and informed that my feedback was forwarded up the chain. Great! But knowing that technical feedback like this often gets dismissed as subjective interpretation rather than standards compliance, I knew I had to work on a solution in the meantime.
-Knowing the differences between the formats, the workaround was relatively straightforward: I only had to transform the MyRouteApp GPX to a Beeline GPX. To make my MyRouteApp GPX compatible with the Beeline app I decided to:
- Convert the <rtept> nodes to <wpt> nodes.
- Convert the <trkseg> to a <rte>.

My first test file was hacked together using some good old copy, paste, search and replace.
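For the curious, the core of that transformation is small. Here is a rough sketch of the same idea in Python; my actual tool does it client-side with XSL and vanilla JavaScript, and the `creator` value and function names below are just placeholders:

```python
# Rough sketch of the MyRouteApp -> Beeline conversion; the published tool
# does the same thing in the browser with XSL + vanilla JavaScript.
import xml.etree.ElementTree as ET

NS = "http://www.topografix.com/GPX/1/1"
ET.register_namespace("", NS)


def q(tag: str) -> str:
    return f"{{{NS}}}{tag}"


def myrouteapp_to_beeline(src: str, dst: str) -> None:
    src_root = ET.parse(src).getroot()
    out = ET.Element(q("gpx"), {"version": "1.1", "creator": "gpx-converter"})

    # 1. Every MyRouteApp <rtept> becomes a top-level <wpt> (lat/lon only).
    for rtept in src_root.iter(q("rtept")):
        ET.SubElement(out, q("wpt"), {"lat": rtept.get("lat"), "lon": rtept.get("lon")})

    # 2. The dense <trkseg> points become the <rte> that the Beeline follows.
    rte = ET.SubElement(out, q("rte"))
    for trkpt in src_root.iter(q("trkpt")):
        ET.SubElement(rte, q("rtept"), {"lat": trkpt.get("lat"), "lon": trkpt.get("lon")})

    ET.ElementTree(out).write(dst, xml_declaration=True, encoding="UTF-8")
```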
-Et voila! Upon importing my newly created GPX I was greeted by another option: "Points de cheminement uniquem...".
-Which translates to the "Waypoints only import mode" mentioned by Beeline above.
-
This method imports the waypoints added in MyRouteApp but will recalculate the route in between them. If you want the Beeline to calculate the same or a near-identical route, I advise you to take an extra 15 minutes to add waypoints at every main road change. My approach looked like this:
-
Obviously I wasn't planning on manually editing GPX files every time I wanted to use one of my routes.
So I decided to make a tool that will transform the file for me. And since there are other users with the same issue, I thought it would be nice to share my solution and make it available to anyone who can benefit from it. The solution is simple:
-You can find the result here
-My short roadside frustration turned into a deep dive into GPX files and how to integrate the MyRouteApp format with my Beeline. While I hope that Beeline will eventually improve their compatibility, in the meantime my tool provides a practical solution. If you're facing similar issues, give the tool a try and let me know how it works for your routes. Happy riding!
- -May 26, 2025
-My goal was to make a simple page that would enable me to write something about myself and what I do. It should also be a spot where I could share my ideas. I had heard about GitHub Pages before and wanted to give it a go. Naturally, I found loads of resources naming Jekyll.
-I spun up a repo and tried it out with a simple theme. Soon I noticed that once I wanted to make some changes to the theme, it required some effort to understand and override what the previous designer intended. Should I just have picked something and called it a day? Maybe.
-But there was also a feeling lingering. I wanted something more minimalistic....
-When I was a teenager my "hobby" used to be making websites: designing a template in a graphics program, slicing it up into 1px images and using HTML + CSS2.
-Nowadays we have HTML5 and CSS3. I had heard great stories about grid and flexbox, which meant the days of fighting <div>s were long gone.
So I decided I'd roll my own. With the help of some useful libraries/tools:
-- This tool will convert a MyRouteApp gpx (version 1.1) file to a gpx that is - compatible with the Beeline moto. -
- - Select your MyRouteApp file - - - - - -
- This method imports the waypoints added in the MyRouteApp but
- will re-calculate the route in between them. If you want the Beeline
- to calculate the same or near similar route, I advise you to take an extra
- 15 min to add waypoints on every main road change. My approach looked like
- this:
-
- 
+
+
+
+flowchart TD
+A@{ shape: circle, label: "start \n(every 10min)" } --> B
+B@{ shape: rect, label: "get token" } --> C
+C@{shape: diam, label: "success?"}
+ C --> |Yes| D
+ C --> |No| E@{shape: rect, label: "handle errors"}
+D@{shape: rect, label: "Save JWT"} --> F
+F@{shape: framed-circle, label: "end"}
+
+
+1. Scheduler starts the process
+2. Get a new token from the /token endpoint
+3. Check the result
+4. Save JWT or handle the unexpected result
+
+#### TMS shipment to Broker order
+The TMS shipments will be pulled periodically from the TMS API and then transformed and delivered to the Broker API.
+
+flowchart TD
+A@{ shape: circle, label: "start" } --> B
+B@{ shape: rect, label: "get new shipments" } --> C0
+C0@{shape: diam, label: "any \nshipments?"}
+ C0 --> |Yes| C
+ C0 --> |No| C2@{shape: framed-circle, label: "end"}
+subgraph for each shipment
+ C@{shape: lean-r, label: "transform to order"} --> D
+ D@{shape: rect, label: "post order"} --> E
+ E@{shape: rect, label: "log result"}
+end
+E --> F@{shape: diam, label: "success?"}
+ F --> |Yes| G@{shape: framed-circle, label: "end"}
+ F --> |No| H@{shape: rect, label: "handle errors"}
+
+
+1. Scheduler starts the process
+2. Get new shipments from the /tms/shipments endpoint
+3. Check for shipments in response
+4. Split shipments payload into a sequence of single shipments (for each)
+ 1. Perform a data mapping to the broker format
+ 2. Create the order with the /broker/order endpoint
+ 3. Log the result
+5. Check the aggregated results for errors and handle if necessary.
+
+#### Broker event to TMS event
+The broker events are sent to a webhook which will transform and deliver them to the TMS API:
+
+flowchart TD
+A@{ shape: circle, label: "start" } --> B
+B@{ shape: rect, label: "check api key" } --> C
+C@{shape: diam, label: "valid?"}
+ C --> |Yes| D
+ C --> |No| E@{shape: rect, label: "return HTTP 401"}
+D@{shape: lean-r, label: "transform to tms event"} --> F
+F@{shape: rect, label: "post event"} --> G
+G@{shape: diam, label: "success?"}
+ G --> |Yes| H@{shape: framed-circle, label: "End"}
+ G --> |No| I@{shape: rect, label: "handle errors"}
+
+1. Inbound HTTP message starts the process
+2. The incoming webhook API key (`X-API-KEY` header) is validated
+3. Perform a data mapping to the tms format
+4. Create the event with the tms/event/shipment_id endpoint
+5. Log the result
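+
+Platform aside, the webhook leg is small enough to sketch in plain Python. The snippet below only illustrates the steps above (API-key check, mapping, forward call) and is not part of the sandbox; the route name, environment variables and the mapping itself are placeholders:
+
+```python
+# Illustration only: a bare-bones webhook doing what the integration platform
+# does in this flow. The mapping and route names are placeholders.
+import os
+
+import httpx
+from fastapi import FastAPI, Header, HTTPException
+
+app = FastAPI()
+TMS_BASE = os.environ.get("TMS_BASE", "http://localhost:8000/api/v1")
+API_KEY = os.environ.get("WEBHOOK_API_KEY", "change-me")
+
+
+def to_tms_event(broker_event: dict) -> dict:
+    # Placeholder for the data mapping (done with an XSLT later in this post).
+    return {"status": broker_event.get("event_type"), "source": "broker"}
+
+
+@app.post("/webhook/broker-events")
+async def handle_broker_event(event: dict, x_api_key: str = Header(default="")):
+    # 1-2. Validate the X-API-KEY header, otherwise return HTTP 401.
+    if x_api_key != API_KEY:
+        raise HTTPException(status_code=401)
+    # 3. Transform the broker event to the tms format.
+    tms_event = to_tms_event(event)
+    # 4. Create the event on the TMS side for the shipment in question.
+    shipment_id = event.get("shipment_id")
+    async with httpx.AsyncClient() as client:
+        resp = await client.post(f"{TMS_BASE}/tms/event/{shipment_id}", json=tms_event)
+    # 5. Log / hand back the result.
+    return {"forwarded": resp.status_code}
+```
+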
+## Integrating with Fluxygen
+Now that we have laid the groundwork, we can actually start integrating. If you want to follow along, you will first have to reach out to [Fluxygen](https://fluxygen.com/schedule-demo/) for a demo account.
+### A quick overview
+I'm not going to describe all the features in detail here. I think that [Fluxygen's academy](https://academy.fluxygen.com/docs/guides/tutorials/try_it_yourself) provides loads of detailed info. And there's also [Luke Saunders's video of Dovetail](https://youtu.be/qAHk_S3iRb8?si=t_sGuU_pjOK82udv) (the former name of Fluxygen) which describes the basics excellently.
+
+Nevertheless, I'd still like to explain some core concepts to give some context.
+Primarily there are 4 screens that users can work with:
+1. Flow manager - Provides high-level information about all the flows and lets you view detailed information, such as installation time, errors, successful executions, tracing and logs per flow.
+2. Flow designer - The place where flows are created.
+3. Tenant manager - Lets admins manage users and global settings.
+4. Tenant variables - Create, update and delete global variables.
+
+
+{% gallery "Homescreen" %}
+{% galleryImg "/assets/images/fluxygen-sandbox/fluxygen-homescreen.png", "Fluxygen homescreen", 500 %}
+{% endgallery %}
+
+
+Integrations in Fluxygen enable messages to flow from point A to point B. This is done by creating flows where users can manage the flow of messages and how they are processed. Processing is orchestrated by adding the right components in the right order.
+
+Messages have the following structure (just like HTTP messages):
+- Headers - Contains metadata / header data of the message;
+- Body - Contains the entire message (string or binary).
+The destination of a message is dependent on the next component in the flow or the settings of the component.
+
+There are 4 types of variables in Fluxygen:
+1. On the message level there are *message headers*. These are the dynamic variables within a flow. For example: if I want to store the result of an HTTP call in a temporary variable, I would use the headers.
+2. Messages also have *message properties*. Message properties contain metadata about a message and are only for internal use. These cannot be set. For example: BodySize, HeadersSize, timestamp.
+3. *Flow properties* are the static variables of a flow. I primarily use these for base URLs, folder paths, flow-specific credentials that don't automatically rotate, etc.
+4. *Tenant variables*. These can be seen as global variables. I primarily use these for storing credentials that are used by multiple flows.
+### Building the authentication flow
+As mentioned earlier, the sandbox's APIs require users to authenticate using OAuth. The grant type is a simple password credentials grant that requires the user to send their username and password in an `application/x-www-form-urlencoded` HTTP POST to the API. If all goes well, the user gets a JWT access token that is valid for 15 minutes.
+
+Since I want to use the access token from multiple flows, I created a new flow called "get token" that retrieves a new token and stores it in the *tenant variables*. Fluxygen lets you install test and production versions of your flows, and each environment can have its own set of flow properties. Because I wanted the API URL, username, and password to be configurable for different environments, I set them up as flow properties instead of hardcoding them. I also set the tracing of the flow to 1 day. This means that I can view a detailed log of the transactions and that this information is kept for 1 day.
+
+
+{% gallery "getTokenOverview" %}
+{% galleryImg "/assets/images/fluxygen-sandbox/get-token-1-overview.png", "Overview of get token flow.", 500 %}
+{% galleryImg "/assets/images/fluxygen-sandbox/get-token-2-flow-properties.png", "Flow properties.", 500 %}
+{% endgallery %}
+
+I chose to schedule the flow every 10 minutes, since this gives me 5 minutes to fix any possible issues before a token expires. After I set the Content-Type and Accept headers, I set the message body to: `username=#{username}&password=#{password}`, where the `#{variables}` refer to the flow properties. These are added via the blue # sign. The body is then sent to the sandbox's token URL via an HTTP POST using the HTTP component. I enabled *Use error route?*, which means that once the HTTP component returns a response code outside of the 200-300 range, it will trigger the error route.
+
+{% gallery "getTokenDetails" %}
+{% galleryImg "/assets/images/fluxygen-sandbox/get-token-3-scheduler.png", "Scheduler details.", 400 %}
+{% galleryImg "/assets/images/fluxygen-sandbox/get-token-4-setheaders.png", "SetHeaders.", 400 %}
+{% galleryImg "/assets/images/fluxygen-sandbox/get-token-5-setBody.png", "SetBody.", 400 %}
+{% endgallery %}
+
+If all goes well we should get an HTTP response code of 200 with a message body that looks like this:
+```json
+{
+ "access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJzYW5keSIsImV4cCI6MTc1ODQ1MDA1MX0.i3uSNpI84oPJoH7o72gopAuSgsxKCQvA36dj_dj6Nt0",
+ "token_type": "bearer"
+}
+```
+
+At this point in the flow we know that we only get valid HTTP response codes. `access_token` is the part we are interested in saving to the *tenant variables*, so I set this on a header using JsonPath. JsonPath lets you extract specific values from JSON responses. In this case I can get the access token with: `$.access_token`.
+
+But a valid HTTP status does not necessarily mean that the body is exactly how we want it to be, and I surely do not want to save an empty or invalid value to my variables. To catch these kinds of differences I added a header that holds the length of the token, this time using a [simple expression](https://camel.apache.org/components/4.14.x/languages/simple-language.html): `${header.access-token?.length()}`. Simple is a language shipped with Apache Camel that avoids the need for full scripting in simpler use cases.
+
+Next I added a content router that checks if the `access-token-length` header is greater than 0. If so, it will proceed and save the value to the *tenant variables*. Note in the images that I have added the `Bearer ` prefix to the variable. This makes it easier to use the value further down the line directly in an `Authorization` header. If not, it stops at a log component. In my example this situation is not handled any further, but this route could, for example, send a notification or perform some custom handling according to what the business users want to know.
+
+{% gallery "getTokenDetails" %}
+{% galleryImg "/assets/images/fluxygen-sandbox/get-token-7-content-router.png", "ContentRouter", 400 %}
+{% galleryImg "/assets/images/fluxygen-sandbox/get-token-8-set-tenant-var.png", "SetTenantVars.", 400 %}
+{% endgallery %}
+
+
+### Installing and checking the authentication flow
+From the flow designer, the play icon on the right lets users install a flow in that environment immediately. Once started, the environment turns green. To check if the flow runs as it should, I can quickly navigate to the flow details via the folder icon next to the stop icon.
+
+{% gallery "getTokenInstalled" %}
+{% galleryImg "/assets/images/fluxygen-sandbox/get-token-9-installed.png", "Installed.", 800 %}
+{% endgallery %}
+
+The flow details show the general stats first. Here we can see the status of the flow, general settings and how many exchanges were completed, pending or failed. The next tab that I use often is the transactions tab. On this tab it's easy to see how many times the flow has executed and also the exact inputs of every component in the flow. Pro tip: since the tracing only shows the input of a component, I like to end a branch of a flow with a log component so that I can see all relevant outputs in the tracing.
+
+{% gallery "getTokenTransactions" %}
+{% galleryImg "/assets/images/fluxygen-sandbox/get-token-11-transactions.png", "Scheduler details.", 400 %}
+{% galleryImg "/assets/images/fluxygen-sandbox/get-token-12-transactions-expanded.png", "Scheduler details.", 400 %}
+{% endgallery %}
+
+
+So far so good! There are no errors and every component seems to have processed the message the way I wanted it to. Let's go to the *tenant variables* screen to check if the flow has saved the access token.
+
+{% gallery "getTokenTenantVars" %}
+{% galleryImg "/assets/images/fluxygen-sandbox/get-token-13-tenant-variables.png", "Tenant variables", 800 %}
+{% endgallery %}
+
+
+Perfect!
+Since auth is working, we can now start building the TMS shipment to Broker order flow.
+### Building the TMS shipment to Broker order flow
+As mentioned earlier, we want to process new shipments on a schedule. But before we dive into creating the flow, we need to make sure that there are new shipments in the sandbox.
+
+We can seed a number of shipments by sending an *authenticated HTTP POST request* to `#{base_url}/api/v1/tms/shipments/seed` with the following body:
+
+```json
+{"count": 100}
+```
+
+For tasks like these, and for creating proofs of concept in general, I like to use [Postman](https://www.postman.com/) as my HTTP client. If you're a Postman user then you're in luck: I have exported my collection for [anyone to use](/assets/examples/Sandbox.postman_collection.json). It uses a couple of environment variables and has a small utility script that stores the result of the `/token` call into the variables, which saves me from copying and pasting the Bearer token every 15 minutes.
+
+I gave the flow a clear and descriptive name that matches the process: *new tms shipment to broker order*. For this process I don't have a real business requirement for the time schedule so I decided to go with 5 minutes. The scheduler will trigger the flow as soon as it is installed.
+
+{% gallery "shipmentToBrokerOverview" %}
+{% galleryImg "/assets/images/fluxygen-sandbox/shipment-to-broker-1-overview.png", "Shipment to broker overview.", 800 %}
+{% endgallery %}
+
+Over the course of developing an integration I will have created and tested many iterations in a short period of time. Preferably I install and test after adding each component, keeping the feedback loop as short as possible. Fluxygen has built-in versioning and requires me to create a new version after any change. This makes it very easy to switch between versions and revert if necessary, which is very useful for experimenting with expressions.
+
+The first priority after the flow triggers is setting the correct credentials for the request. Since the authentication flow already stores the token, I only need to get the right tenant variable and set the value on a header named Authorization.
+
+With the authentication in place, I perform an HTTP GET request to `{base_url}/api/v1/tms/shipments/new?limit=10`. I've added the `limit=10` query parameter to have a nice small sample to work with.
+
+Ideally the API returns a list of shipments, but there are cases where there aren't any new shipments to process. To stop the flow when there are no new shipments, I added a filter that checks if the response body isn't null: `${bodyAs(String)} != 'null'`
+
+Now that I can trust that only a list of shipments passes through the filter, I split the message to process each shipment separately. In this context a split works like a *for each*. I configured the split component with JsonPath `$[*]` and set the *Aggregation* to *JSON*.
+
+{% gallery "aggregatedSplit" %}
+{% galleryImg "/assets/images/fluxygen-sandbox/shipment-to-broker-2-aggregated-split.png", "Aggregated split.", 300 %}
+{% endgallery %}
+
+From that point on all of the components attached to the bottom part of the split component are executed as a sub-process *for each shipment*. The *Aggregation* setting enables me to collect the result of each sub-process. After all shipments have been processed, the aggregated result is sent back to the main process as output of the split component. I can later use this in the main process to check if there were any errors.
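+
+Conceptually, the split plus aggregation is just a *for each* that collects its results. A plain-Python illustration (not Fluxygen code; all names are made up):
+
+```python
+# Conceptual illustration of split + aggregation: every shipment runs through
+# its own sub-process and the collected results go back to the main flow.
+def transform_to_order(shipment: dict) -> dict:
+    return {"reference": shipment["id"]}  # stand-in for the real data mapping
+
+
+def post_order(order: dict) -> int:
+    return 201  # stand-in for the HTTP POST to the broker API
+
+
+def process_shipments(shipments: list[dict]) -> list[dict]:
+    results = []
+    for shipment in shipments:                    # the split ("for each")
+        order = transform_to_order(shipment)      # transform to order
+        status = post_order(order)                # post order
+        results.append({"id": shipment["id"], "status": status})  # log result
+    return results                                # the aggregated output
+
+
+aggregated = process_shipments([{"id": 1}, {"id": 2}])
+errors = [r for r in aggregated if not 200 <= r["status"] < 300]
+if errors:
+    print("handle errors:", errors)
+```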
+
+The sub-process transforms the shipment and sends it to the broker API.
+One of the things that Fluxygen *unfortunately does not have* is a built-in data mapper. Fortunately there are multiple ways to perform a data mapping with some templating or scripting:
+- XML files can be transformed with an XSLT
+- Scripting with JavaScript or GroovyScript
+- Templating with the Velocity templating engine
+
+To stay in the low-code theme, Fluxygen recommends using [Altova MapForce](https://www.altova.com/MapForce) as a mapping tool. MapForce is a very powerful graphical data mapping tool that supports a wide range of data formats. In this case I'll use it to make an XSLT.
+
+You might think: *XSLT?! But we have been working with JSON!* That's correct! In integration projects, the tool of choice often depends on who will maintain the mappings:
+- Do we want business users to be able to modify mappings themselves (low-code)?
+- Are we okay with all changes requiring developer involvement (code)?
+For this use case, this means we'll introduce some format-conversion overhead for the sake of maintainability. And while this may introduce other challenges, I'll show how I deal with them to make them less painful.
+
+For the XSLT setup, I first add a *JsonToXMLSimple* component. As the name states, this is a simple component that transforms a JSON body to XML. It has [some quirks](https://academy.fluxygen.com/docs/components/transformations/json_to_xml_simple#array-element-name) but in general I follow this rule:
+- For *one-way conversion* (JSON→XML→XSLT), JsonToXMLSimple is fine
+- For *two-way conversion* (JSON→XML→JSON), a typed XML with JsonToXML is better.
+
+
+
+Take for example the JsonToXMLSimple component with the following input:
+
+```json
+{
+ "id": 1,
+ "name": "Example",
+ "list": [
+ "a",
+ "b",
+ "c"
+ ]
+}
+```
+
+This will result in XML along these lines (the exact element names depend on the component's settings):
+```xml
+<root>
+  <id>1</id>
+  <name>Example</name>
+  <list>a</list>
+  <list>b</list>
+  <list>c</list>
+</root>
+```
+
+ `.replace(/(\r\n|\n|\r)/gm, "");
+ return output;
+}
+
+function galleryShortcode(content, name, imgPerCol) {
+ if (imgPerCol === undefined) {
+    const nImg = (content.match(/<img/g) || []).length;
+    if (nImg > 1) {
+ imgPerCol = 3;
+ }
+ }
+ return `
+
+
+