Feature/activemq camel stomp #698
base: develop
Conversation
adamkorynta commented Jun 13, 2024 (edited)
- Adds Apache Camel routing from Oracle AQ to the Artemis server.
- The Artemis server is set up to serve connections over the CORE protocol and STOMP, with httpEnabled to allow WebSocket access.
- Added a proof-of-concept Node.js client for accessing STOMP messages over WebSockets.
- Added basic authentication that requires the client to provide a username and an apikey that match. The CDA user also creates an apikey at startup for the in-VM connection between Oracle AQ and Apache Camel, since the Artemis security manager API doesn't allow acceptor-based security. The key is generated as a UUID.
- Added a GET endpoint that provides the list of topics Artemis supports connections for.
- Artemis topics are named "CDA.." with a secondary "CDA..ALL" topic for clients that want all messages for a given office.
- The durableSubscriberName Apache Camel uses to connect to Oracle AQ is the Oracle queue name, since it needs to be unique per queue, while the clientId is the current localhost canonical name. The clientId should be unique between CDA instances but consistent between restarts, so this needs updating for Docker containers.
- Queues do not start automatically; instead they start when the first client subscribes to them, via an afterCreateConsumer hook.
- Server persistence is not enabled in the default broker.xml. It could use the file system on T7 instances or PostgreSQL in the cloud. For Artemis cluster nodes to cooperate so durable subscriptions don't miss messages when load balanced, the persistence database needs to be shared by all nodes in the cluster.
- Currently the same DataSource is used for Oracle AQ connections as for the rest of CDA. This causes an issue because there needs to be a single connection per Oracle queue, which can get quite aggressive. Future updates should define a separate, appropriately sized DataSource.
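The username/apikey scheme described above can be sketched roughly as follows. The class and method names here are hypothetical (not from the PR), but the behavior matches the description: the CDA user is issued a fresh UUID apikey at startup for the in-VM connection, a login succeeds only when the username and apikey match, and the clientId is the localhost canonical name.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the username/apikey scheme described above.
public final class ApiKeyRegistry {

    private final Map<String, String> keysByUser = new ConcurrentHashMap<>();

    /** Generate and register a random UUID apikey for a user (e.g. the CDA user at startup). */
    public String issueKey(String username) {
        String apikey = UUID.randomUUID().toString();
        keysByUser.put(username, apikey);
        return apikey;
    }

    /** A login is valid only if the supplied apikey matches the one on record for that user. */
    public boolean validate(String username, String apikey) {
        return apikey != null && apikey.equals(keysByUser.get(username));
    }

    /** The clientId scheme from the description: the local host's canonical name. */
    public static String clientId() {
        try {
            return InetAddress.getLocalHost().getCanonicalHostName();
        } catch (UnknownHostException e) {
            return "localhost"; // fallback; real code may want to fail fast instead
        }
    }
}
```

Because the key is a random UUID regenerated at startup, it never needs to be stored or configured, which fits the in-VM use case; any client outside the VM would still need its own issued apikey.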
According to the documentation (https://activemq.apache.org/components/artemis/documentation/2.19.0/using-jms.html), tunneling the TCP connection over HTTP is supported. Interestingly, when the client subscription is set up as durable, the client receives all missed messages before the ClassCastException is thrown (but no further messages are sent to the client). I don't know if this error was fixed in a newer version of Artemis, since JDK 11 is required beyond the 2.19.1 version I am testing. I did update the client to the latest version, and that didn't change anything.
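For reference, an Artemis acceptor configuration serving both CORE and STOMP with WebSocket access enabled looks roughly like the broker.xml fragment below. The port numbers and acceptor names are illustrative, not taken from this PR; httpEnabled is the acceptor parameter the description above refers to.

```xml
<!-- Sketch of a broker.xml acceptors section; ports and names are illustrative. -->
<acceptors>
  <!-- CORE protocol for the in-VM / Camel side -->
  <acceptor name="core">tcp://0.0.0.0:61616?protocols=CORE</acceptor>
  <!-- STOMP with httpEnabled so browser clients can connect over WebSockets -->
  <acceptor name="stomp">tcp://0.0.0.0:61613?protocols=STOMP;httpEnabled=true</acceptor>
</acceptors>
```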
…ut through ActiveMQ Artemis utilizes Apache Camel with the AQAPI connection factory to hook into Oracle queues and route them to Artemis. TODOs:
- determine Artemis connection/port
- determine which Oracle queues to subscribe to
- determine clientId/durable subscriber name
- register Artemis queues with the Swagger UI and standardize naming

Artemis architecture docs: https://activemq.apache.org/components/artemis/documentation/2.4.0/architecture.html
…et test. Reworked CamelServletContextListener.java to switch the servlet annotation and revise message connections and endpoints. Added a new package.json file for WebSocket testing and a processor for message-to-JSON processing. Created a new JavaScript file for testing STOMP over WebSockets using Node.js. This commit lays the groundwork for more effective messaging and debugging.
Force-pushed from 2da9087 to 4b9bde3.
Groups routes for the office as well as individual routes. Adds a basic security mechanism that requires the client to specify a username and an apikey for the password. Creates a specific CDA apikey for authenticating within the VM.
Force-pushed from 4b9bde3 to 7e0cc56.
Known issues that should get addressed: