Simple Python script to get quotes and minute bars from Alpaca and publish them. Details:
- Quotes are retrieved by polling
- Bars are delivered asynchronously through websockets
- Quotes and bars are printed to stdout, while log output goes to stderr (see the example below)
- A FastAPI server is started on port 8004 (configurable), and prometheus metrics are accessible at `http://localhost:8004/metrics/`
- Quote and bar price/volume/size info is published via prometheus gauges with the symbol as a label
- Prometheus counters for quote/bar/trade/error counts are also available
- If kafka is enabled (KAFKA_DISABLE unset), quotes and bars are published to kafka (set KAFKA_BOOTSTRAP)
- Only new quotes are published (i.e. an unchanged quote is not re-published)
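
Because data goes to stdout and logs go to stderr, the two streams are easy to separate when running locally. A minimal illustration (the file names are arbitrary):

```bash
# Capture quotes/bars (stdout) in one file and log output (stderr) in another
python3 main.py > quotes.out 2> watcher.log
```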

First obtain API credentials for your Alpaca account and set the ALPACA_SECRET and ALPACA_KEY environment variables to your secret and key values respectively.
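
For example, in a bash shell (the values shown are placeholders):

```bash
# Substitute the key and secret generated in your Alpaca dashboard
export ALPACA_KEY='<your-api-key-id>'
export ALPACA_SECRET='<your-api-secret>'
```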

As a local Python script:

- [optional] create a venv and activate it
- `pip3 install -r requirements.txt`
- Configure the STOCK_SYMBOLS, KAFKA_BOOTSTRAP or KAFKA_DISABLE, and ALPACA_POLL_SECONDS environment variables if required (see below)
- `python3 main.py`
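
Once it is running, you can sanity-check the metrics endpoint from another terminal (assuming the default SERVICE_PORT of 8004):

```bash
# List the quote metrics exposed by the app (may be empty until the first quotes arrive)
curl -s http://localhost:8004/metrics/ | grep '^quote_'
```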

Or run the published docker image:

```
$ docker pull drpump/stock-watcher
$ docker run -it -p 8004:8004 -e ALPACA_KEY=${ALPACA_KEY} -e ALPACA_SECRET=${ALPACA_SECRET} -e KAFKA_DISABLE='' drpump/stock-watcher
```

Suggest running locally first to confirm your credentials and connectivity are OK.

Kafka (optional):

- Install the strimzi kubernetes operator for kafka and create a kafka cluster: see the quickstarts (example commands below), and use the cluster name `dev-cluster` if you want to minimise editing of manifests
- `kubectl apply -f manifests/kafka-topics.yaml`, modifying the cluster name if desired
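
The Strimzi quickstart install looks roughly like this at the time of writing; treat it as a sketch and check the Strimzi documentation for the current manifest URLs:

```bash
# Install the Strimzi operator into a `kafka` namespace (per the Strimzi quickstart)
kubectl create namespace kafka
kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
# Then apply one of the quickstart's example Kafka cluster manifests (renamed to
# dev-cluster) and wait for the cluster to become Ready:
kubectl -n kafka wait kafka/dev-cluster --for=condition=Ready --timeout=300s
```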

Prometheus (recommended but sometimes finicky to install):

- Install the prometheus stack, suggest using the helm chart (example install below)
- `kubectl apply -f manifests/stock-watcher`
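
One common way to install it, assuming the chart in question is prometheus-community's kube-prometheus-stack (the release and namespace names here are just examples):

```bash
# Add the community chart repo and install kube-prometheus-stack
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace
```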

Watcher app:

- Create a k8s secret with these credentials: `kubectl create secret generic alpaca-creds --from-literal=ALPACA_KEY=${ALPACA_KEY} --from-literal=ALPACA_SECRET=${ALPACA_SECRET}`
- Edit `manifests/stock-watcher.yaml` and set your STOCK_SYMBOLS, KAFKA_BOOTSTRAP and ALPACA_POLL_SECONDS environment variables. If not using kafka, add a KAFKA_DISABLE variable.
- `kubectl apply -f manifests/stock-watcher.yaml`
- To see kafka output (replace the bootstrap url if required): `kubectl -n kafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.40.0-kafka-3.7.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server dev-cluster-kafka-bootstrap:9092 --topic stock-quotes`
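
To confirm the watcher is up, something like the following should work; the label and deployment names are assumptions, so check `manifests/stock-watcher.yaml` for the real ones:

```bash
# Names below are assumptions -- adjust to match manifests/stock-watcher.yaml
kubectl get pods -l app=stock-watcher
kubectl logs -f deployment/stock-watcher
```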

For prometheus/grafana:

- Open grafana and choose `Explore`
- Choose `prometheus` as the source
- In the query dialogue, set the metric to `quote_bid` and optionally filter by symbol
- Hey presto, you'll get a graph of the bid prices
- Optionally add another query for the `quote_ask` metric and see them plotted together
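
If grafana came from the kube-prometheus-stack install sketched above, a port-forward is the quickest way to reach it; the service name below matches that release name and is otherwise an assumption:

```bash
# Forward the grafana service to localhost:3000, then open http://localhost:3000
kubectl -n monitoring port-forward svc/kube-prometheus-stack-grafana 3000:80
# In Explore, a query like this plots the bid price for a single symbol:
#   quote_bid{symbol="AAPL"}
```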
The app uses the following environment variables for config:

- STOCK_SYMBOLS: comma-separated list of symbols to poll/watch, no spaces permitted. Default is `RMD,AAPL` (ResMed and Apple).
- ALPACA_KEY: your Alpaca API key
- ALPACA_SECRET: your Alpaca API secret
- KAFKA_BOOTSTRAP: Kafka bootstrap host+port for publishing quotes to Kafka. Default is `dev-cluster-kafka-bootstrap.kafka.svc.cluster.local:9092` (strimzi k8s cluster called `dev-cluster` in the `kafka` namespace).
- KAFKA_DISABLE: if set to any value, kafka will not be used. Quotes will be printed to stdout in JSON format and price + count metrics will still be accessible on the FastAPI endpoint.
- ALPACA_POLL_SECONDS: polling interval in seconds, default is 60s (the same as the Alpaca bars interval)
- SERVICE_PORT: port used to serve prometheus metrics and livez/readyz (health) checks over HTTP. Default is 8004. Retrieve metrics from `http://localhost:${SERVICE_PORT}/metrics/`.
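
For example, to run locally with a different watch list, a faster poll and kafka disabled:

```bash
# The symbols and interval here are examples; any value works for KAFKA_DISABLE
STOCK_SYMBOLS=MSFT,NVDA ALPACA_POLL_SECONDS=30 KAFKA_DISABLE=1 python3 main.py
```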
- Alpaca free accounts can only access the IEX exchange. Quotes, bars and trades from this exchange are considerably less frequent than for other exchanges. For example, RMD gets a new bar every 1-5 minutes on average trading days.
- Assuming you run a Prometheus server, you will configure it to poll the price gauges every N seconds. If there has been no update since the last poll it scrapes the same value again; if there were multiple updates, it only sees the last one in the period.
- If you want to better reflect all updates in your analytics, Kafka + Flink will allow you to do near-real-time analytics on all messages in the data streams. You could also add the received messages to a time-series database for more static analysis.
- A zero ask/bid price in the quote data means that there are no sellers/buyers, not that the price is zero, so these values are not added to the corresponding prometheus price gauges.
- The trades push feed is currently untested and has no prometheus gauges.