Do you use MetalLB? Tell us! #5
Comments
We're not using it yet, but our K8s clusters are 100% metallic, running in our data centers and on customer premises. We're going to be trying this out in the new year. Our deployment would in most cases have this peer directly with our routers. We're mostly a Cisco shop. Which BGP implementations have you tested with?
So far, I've only tested with OSS BGP implementations, mainly BIRD. I used to have some Juniper and Cisco hardware that could speak BGP, but I got rid of it :(. That said, MetalLB currently speaks a very conventional, "has existed forever" dialect of BGP, so it should work fine. I foresee two possible issues interacting with Cisco's implementation:
If you hit any interop problems, please file bugs! Ideally with a pcap that captures the initial exchange of BGP OPEN messages and a few UPDATE messages, so that I can add regression tests.
A colleague brought this project to my attention recently. We're running bare-metal k8s with BGP-enabled hosts and pods, so this looks very interesting. Unfortunately we are using 32-bit AS numbers in our setup, so we can't test it right away, but it looks promising!
@xnaveira MetalLB supports 32-bit AS numbers! It's the one BGP extension that's implemented in v0.1.0. v0.2.0 will also support MP-BGP, so if you use Quagga or Cisco devices, you'll want to wait for 0.2.0.
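For anyone curious what a 32-bit ASN peering looks like in practice, here is a minimal sketch. Note that MetalLB of the v0.1.x era was configured through a ConfigMap, while modern releases use the BGPPeer custom resource shown below; the peer name, addresses, and ASNs are made-up examples, not values from this thread:

```shell
# Sketch: peering MetalLB with a router using 32-bit (4-byte) ASNs.
# All concrete values here are hypothetical placeholders.
kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: router-1              # hypothetical peer name
  namespace: metallb-system
spec:
  myASN: 4200000001           # 32-bit private ASN used by the cluster
  peerASN: 4200000000         # 32-bit private ASN of the upstream router
  peerAddress: 10.0.0.1       # example router address
EOF
```

Both ASNs above fall in the 4-byte private range (4200000000-4294967294), which is the case that plain 2-byte BGP implementations cannot express.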
That is awesome @danderson, I'll give it a try ASAP then.
I saw a post on reddit.com about MetalLB, and it is what I had been looking for for a long time for my local K8s on a Mac. It took about one day of following the tutorial and experimenting, and finally I have it up and running. Now I can deploy services in k8s with LoadBalancer without them staying "pending". Awesome.
We have successfully deployed a patched version of MetalLB on a staging K8s cluster at @naitways, peered with JunOS hardware appliances. Awesome! :)
\o/ What patching did you need to interface with JunOS? I'm going to spin up a virtual JunOS later today and do interop testing, but it sounds like you already found the issues?
@danderson happy to support some bare metal and diverse switching (Juniper) + local and global BGP testing at Packet.net. Maybe take advantage of https://github.com/cncf/cluster?
We've just deployed MetalLB to our new test env on a 1.9 K8s cluster, and are eagerly watching #7 and #83. We've got a use case where any and all pods can be publicly routable, so we're looking at having an IPv6-only cluster with each pod/svc getting an IPv6 address. We need non-HTTP services, so we're planning to use IPs to identify pods when SNI/TLS isn't available (we're launching with thousands of these this summer and expect that to grow to 10s of thousands in 12-18 months). We're aligning some things with the K8s 1.10 release for beta IPv6 support and will probably be running alpha versions of stuff in test leading up to launch.
I just tried this in ARP mode in my k8s cluster at home. Works so far, thanks for this great project 👍
Just jumping in as well. I'm using ARP mode in my home lab with a cluster I set up following Kubernetes the Hard Way to learn Kubernetes. I'm using Weave Net for my cluster networking and running on XenServer VMs. I haven't gotten MetalLB running correctly yet, but I'm working on it. :)
We are using MetalLB on a 30-node k8s cluster (internally) in production. So far, no issues.
Just wanted to say thank you for this project! I spent hours trying to figure out several issues with keepalived-vip before stumbling across MetalLB. Installed it in 5 minutes and it just works (and is a more elegant approach, too). Time will tell how stable it is, but no issues whatsoever so far!
Running it on both a 3-node OrangePi PC2 cluster and a 3-VM CoreOS cluster with k8s 1.9.
Hi, we will use this when we launch our ongoing k8s project.
This project is lovely. I'm using it on my home cluster, and plan to use it on my cluster at work. This project makes k8s fit into my org.
<3
Using this project on my homelab to facilitate Ansible configuration of single-container pods is a breeze. Amazing work!
We are using this to try out Kubernetes on our own hardware. Thanks for creating it.
Running it against Juniper SRX and MX hardware. Thanks for making it!
I am evaluating it, and I have learned a lot about Kubernetes by studying it.
My company started trying out MetalLB. I removed the BGP peering from Calico and used it with MetalLB, which finally worked. However, it's a pity that MetalLB doesn't have some LVS-style way of attaching IPs in layer 2 mode that wouldn't use ARP requests; that would be useful and less error-prone in most networks.
I'm getting my feet wet with Kubernetes at my company. MetalLB has proven really useful for repeatable configuration of Ingresses (using nginx-ingress) on available layer 2 IPs. Thanks for this useful software!
I'm running it at home in my testing Kubernetes cluster (which is running Weave). It's connected to my pfSense router via BGP. This setup would be perfect in a datacenter with a specific IP space for a DMZ. Thanks for an awesome piece of software!
I am running it in my home lab, having switched to MetalLB after using keepalived-vip for a while and not finding it stable enough. Keepalived-vip would work fine for a while and then lock up, forcing me to manually delete the "master" pod, after which things would start working again. I hope my worries are over now, using MetalLB.
I've been developing a bare-metal k8s cluster at our office, mostly for educational purposes as well as for running some small internal apps. MetalLB in Layer 2 mode was a great solution for me: I can deploy an ingress controller and use Ingresses instead of having to use NodePorts. Source repo.
So I'm the first MetalLB user to leave a comment here in 2023.
Running MetalLB on k3s for DNS, web proxy, other bs. I just delegated a /26, and that was pretty much most of the work. Honestly shocked how easy it was to get working. I fully expected fussiness when I saw BGP was an option 🐉 but a basic L2 advertisement just works. I had so few problems I didn't even look at any source, haha. That is extremely rare for me, not because of bugs, but because the user experience is usually obscure or fiddly. So yeah, out of the box, after following the terse, complete docs: no issues at all.
We are looking at a specific use case for BGP advertisements from K8s. We already have a solution for this in a monolithic architecture, and we are looking for ways to implement it in a K8s cluster. Our solution is an Application Firewall/IDS that is deployed in transport networks. We are interested in filtering traffic that runs on a specific port (let's say UDP 1234). In our monolith deployment we use a BGP FlowSpec plugin for that. To make a long story short, whenever a customer suspects that traffic on port 1234 from/to a specific IP network is subject to external attack, we are able to start advertising FlowSpec rules to their network. This way we can filter the specific traffic transparently. We are thinking of going cloud native with our solution; however, FlowSpec in k8s seems to be our greatest concern. If anyone has any suggestions on how to resolve our problem in k8s, I will be very happy to hear them!
I've been using it with pfSense's FRR BGP package and a K3s cluster on VMs. Really easy to set up and use, but I'm running into an issue that is making BGP mode quite the pain.
I'm running MetalLB on a ROCK64 k3s cluster. So far, it has worked flawlessly.
I'm using MetalLB for my home lab cluster, consisting of discarded machines that family and friends "donated".
My team uses MetalLB on Rancher-provisioned RKE1 clusters running within our on-premise VMware environment. Our use case for MetalLB is to be able to configure our nginx ingress controller with a Service of type LoadBalancer.
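That pattern — fronting an ingress controller with a MetalLB-assigned address — boils down to a plain LoadBalancer Service. A minimal sketch follows; the namespace, selector labels, and pool name are hypothetical, and the address-pool annotation key has varied across MetalLB versions, so check the docs for the release you run:

```shell
# Sketch: expose an nginx ingress controller through MetalLB.
# Names, labels, and the pool annotation are illustrative assumptions.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    # Pin the Service to a specific pool (optional; pool name is made up).
    metallb.universe.tf/address-pool: prod-pool
spec:
  type: LoadBalancer        # MetalLB assigns the external IP
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
EOF
```

Without MetalLB (or another load-balancer implementation), a Service like this would sit at "pending" forever on bare metal, which is exactly the gap several commenters above describe.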
Very thankful for this project. Running MetalLB on Rancher Kubernetes Engine 2 (RKE2) in an on-premises virtualised environment. Calico CNI, BGP route advertisement with core switches.
Running MetalLB on Kubernetes in production and homelab, with BGP mode.
Running MetalLB in various homelab environments. Quite happy about it, thanks :-)
I am running MetalLB on k3s deployed on DO droplets. L2 mode works quite well!
Not using it yet... but looking at using commodity (cheap) old desktop hardware for a cheap bare-metal k8s home lab to basically build personal private hosting.
sending warm fuzzy feelings from a warm and fuzzy server rack in my flat. thanks for this project.
Running MetalLB on Kubernetes in production with L2Advertisement and IPAddressPool.
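For readers who haven't seen the L2Advertisement and IPAddressPool resources mentioned here, a minimal sketch of an L2-mode setup looks like this (the pool name and address range are made-up examples for a typical home/lab network):

```shell
# Sketch: minimal MetalLB L2-mode configuration.
# Pool name and address range are hypothetical examples.
kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: pool-1
  namespace: metallb-system
spec:
  addresses:
  - 192.168.10.240-192.168.10.250   # range MetalLB may hand out
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-adv
  namespace: metallb-system
spec:
  ipAddressPools:
  - pool-1                          # announce addresses from this pool via ARP/NDP
EOF
```

With these two objects in place, any Service of type LoadBalancer gets an address from the pool, and one node answers ARP/NDP for it on the local network.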
Running MetalLB on k8s in multiple prod and pre-prod environments for various Automotive Tier 1 suppliers, with L2Advertisement and IPAddressPool, for more than 2 years.
MetalLB is great! It's one of the very first things I looked for and implemented when I started playing with my homelab Kubernetes cluster. A couple of years back I was using it to directly expose my services; these days I use MetalLB together with the Gateway API (NGINX Gateway Fabric).
Hi! We are https://www.emsysgrid.com and I just finished evaluating MetalLB. We are planning to use it for many of our K3s clusters (both internal and on-premise) where external load balancers are not available. Many of our clusters are "on the edge" and (mostly) air-gapped.
I previously evaluated https://kube-vip.io for this purpose, but unfortunately I wasn't able to make it work reliably. I don't mean to discredit kube-vip at all; I am sure it works great for many people! But MetalLB handled the simulated failures that I tested it with very gracefully, which impressed me a lot. We are only using MetalLB in L2 mode, and we deploy it with Kustomize and some Kustomize patches.
The only thing that annoyed me a little bit is that MetalLB needs an additional port open between the nodes of the cluster for the memberlist component. Open ports are hard to come by on systems we don't manage ourselves, so this will need some communication with our customers. Kube-vip uses https://pkg.go.dev/k8s.io/client-go/tools/leaderelection and is able to avoid any additional port requirements this way. I understand that MetalLB does not elect a leader, but I still wonder if there could be a Kubernetes-native alternative to the current memberlist implementation.
Thanks to all the maintainers of this beautiful project!
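On the port question: the memberlist gossip between MetalLB speakers uses port 7946 (TCP and UDP) by default, the same default as other memberlist-based tools. A sketch of opening it between nodes on a firewalld-based distribution follows; the zone name is an assumption about your setup:

```shell
# Sketch: allow MetalLB's memberlist gossip (default port 7946) between
# cluster nodes. The "internal" zone is a hypothetical example; use the
# zone that covers your node-to-node network.
firewall-cmd --zone=internal --add-port=7946/tcp --permanent
firewall-cmd --zone=internal --add-port=7946/udp --permanent
firewall-cmd --reload
```

If you document this one port for your customers up front, the rest of the L2-mode deployment has no further inter-node port requirements beyond what Kubernetes itself needs.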
Using it on my microk8s and my Turing Pi cluster.
Absolutely amazing product. I tried to implement KubeVIP as my service LB before, and it was a nightmare; I never did get it working properly. Shout out to @timothystewart6 (Techno Tim) for getting me on board with it! And HUGE shout out to the MetalLB team, this is awesome!!! I am using it in my professional career, and my personal! Keep up the awesome work! I would definitely sponsor this project if it had a sponsor option. Instead I will star it, keep an eye on issues, and contribute.
I've been using MetalLB for six years on many different projects. Anytime I have a cluster on-prem, or in some cloud that does not have its own LB offering, MetalLB is the first choice. I also build clusters entirely in RAM (storage too) daily, just to test things without wasting SSD writes, using a tool I wrote called kubash. During this entire time MetalLB has been one of the most stable and maintenance-free aspects of the entire k8s universe.
I love MetalLB!!!! I want to say thank you so much for this incredible creation. I am running it at home, where I am building an Internal Developer Platform on 3 Pi 4s and a Synology. I am writing a blog series to help others set this up too.
Running MetalLB in production in our company for 1 year using L2 advertisement. No issues; I hope there will be many more years. Thank you for this amazing product.
Hello!
Been using MetalLB as part of my default Kubernetes stack deployed in OpenStack for 3+ years. It has performed amazingly.
I had a great time setting up MetalLB. I have been using it for over two years on my server rack, and recently added two more Turing Pis and a couple of Jetsons to my on-prem setup; it has been a breeze. Thank you for supporting this project!
I read these comments and started using MetalLB in production about 3 months ago.
I've been using it in my bare-metal OpenShift 4.17 cluster for about 3 months now and love it! I use it in conjunction with a MikroTik router running RouterOS 7.16.2. The only issue I have come across is a port conflict with the Node Feature Discovery operator when you want to install NVIDIA GPUs. This can be worked around by modifying the frr-k8s daemonset to use a port other than 8081 for health checks; I changed it to 8085 and all the pods came back up. It would be nice to have this port configurable in the MetalLB CR. Thanks for all of your efforts on this great project!!
FWIW, the problem is already fixed upstream: metallb/frr-k8s#224
Use and love - thanks all for building something very useful and stable.
This is not an issue so much as a lightweight way of gathering information on who is using MetalLB. This is mostly to satisfy our curiosity, but might also help us decide how to evolve the project.
So, if you use MetalLB for something, please chime in here and tell us more!