
Do you use MetalLB? Tell us! #5

Open
danderson opened this issue Nov 28, 2017 · 242 comments

@danderson
Contributor

This is not an issue so much as a lightweight way of gathering information on who is using MetalLB. This is mostly to satisfy our curiosity, but might also help us decide how to evolve the project.

So, if you use MetalLB for something, please chime in here and tell us more!

@rsanders

rsanders commented Dec 1, 2017

We're not using it yet, but our K8S clusters are 100% bare metal, running in our data centers and on customer premises. We're going to try this out in the new year.

Our deployment would in most cases have this peer directly with our routers. We're a mostly Cisco shop. Which BGP implementations have you tested with?

@danderson
Contributor Author

So far, I've only tested with OSS BGP implementations, mainly BIRD. I used to have some Juniper and Cisco hardware that could speak BGP, but I got rid of them :(.

With that said, MetalLB currently speaks a very conventional, "has existed forever" dialect of BGP, so it should work fine. I foresee two possible issues interacting with Cisco's implementation:

  • Some old Cisco devices use an unusual encoding for capability advertisements in the OPEN message. AFAICT, they stopped doing this in the late 1990s, so it shouldn't be a problem. If it is a problem, it's a trivial patch to fix it.
  • Cisco may refuse to speak to a BGPv4 speaker that uses the original route encoding for IPv4 (which the RFCs say is still 100% correct), and instead requires all route advertisements to use the MP-BGP encoding. I'm in the middle of making MetalLB speak MP-BGP for dual-stack support anyway, so hopefully by the time you test MetalLB this should not be a problem.

If you hit any interop problems, please file bugs! Ideally with a pcap that captures the initial exchange of BGP OPEN messages and a few UPDATE messages, so that I can add regression tests.
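For anyone setting up an interop test against BIRD (the implementation I've mainly tested with), a minimal BIRD 1.x peer stanza looks roughly like this — addresses and AS numbers are placeholders, adjust them to your network:

```
# bird.conf — sketch of a BIRD 1.x BGP session with a MetalLB speaker
# (neighbor address and both ASNs below are hypothetical examples)
protocol bgp metallb_node1 {
  local as 64512;
  neighbor 192.168.1.10 as 64513;  # node IP / ASN of the MetalLB speaker
  import all;                      # accept service IP advertisements from MetalLB
  export none;                     # MetalLB doesn't need routes pushed back to it
}
```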

@xnaveira
Contributor

xnaveira commented Dec 4, 2017

A colleague brought my attention to this project recently. We're running bare metal k8s on bgp enabled hosts and pods so this looks very interesting. Unfortunately we are using 32-bit AS numbers in our setup so we can't test it right away but it looks promising!

@danderson
Contributor Author

@xnaveira MetalLB supports 32-bit AS numbers! It's the one BGP extension that's implemented in v0.1.0. v0.2.0 will also support MP-BGP, so if you use Quagga or Cisco devices, you'll want to wait for 0.2.0.
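Concretely, a 32-bit (4-byte) ASN just goes straight into the config — a sketch in the shape of the early ConfigMap-based format (field names and addresses here are illustrative; the exact schema may differ between releases, so check the docs for your version):

```yaml
# Sketch of a MetalLB config using 4-byte AS numbers (RFC 6793 range)
peers:
- peer-address: 10.0.0.1        # example router address
  peer-asn: 4200000001          # 32-bit ASN on the router side
  my-asn: 4200000002            # 32-bit ASN MetalLB announces as
address-pools:
- name: default
  protocol: bgp
  cidr:
  - 192.168.10.0/24             # example service IP pool
```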

@xnaveira
Contributor

xnaveira commented Dec 5, 2017

That is awesome @danderson, I'll give it a try ASAP then.

@tangjoe

tangjoe commented Dec 8, 2017

I saw a post on reddit.com about MetalLB, and it's what I had been looking for for a long time for my local K8S on Mac. It took about a day of following the tutorial and experimenting, but I finally have it up and running. Now I can deploy services in k8s with LoadBalancer without "pending". Awesome.

@halfa

halfa commented Dec 8, 2017

We have successfully deployed a patched version of MetalLB on a staging K8S cluster at @naitways, peered with JunOS hardware appliances. Awesome! :)

@danderson
Contributor Author

\o/ What patching did you need to interface with JunOS? I'm going to spin up a virtual JunOS later today and do interop testing, but it sounds like you already found the issues?

@jacobsmith928

@danderson happy to support some bare metal and diverse switching (Juniper) + local and global BGP testing at Packet.net. Maybe take advantage of https://github.com/cncf/cluster ?

@zcourts

zcourts commented Jan 14, 2018

We've just deployed MetalLB to our new test env on a 1.9 K8s cluster and are eagerly watching #7 and #83. We've got a use case where any and all pods can be publicly routable, so we're looking at having an IPv6-only cluster with each pod/svc getting an IPv6 address.

We need non-HTTP svcs, so we're planning to use IPs to identify pods when SNI/TLS isn't available (we're launching with thousands of these this summer and expect that to grow to 10s of thousands in 12-18 months). We're aligning some things with the K8s 1.10 release for beta IPv6 support and will probably be running alpha versions of stuff in test leading up to launch.
FWIW we use OVH, and each server comes with a /64 IPv6 block, so when this is in, being able to draw on that pool from each K8s compute node will be ideal. As it stands we have no Go expertise, but if we can contribute in any other way do let us know. We're comfy with C++/Java/Scala, and I'll probably be learning Go this year since we're committed to K8s.

@hameno

hameno commented Jan 31, 2018

I just tried this in ARP mode in my k8s-cluster @ home. Works so far, thanks for this great project 👍
I may also deploy this at work in the future for an internal development cluster.

@aphistic

Just jumping in as well. I'm using ARP mode in my home lab with a cluster I set up following Kubernetes the Hard Way to learn Kubernetes. I'm using Weave Net to handle my cluster networking and running on XenServer VMs. I haven't gotten metallb running correctly yet but I'm working on it. :)

@pfcurtis

We are using MetalLB on a 30-node k8s cluster (internally) in production. So far, no issues.

@pehlert

pehlert commented Feb 22, 2018

Just wanted to say thank you for this project! I spent hours trying to figure out several issues with keepalived-vip before stumbling across MetalLB. Installed it in 5 minutes and it just works (and is a more elegant approach, too). Time will tell how stable it is, but no issues whatsoever so far!

@ewoutp

ewoutp commented Mar 11, 2018

Running it on both a 3 OrangePI PC2 cluster and a 3 VM CoreOS cluster with k8s 1.9.
Works like a charm!
Love the ease of installation.

@ChristofferNicklassonLenswayGroup

Hi, we will use this when we launch our ongoing k8s project.
I'm also using it on my cluster at home.
Just love it :)

@ebauman

ebauman commented Mar 27, 2018

This project is lovely.

I'm using it on my home cluster, and plan to use it on my cluster at work.
We have an old-school deployment at my employer which doesn't afford me the flexibility to set up such niceties as BGP (or really any routing protocol that isn't EIGRP). Almost everything we do is layer 2 separated, so I felt left in the dust by people who got to just specify type: LoadBalancer and off they went.

This project makes k8s fit into my org.

@joestafford

<3
Using this LB implementation in my home lab to learn k8s and look forward to using it in a project at work!

@szabopaul

Using this project on my homelab to facilitate Ansible configuration of single container pods is a breeze. Amazing work!

@mpackard

We are using this to try out Kubernetes on our own hardware. Thanks for creating it.

@fsdaniel

fsdaniel commented Apr 18, 2018

Running it against Juniper SRX and MX hardware.

Thanks for making it!

@uablrek
Contributor

uablrek commented Apr 19, 2018

I am evaluating MetalLB for use on "bare VMs" rather than bare metal. There is, however, no difference; the problems are the same.
For testing I "install" MetalLB by starting the 2 binaries directly on the node VMs, i.e. not in pods. Thanks to the simple and elegant architecture, this works perfectly fine.

I have learned a lot about Kubernetes by studying metallb. Thanks!

@schmitch

schmitch commented Apr 25, 2018

My company started trying out MetalLB.
We first tried to use it via layer 2 (we use calicoctl and had configured a BGP peer that was not in use); however, only one client could connect to the service, and we had no idea why. Maybe ARP packets were filtered somewhere.

I then removed the BGP peering from Calico and used it with MetalLB, which finally worked.
It's really cool to have MetalLB.

However, it's sad that MetalLB does not have some kind of LVS-style way of attaching IPs in layer 2 mode that wouldn't use ARP requests; that would make it useful and less error-prone in more networks.

@michaelfig

I'm getting my feet wet in Kubernetes at my company. Metallb has proven really useful for repeatable configuration of Ingresses (using nginx-ingress) on available layer2 IPs. Thanks for this useful software!

@FireDrunk
Contributor

I'm running it at home, in my testing kubernetes cluster (which is running weave). It's connected to my pfSense router via BGP. This setup would be perfect in a datacenter setup with a specific IP space for DMZ.
If anyone wants the configs, I'd be happy to share.

Thanks for an awesome piece of software!

@nrbrt

nrbrt commented May 7, 2018

I am running it in my home-lab and switched to Metallb after using Keepalived-vip for a while and not finding it stable enough. Keepalived-vip would work fine for a while and then lock up, forcing me to manually delete the "master" pod, after which things would start working again. I hope my worries are over now, using Metallb.

@LeadingMoominExpert

I've been developing a bare metal k8s cluster at our office, mostly for educational purposes as well as for running some small internal apps. MetalLB in layer 2 mode was a great solution that let me deploy an ingress controller, so I can use ingresses instead of having to use NodePorts. Source repo.

@otischan

otischan commented Jan 12, 2023

So I'm the first MetalLB user to leave a comment here in 2023.

@elebertus

Running MetalLB on k3s for DNS, web proxy, and other bits. I just delegated a /26 and that was pretty much most of the work. Honestly shocked how easy it was to get working. I fully expected fussiness when I saw BGP was an option 🐉 but a basic L2 advert just works.

I had so few problems I didn't even look at any source, haha. That is extremely rare for me, and usually not because of bugs but because the user experience is obscure or fiddly. So yeah, out of the box, after following the terse but complete docs: no issues at all.

@gregsulek

We are looking at a specific use case of BGP advertisements from K8s. We already have a solution for this in a monolithic architecture, and we are looking for ways to implement it in a K8s cluster.

Our solution is an Application Firewall/IDS that is deployed in transport networks. We are interested in filtering traffic that runs on a specific port (let's say UDP 1234).

In our monolithic deployment we use a BGP FlowSpec plugin for that. To make a long story short, whenever our customer suspects that traffic on port 1234 from/to a specific IP network is subject to external attacks, we are able to start advertising the following FlowSpec rules to his network:

Condition:
  src IP: X.X.X.X/Y port 1234, OR
  dst IP: X.X.X.X/Y port 1234
Action:
  redirect to our FW IP (same as the src IP of our router), or leak to another VRF.

This way we are able to filter specific traffic transparently.

We are thinking of going cloud native with our solution; however, FlowSpec in k8s seems to be our greatest concern.
MetalLB does not support it as a matter of fact, but FRR does support FlowSpec. Are there any plans to include it in future releases?

Also, if anyone has any suggestions on how to solve our problem in k8s, I will be very happy to hear them!

@pchang388

I've been using it with pfSense FRR BGP and a K3s cluster on VMs. Really easy to set up and use, but I'm running into an issue that is making BGP mode quite the pain.

@Fumeaux

Fumeaux commented Apr 10, 2023

I'm running metallb on a ROCK64 k3s cluster. So far, it has worked flawlessly.

@ngeorger

I'm using MetalLB for my home lab cluster, consisting of discarded machines that family and friends "donated".
Now there is a MacBook Air, a Compaq, and an RPi4, working amazingly with Ubuntu Server and k3s.
Thank you very much from Puerto Montt, Chile.

@sgurnick

My team uses MetalLB on Rancher-provisioned RKE1 clusters running within our on-premises VMware environment. Our use case for MetalLB is to be able to configure our nginx ingress controller with a Service of type LoadBalancer. This lets us funnel all of our application traffic through a single load balancer, which is most helpful because our RKE cluster nodes are on a network not exposed to the Internet. We were able to configure NAT rules in our firewall to direct traffic to the MetalLB load balancer IP address. So far it is working as expected.

@p-kimberley

Very thankful for this project.

Running MetalLB on Rancher Kubernetes Engine 2 (RKE2) in an on-premises virtualised environment. Calico CNI, BGP route advertisement with core switches.

@k4nzdroid

Running MetalLB on Kubernetes in production and homelab, with BGP mode.

@leffen

leffen commented Nov 5, 2023

Running Metallb in various homelab environments. Quite happy about it, thanks :-)

@willagc

willagc commented Nov 20, 2023

I am running Metallb on k3s deployed on DO droplets. L2 mode works quite well!

@djbrettbsu

Not using it yet... but looking at using commodity (cheap) old desktop hardware for a bare metal k8s home lab to basically build personal private hosting.

@abctaylor

sending warm fuzzy feelings from a warm and fuzzy server rack in my flat. thanks for this project.

@pr07pr07

Running MetalLB on Kubernetes in production with L2Advertisement and IPAddressPool.
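For anyone curious, the two resources mentioned above pair up roughly like this — a minimal sketch, with the names and address range being illustrative examples (check the MetalLB docs for your version's exact schema):

```yaml
# Sketch: a pool of service IPs plus an L2 advertisement for it
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: production-pool        # example name
  namespace: metallb-system
spec:
  addresses:
  - 192.168.10.200-192.168.10.250   # example range
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: production-l2          # example name
  namespace: metallb-system
spec:
  ipAddressPools:
  - production-pool            # announce only that pool over L2/ARP
```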

@ranjitrajan

Running MetalLB on k8s in multiple prod and pre-prod environments for various Automotive Tier 1 suppliers, with L2Advertisement and IPAddressPool, for more than 2 years.

@pkkudo

pkkudo commented Feb 28, 2024

MetalLB is great! It was one of the very first things I looked for and implemented when I started playing with my homelab Kubernetes cluster. A couple of years back I was using it to directly expose my services; these days I use MetalLB together with a gateway (NGINX Gateway Fabric).

@ChristianCiach

ChristianCiach commented Mar 26, 2024

Hi! We are https://www.emsysgrid.com and I just finished evaluating MetalLB. We are planning to use it for many of our K3s clusters (both internal and on-premise) where external load-balancers are not available. Many of our clusters are "on the edge" and (mostly) air-gapped.

I previously evaluated https://kube-vip.io for this purpose, but unfortunately I wasn't able to make it work reliably. I don't mean to discredit kube-vip at all; I am sure it works great for many people! But MetalLB handled the simulated failures I tested it with very gracefully, which impressed me a lot!

We are only using MetalLB in L2-mode. We deploy it with Kustomize and some Kustomize-patches to add --lb-class and some node tolerations.

The only thing that annoyed me a little bit is that MetalLB needs an additional open port between the nodes of the cluster for the memberlist component. Open ports are hard to come by on systems we don't manage ourselves, so this will need some communication with our customers. Kube-vip uses https://pkg.go.dev/k8s.io/client-go/tools/leaderelection and is able to avoid any additional port requirements this way. I understand that MetalLB does not elect a leader, but I still wonder if there could be a Kubernetes-native alternative to the current memberlist implementation.

Thanks to all the maintainers of this beautiful project!

@boulderbit

Using it on my microk8s and my Turing pi cluster.

@agluck91

Absolutely amazing product. I tried to implement kube-vip as my service LB before and it was a nightmare; I never did get it working properly.

Shout out to @timothystewart6 (Techno Tim) for getting me on board with it! And HUGE shout out to the MetalLB team, this is awesome!!! I am using it in my professional career, and my personal! Keep up the awesome work! Would def sponsor this project if it had a sponsor option.

Instead I will star and keep an eye on issues and contribute.

@joshuacox

joshuacox commented May 23, 2024

I've been using MetalLB for six years on many different projects. Anytime I have a cluster on-prem, or in some cloud that does not have its own LB offering, MetalLB is the first choice. I also build clusters entirely in RAM (storage too) daily, just to test things without wasting SSD writes, using a tool I wrote called kubash. During this entire time MetalLB has been one of the most stable and maintenance-free aspects of the entire k8s universe.

@sachajw

sachajw commented Jul 16, 2024

I love MetalLB!!!! I want to say thank you so much for this incredible creation. I am running it at home, where I am building an Internal Developer Platform on 3 Pi 4s and a Synology. I am writing a blog series to help others set this up too:
https://ortelius.io/blog/2024/04/05/how-to-bake-an-ortelius-pi-part-1-the-hardware/

@mario-madrid

Running MetalLB in production at our company for 1 year using L2 advertisement. No issues; I hope it will be many more years. Thank you for this amazing product.

@Nyralei

Nyralei commented Jul 28, 2024

Hello!
We are running MetalLB in production in our company for almost 3 years, using L2 and BGP advertisement.

@robkooper

I've been using MetalLB as part of my default Kubernetes stack, deployed in OpenStack, for 3+ years. It has performed amazingly.

@csawtelle

I had a great time setting up MetalLB. I have been using it for over two years on my server rack, and I recently added two more Turing Pis and a couple of Jetsons to my on-prem setup; it has been a breeze. Thank you for supporting this project!

@kooroshkdt2

I read these comments and started using MetalLB about 3 months ago in production.
I can't describe the assured feeling it gave us, and already 4 more companies have started using it :)
Don't tell anybody, but we already removed the F5 load balancer from our setup :) The WAF license was too expensive... so you can understand...
Thanks for a great project.

@davtur

davtur commented Dec 4, 2024

I've been using it in my bare metal OpenShift 4.17 cluster for about 3 months now and love it! I use it in conjunction with a MikroTik router running RouterOS 7.16.2. The only issue I have come across is a port conflict with the Node Feature Discovery operator when you want to install NVIDIA GPUs. This can be worked around by modifying the frr-k8s daemonset to use a port other than 8081 for health checks. I changed it to 8085 and all the pods came back up. It would be nice to have this port configurable in the MetalLB CR. Thanks for all of your efforts on this great project!!

@fedepaol
Member

fedepaol commented Dec 4, 2024

> The only issue I have come across is a port conflict with the node feature discovery operator when you want to install nvidia gpus. [...] It would be nice to have this port configurable in the metalLB CR.

fwiw, the problem is already fixed upstream: metallb/frr-k8s#224

@theblinkingusb

Use and love it - thanks, all, for building something very useful and stable.
