Unable to use podman with Testcontainers for Go on M1 Mac #23821

Closed

vchandela opened this issue Aug 31, 2024 · 5 comments

Labels
kind/bug (categorizes issue or PR as related to a bug), macos (MacOS/OSX related), remote (problem is in podman-remote)

Comments

@vchandela commented Aug 31, 2024

Issue Description

Unable to use Podman with Testcontainers for Go on an M1 Mac.

We're creating an etcd container using Testcontainers:

// Imports and the EtcdHolder type below are reconstructed so the snippet is
// self-contained; the original report shows only the function.
import (
	"context"
	"fmt"

	"github.com/docker/go-connections/nat"
	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

type EtcdHolder struct {
	started   bool
	container testcontainers.Container
	address   string
}

func CreateEtcdContainer() (*EtcdHolder, error) {
	ctx := context.Background()
	req := testcontainers.ContainerRequest{
		Image:        "gcr.io/etcd-development/etcd:v3.5.10",
		WaitingFor:   wait.ForListeningPort("2379"),
		ExposedPorts: []string{"2379/tcp"},
		Env: map[string]string{
			"ETCD_LOG_LEVEL": "debug",
		},
		Cmd: []string{"etcd", "--advertise-client-urls", "http://0.0.0.0:2379", "--listen-client-urls", "http://0.0.0.0:2379",
			"--data-dir", "/tmp/tektite-test-etcd-data"},
	}
	// Explicitly select the Podman provider rather than the default Docker one.
	etcdContainer, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: req,
		ProviderType:     testcontainers.ProviderPodman,
	})
	if err != nil {
		return nil, err
	}
	if err := etcdContainer.Start(ctx); err != nil {
		return nil, err
	}
	host, err := etcdContainer.Host(ctx)
	if err != nil {
		return nil, err
	}
	np := nat.Port("2379/tcp")
	port, err := etcdContainer.MappedPort(ctx, np)
	if err != nil {
		return nil, err
	}
	return &EtcdHolder{
		started:   true,
		container: etcdContainer,
		address:   fmt.Sprintf("%s:%d", host, port.Int()),
	}, nil
}

podman version output:
Client:       Podman Engine
Version:      5.2.2
API Version:  5.2.2
Go Version:   go1.23.0
Git Commit:   fcee48106a12dd531702d729d17f40f6e152027f
Built:        Wed Aug 21 23:13:11 2024
OS/Arch:      darwin/amd64

Server:       Podman Engine
Version:      5.2.2
API Version:  5.2.2
Go Version:   go1.22.6
Built:        Wed Aug 21 05:30:00 2024
OS/Arch:      linux/amd64

Steps to reproduce the issue

  1. Uninstalled Docker from the M1 Mac.
  2. Installed Podman using the following commands:
     • brew install podman
     • podman machine init
     • podman machine start
  3. podman system connection list
Name                         URI                                                         Identity                                                      Default     ReadWrite
podman-machine-default       ssh://core@127.0.0.1:62128/run/user/501/podman/podman.sock  /Users/vishal/.local/share/containers/podman/machine/machine  true        true
podman-machine-default-root  ssh://root@127.0.0.1:62128/run/podman/podman.sock           /Users/vishal/.local/share/containers/podman/machine/machine  false       true
  4. I tried the following links independently, but none of them seem to work:
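
For reference, one fix that often comes up in these threads is pointing DOCKER_HOST at the machine's socket. A sketch, not confirmed to help here; the inspect format string assumes Podman 5.x field names and the default machine name:

export DOCKER_HOST="unix://$(podman machine inspect --format '{{.ConnectionInfo.PodmanSocket.Path}}' podman-machine-default)"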

Describe the results you received


**********************************************************************************************
Ryuk has been disabled for the current execution. This can cause unexpected behavior in your environment.
More on this: https://golang.testcontainers.org/features/garbage_collector/
**********************************************************************************************
2024/08/31 10:56:49 github.com/testcontainers/testcontainers-go - Connected to docker: 
  Server Version: 5.2.2
  API Version: 1.41
  Operating System: fedora
  Total Memory: 1957 MB
  Resolved Docker Host: unix:///tmp/podman.sock
  Resolved Docker Socket Path: /tmp/podman.sock
  Test SessionID: e23b090587bf3ddbebb07cd7c493c84d4fc307584a3d03735efaaf15c024b6a0
  Test ProcessID: 9c298245-6117-44b0-ac93-b88f3914d640
  
2024/08/31 10:57:16 🐳 Starting container: 704f86df38e8
2024/08/31 10:57:16 ✅ Container started: 704f86df38e8
2024/08/31 10:57:16 🚧 Waiting for container id 704f86df38e8 image: gcr.io/etcd-development/etcd:v3.5.10. Waiting for: &{Port:2379 timeout:<nil> PollInterval:100ms}
2024/08/31 10:58:16 container logs (Get "http://%2Ftmp%2Fpodman.sock/v1.41/exec/e1102738471b8ce6e37136e887ff1e4d2f3b32420d21f2b4708a87aa13bd3bb8/json": context deadline exceeded, host port waiting failed):
{"level":"info","ts":"2024-08-31T05:27:16.710584Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_LOG_LEVEL","variable-value":"debug"}
{"level":"warn","ts":"2024-08-31T05:27:16.710953Z","caller":"embed/config.go:676","msg":"Running http and grpc server on single port. This is not recommended for production."}
{"level":"info","ts":"2024-08-31T05:27:16.711033Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls","http://0.0.0.0:2379","--listen-client-urls","http://0.0.0.0:2379","--data-dir","/tmp/tektite-test-etcd-data"]}
{"level":"warn","ts":"2024-08-31T05:27:16.711113Z","caller":"embed/config.go:676","msg":"Running http and grpc server on single port. This is not recommended for production."}
{"level":"info","ts":"2024-08-31T05:27:16.711229Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["http://localhost:2380"]}
{"level":"info","ts":"2024-08-31T05:27:16.714102Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["http://0.0.0.0:2379"]}
{"level":"info","ts":"2024-08-31T05:27:16.714297Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.10","git-sha":"0223ca52b","go-version":"go1.20.10","go-os":"linux","go-arch":"amd64","max-cpu-set":6,"max-cpu-available":6,"member-initialized":false,"name":"default","data-dir":"/tmp/tektite-test-etcd-data","wal-dir":"","wal-dir-dedicated":"","member-dir":"/tmp/tektite-test-etcd-data/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":100000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://0.0.0.0:2379"],"listen-client-urls":["http://0.0.0.0:2379"],"listen-metrics-urls":[],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"default=http://localhost:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"info","ts":"2024-08-31T05:27:16.717318Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/tmp/tektite-test-etcd-data/member/snap/db","took":"2.454164ms"}
{"level":"info","ts":"2024-08-31T05:27:16.71994Z","caller":"etcdserver/raft.go:495","msg":"starting local member","local-member-id":"8e9e05c52164694d","cluster-id":"cdf818194e3a8c32"}
{"level":"info","ts":"2024-08-31T05:27:16.720052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d switched to configuration voters=()"}
{"level":"info","ts":"2024-08-31T05:27:16.720086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became follower at term 0"}
{"level":"info","ts":"2024-08-31T05:27:16.720094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
{"level":"info","ts":"2024-08-31T05:27:16.720111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became follower at term 1"}
{"level":"info","ts":"2024-08-31T05:27:16.720138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d switched to configuration voters=(10276657743932975437)"}
{"level":"warn","ts":"2024-08-31T05:27:16.723384Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2024-08-31T05:27:16.726085Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1}
{"level":"debug","ts":"2024-08-31T05:27:16.726485Z","caller":"etcdserver/server.go:619","msg":"restore consistentIndex","index":0}
{"level":"info","ts":"2024-08-31T05:27:16.727774Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2024-08-31T05:27:16.728961Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"8e9e05c52164694d","local-server-version":"3.5.10","cluster-version":"to_be_decided"}
{"level":"info","ts":"2024-08-31T05:27:16.729149Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Server #1] Server created"}
{"level":"info","ts":"2024-08-31T05:27:16.72919Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"8e9e05c52164694d","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2024-08-31T05:27:16.729263Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/tmp/tektite-test-etcd-data/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
{"level":"info","ts":"2024-08-31T05:27:16.729519Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/tmp/tektite-test-etcd-data/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2024-08-31T05:27:16.729542Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/tmp/tektite-test-etcd-data/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"debug","ts":"2024-08-31T05:27:16.730216Z","caller":"etcdserver/server.go:2142","msg":"Applying entries","num-entries":1}
{"level":"debug","ts":"2024-08-31T05:27:16.730378Z","caller":"etcdserver/server.go:2145","msg":"Applying entry","index":1,"term":1,"type":"EntryConfChange"}
{"level":"info","ts":"2024-08-31T05:27:16.730697Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d switched to configuration voters=(10276657743932975437)"}
{"level":"info","ts":"2024-08-31T05:27:16.730997Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cdf818194e3a8c32","local-member-id":"8e9e05c52164694d","added-peer-id":"8e9e05c52164694d","added-peer-peer-urls":["http://localhost:2380"]}
{"level":"info","ts":"2024-08-31T05:27:16.732725Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8e9e05c52164694d","initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://0.0.0.0:2379"],"listen-client-urls":["http://0.0.0.0:2379"],"listen-metrics-urls":[]}
{"level":"info","ts":"2024-08-31T05:27:16.732913Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"127.0.0.1:2380"}
{"level":"info","ts":"2024-08-31T05:27:16.733014Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"127.0.0.1:2380"}
{"level":"info","ts":"2024-08-31T05:27:16.733102Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Server #1 ListenSocket #2] ListenSocket created"}
{"level":"info","ts":"2024-08-31T05:27:17.22082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d is starting a new election at term 1"}
{"level":"info","ts":"2024-08-31T05:27:17.220874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became pre-candidate at term 1"}
{"level":"info","ts":"2024-08-31T05:27:17.220903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d received MsgPreVoteResp from 8e9e05c52164694d at term 1"}
{"level":"info","ts":"2024-08-31T05:27:17.220915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became candidate at term 2"}
{"level":"info","ts":"2024-08-31T05:27:17.22092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2"}
{"level":"info","ts":"2024-08-31T05:27:17.220927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became leader at term 2"}
{"level":"info","ts":"2024-08-31T05:27:17.220933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2"}
{"level":"debug","ts":"2024-08-31T05:27:17.221534Z","caller":"etcdserver/server.go:2142","msg":"Applying entries","num-entries":1}
{"level":"debug","ts":"2024-08-31T05:27:17.221621Z","caller":"etcdserver/server.go:2145","msg":"Applying entry","index":2,"term":2,"type":"EntryNormal"}
{"level":"debug","ts":"2024-08-31T05:27:17.221631Z","caller":"etcdserver/server.go:2204","msg":"apply entry normal","consistent-index":1,"entry-index":2,"should-applyV3":true}
{"level":"info","ts":"2024-08-31T05:27:17.221734Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"debug","ts":"2024-08-31T05:27:17.22191Z","caller":"etcdserver/server.go:2142","msg":"Applying entries","num-entries":1}
{"level":"debug","ts":"2024-08-31T05:27:17.221948Z","caller":"etcdserver/server.go:2145","msg":"Applying entry","index":3,"term":2,"type":"EntryNormal"}
{"level":"debug","ts":"2024-08-31T05:27:17.221957Z","caller":"etcdserver/server.go:2204","msg":"apply entry normal","consistent-index":2,"entry-index":3,"should-applyV3":true}
{"level":"debug","ts":"2024-08-31T05:27:17.221975Z","caller":"etcdserver/server.go:2227","msg":"applyEntryNormal","V2request":"ID:7587881093213843458 Method:\"PUT\" Path:\"/0/members/8e9e05c52164694d/attributes\" Val:\"{\\\"name\\\":\\\"default\\\",\\\"clientURLs\\\":[\\\"http://0.0.0.0:2379\\\"]}\" "}
{"level":"info","ts":"2024-08-31T05:27:17.222217Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8e9e05c52164694d","local-member-attributes":"{Name:default ClientURLs:[http://0.0.0.0:2379]}","request-path":"/0/members/8e9e05c52164694d/attributes","cluster-id":"cdf818194e3a8c32","publish-timeout":"7s"}
{"level":"debug","ts":"2024-08-31T05:27:17.222232Z","caller":"etcdserver/server.go:2142","msg":"Applying entries","num-entries":1}
{"level":"info","ts":"2024-08-31T05:27:17.222254Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"debug","ts":"2024-08-31T05:27:17.222261Z","caller":"etcdserver/server.go:2145","msg":"Applying entry","index":4,"term":2,"type":"EntryNormal"}
{"level":"debug","ts":"2024-08-31T05:27:17.222268Z","caller":"etcdserver/server.go:2204","msg":"apply entry normal","consistent-index":3,"entry-index":4,"should-applyV3":true}
{"level":"debug","ts":"2024-08-31T05:27:17.222274Z","caller":"etcdserver/server.go:2227","msg":"applyEntryNormal","V2request":"ID:7587881093213843460 Method:\"PUT\" Path:\"/0/version\" Val:\"3.5.0\" "}
{"level":"info","ts":"2024-08-31T05:27:17.222288Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cdf818194e3a8c32","local-member-id":"8e9e05c52164694d","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-31T05:27:17.222304Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #3] Channel created"}
{"level":"info","ts":"2024-08-31T05:27:17.222324Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #3] original dial target is: \"0.0.0.0:2379\""}
{"level":"info","ts":"2024-08-31T05:27:17.22233Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-31T05:27:17.222341Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-31T05:27:17.222434Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #3] dial target \"0.0.0.0:2379\" parse failed: parse \"0.0.0.0:2379\": first path segment in URL cannot contain colon"}
{"level":"info","ts":"2024-08-31T05:27:17.222431Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-08-31T05:27:17.222498Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #3] fallback to scheme \"passthrough\""}
{"level":"info","ts":"2024-08-31T05:27:17.222507Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-08-31T05:27:17.222525Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #3] parsed dial target is: {URL:{Scheme:passthrough Opaque: User: Host: Path:/0.0.0.0:2379 RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}"}
{"level":"info","ts":"2024-08-31T05:27:17.222533Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #3] Channel authority set to \"0.0.0.0:2379\""}
{"level":"info","ts":"2024-08-31T05:27:17.222633Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #3] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"0.0.0.0:2379\",\n      \"ServerName\": \"\",\n      \"Attributes\": null,\n      \"BalancerAttributes\": null,\n      \"Metadata\": null\n    }\n  ],\n  \"Endpoints\": [\n    {\n      \"Addresses\": [\n        {\n          \"Addr\": \"0.0.0.0:2379\",\n          \"ServerName\": \"\",\n          \"Attributes\": null,\n          \"BalancerAttributes\": null,\n          \"Metadata\": null\n        }\n      ],\n      \"Attributes\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)"}
{"level":"info","ts":"2024-08-31T05:27:17.222715Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #3] Channel switches to new LB policy \"pick_first\""}
{"level":"info","ts":"2024-08-31T05:27:17.222786Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [pick-first-lb 0xc00034aab0] Received new config {\n  \"shuffleAddressList\": false\n}, resolver state {\n  \"Addresses\": [\n    {\n      \"Addr\": \"0.0.0.0:2379\",\n      \"ServerName\": \"\",\n      \"Attributes\": null,\n      \"BalancerAttributes\": null,\n      \"Metadata\": null\n    }\n  ],\n  \"Endpoints\": [\n    {\n      \"Addresses\": [\n        {\n          \"Addr\": \"0.0.0.0:2379\",\n          \"ServerName\": \"\",\n          \"Attributes\": null,\n          \"BalancerAttributes\": null,\n          \"Metadata\": null\n        }\n      ],\n      \"Attributes\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n}"}
{"level":"info","ts":"2024-08-31T05:27:17.2229Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #3 SubChannel #4] Subchannel created"}
{"level":"info","ts":"2024-08-31T05:27:17.222913Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #3] Channel Connectivity change to CONNECTING"}
{"level":"info","ts":"2024-08-31T05:27:17.222964Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #3 SubChannel #4] Subchannel Connectivity change to CONNECTING"}
{"level":"info","ts":"2024-08-31T05:27:17.223014Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #3 SubChannel #4] Subchannel picks a new address \"0.0.0.0:2379\" to connect"}
{"level":"info","ts":"2024-08-31T05:27:17.223063Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Server #5] Server created"}
{"level":"info","ts":"2024-08-31T05:27:17.223179Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [pick-first-lb 0xc00034aab0] Received SubConn state update: 0xc00034ac00, {ConnectivityState:CONNECTING ConnectionError:<nil>}"}
{"level":"info","ts":"2024-08-31T05:27:17.223542Z","caller":"embed/serve.go:187","msg":"serving client traffic insecurely; this is strongly discouraged!","traffic":"grpc+http","address":"[::]:2379"}
{"level":"info","ts":"2024-08-31T05:27:17.223714Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Server #5 ListenSocket #7] ListenSocket created"}
{"level":"info","ts":"2024-08-31T05:27:17.223985Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #3 SubChannel #4] Subchannel Connectivity change to READY"}
{"level":"info","ts":"2024-08-31T05:27:17.224011Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [pick-first-lb 0xc00034aab0] Received SubConn state update: 0xc00034ac00, {ConnectivityState:READY ConnectionError:<nil>}"}
{"level":"info","ts":"2024-08-31T05:27:17.224018Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #3] Channel Connectivity change to READY"}

panic: Get "http://%2Ftmp%2Fpodman.sock/v1.41/exec/e1102738471b8ce6e37136e887ff1e4d2f3b32420d21f2b4708a87aa13bd3bb8/json": context deadline exceeded, host port waiting failed

goroutine 1 [running]:
github.com/spirit-labs/tektite/clustmgr/clitest.TestMain(0xc0002e9c20)
	/Users/vishal/personal/tektite/clustmgr/clitest/client_test.go:20 +0x1bc
main.main()
	_testmain.go:71 +0xa8

Describe the results you expected

I expected to pull the etcd image using podman.

I'm able to pull this image locally, so I'm not sure why it fails from the code:

podman pull gcr.io/etcd-development/etcd:v3.5.10
Trying to pull gcr.io/etcd-development/etcd:v3.5.10...
Getting image source signatures
Copying blob sha256:1e3d9b7d145208fa8fa3ee1c9612d0adaac7255f1bbc9ddea7e461e0b317805c
Copying blob sha256:b02a7525f878e61fc1ef8a7405a2cc17f866e8de222c1c98fd6681aff6e509db
Copying blob sha256:07a64a71e01156f8f99039bc246149925c6d1480d3957de78510bbec6ec68f7a
Copying blob sha256:fe5ca62666f04366c8e7f605aa82997d71320183e99962fa76b3209fdfbb8b58
Copying blob sha256:fcb6f6d2c9986d9cd6a2ea3cc2936e5fc613e09f1af9042329011e43057f3265
Copying blob sha256:e8c73c638ae9ec5ad70c49df7e484040d889cca6b4a9af056579c3d058ea93f0
Copying blob sha256:4aa0ea1413d37a58615488592a0b827ea4b2e48fa5a77cf707d0e35f025e613f
Copying blob sha256:7c881f9ab25e0d86562a123b5fb56aebf8aa0ddd7d48ef602faf8d1e7cf43d8c
Copying blob sha256:5627a970d25e752d971a501ec7e35d0d6fdcd4a3ce9e958715a686853024794a
Copying blob sha256:2b006b718894b6eeb354b4024c79eab6dbd1e2338b9037a6c44c44629769baa5
Copying blob sha256:8a5e72cbf00e739d8a7a5899049a0cc862ff2e3410b758a309e9dd78ddeab68f
Copying blob sha256:7f9e59aecf0b022ba8cabe2495afaaed776c59ab972c628e96f298f78046603a
Copying blob sha256:eb7b8ea1c6db88b824aafb5a364bc672a1a07b3bafa676447a4b05137024b884
Copying blob sha256:e0c00b84eb0c2bd3e3ecb7930c2b38a628917ec47589f1ef813aa69fe370cb8f
Copying config sha256:42d0b9aa7106dcc026db0ff56fe08c188c43d51d7887eb8b90a5b187835a9aa8
Writing manifest to image destination
42d0b9aa7106dcc026db0ff56fe08c188c43d51d7887eb8b90a5b187835a9aa8

podman info output

host:
  arch: amd64
  buildahVersion: 1.37.2
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.10-1.fc40.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.10, commit: '
  cpuUtilization:
    idlePercent: 99.55
    systemPercent: 0.24
    userPercent: 0.21
  cpus: 6
  databaseBackend: sqlite
  distribution:
    distribution: fedora
    variant: coreos
    version: "40"
  eventLogger: journald
  freeLocks: 2048
  hostname: localhost.localdomain
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
    uidmap:
    - container_id: 0
      host_id: 501
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
  kernel: 6.9.12-200.fc40.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 1638600704
  memTotal: 2052587520
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.12.1-1.20240819115418474394.main.6.gc2cd0be.fc40.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.13.0-dev
    package: netavark-1.12.1-1.20240819170533312370.main.26.g4358fd3.fc40.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.13.0-dev
  ociRuntime:
    name: crun
    package: crun-1.16-1.20240813143753154884.main.16.g26c7687.fc40.x86_64
    path: /usr/bin/crun
    version: |-
      crun version UNKNOWN
      commit: 1c1550ad8b233275d6ef04d60003b3c59bf42d71
      rundir: /run/user/501/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20240726.g57a21d2-1.fc40.x86_64
    version: |
      pasta 0^20240726.g57a21d2-1.fc40.x86_64
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/501/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.2-2.fc40.x86_64
    version: |-
      slirp4netns version 1.2.2
      commit: 0ee2d87523e906518d34a6b423271e4826f71faf
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.5
  swapFree: 0
  swapTotal: 0
  uptime: 0h 15m 56.00s
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /var/home/core/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/core/.local/share/containers/storage
  graphRootAllocated: 106769133568
  graphRootUsed: 4243976192
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 0
  runRoot: /run/user/501/containers
  transientStore: false
  volumePath: /var/home/core/.local/share/containers/storage/volumes
version:
  APIVersion: 5.2.2
  Built: 1724198400
  BuiltTime: Wed Aug 21 05:30:00 2024
  GitCommit: ""
  GoVersion: go1.22.6
  Os: linux
  OsArch: linux/amd64
  Version: 5.2.2


### Podman in a container

No

### Privileged Or Rootless

Rootless

### Upstream Latest Release

Yes

### Additional environment details

### Additional information

@Luap99 (Member) commented Sep 2, 2024

So what exactly is the podman issue here? "host port waiting failed" is not an error message in our code base, AFAICT.

Did you check whether your created container is running and has the port exposed in the listing?

@vchandela (Author) commented Sep 9, 2024

So what exactly is the podman issue here? "host port waiting failed" is not an error message in our code base, AFAICT.

Did you check whether your created container is running and has the port exposed in the listing?

@Luap99

  • I'm trying to run the Redis quickstart test container on my M1 Mac: https://golang.testcontainers.org/quickstart/

    • It works completely fine with Docker.
    • A container is created for testcontainers/ryuk and the Redis image is pulled.
  • With Podman, it throws this error:

/Users/vishal/personal/random/temp/cmd/main_test.go:25: Could not start redis: create container: started hook: wait until ready: unexpected container status "stopped": could not start container: creating reaper failed
  • I also tried setting ProviderType: testcontainers.ProviderPodman, but the issue persisted.
  • By "reaper" they mean testcontainers/ryuk. The image is pulled correctly, but the container is not created properly.
  • If I disable the reaper using os.Setenv("TESTCONTAINERS_RYUK_DISABLED", "true"), the test works fine. However, this is a band-aid and not a permanent fix (sketch below).
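
A sketch of that band-aid, wired through TestMain so the variable is set before testcontainers-go creates its first container (the package name and layout are illustrative, not from the actual repo):

package cmd_test

import (
	"os"
	"testing"
)

// TestMain runs before any test in this package, so the Ryuk kill switch
// is in place before the first container request.
func TestMain(m *testing.M) {
	os.Setenv("TESTCONTAINERS_RYUK_DISABLED", "true") // band-aid, not a fix
	os.Exit(m.Run())
}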

The Testcontainers docs at https://golang.testcontainers.org/system_requirements/using_podman/#podman-socket-activation mention that Podman's socket needs to be activated, but the steps are Linux-only; there are no similar steps for Mac. This could also be a cause.
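
For comparison, the Linux steps on that page amount to enabling the user-level socket and pointing DOCKER_HOST at it; sketched here, adjust to your distro:

systemctl --user enable --now podman.socket
export DOCKER_HOST=unix://${XDG_RUNTIME_DIR}/podman/podman.sock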

@purplefox commented Sep 9, 2024

Note: a Google search shows this seems to be a common issue, e.g.

https://engineering.zalando.com/posts/2023/12/using-modules-for-testcontainers-with-golang.html

One way to "fix" this is to deactivate Ryuk with the environment variable TESTCONTAINERS_RYUK_DISABLED=true.

Another way is to set the Podman machine rootful and add:

export TESTCONTAINERS_RYUK_CONTAINER_PRIVILEGED=true; # needed to run Reaper (alternative disable it TESTCONTAINERS_RYUK_DISABLED=true)
export TESTCONTAINERS_DOCKER_SOCKET_OVERRIDE=/var/run/docker.sock; # needed to apply the bind with statfs
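
Making the machine rootful is done with podman machine set; a sketch, assuming the default machine:

podman machine stop
podman machine set --rootful
podman machine start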
In our internal library we took the approach of disabling it by default as developers had issues running it locally.

Seems like podman and ryuk don't get along together?

@vchandela (Author)

Seems like podman and ryuk don't get along together?

@purplefox
Yeah. I'm also trying to reach out to the Testcontainers folks on Slack: https://testcontainers.slack.com/archives/CBWBNR76G/p1725862184199349?thread_ts=1725862184.199349&cid=CBWBNR76G

I don't want to disable Ryuk for Podman when it works with Docker.

@vchandela (Author)

Sharing updates here:

  • I think Podman does not have a bridge network, which is the one Ryuk uses under the hood, so we'll have to disable Ryuk. However, testcontainers-go provides a Terminate() method that can be used instead of Stop() to remove the dependency on Ryuk (see the sketch at the end of this comment). Also, we run tests using GitHub Actions runners, which are not persistent, so Ryuk isn't compulsory.

  • This issue was not with Podman but rather with Docker.

    This issue can be closed now. Thanks.
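
A sketch of the Terminate()-based cleanup mentioned above, using the shape of the Redis quickstart; the test body and image tag are illustrative:

package cmd_test

import (
	"context"
	"testing"

	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

func TestRedis(t *testing.T) {
	ctx := context.Background()
	redisC, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: testcontainers.ContainerRequest{
			Image:        "redis:7",
			ExposedPorts: []string{"6379/tcp"},
			WaitingFor:   wait.ForListeningPort("6379/tcp"),
		},
		Started: true,
	})
	if err != nil {
		t.Fatalf("could not start redis: %v", err)
	}
	// Terminate (rather than Stop) removes the container, so cleanup does
	// not depend on the Ryuk reaper being present.
	t.Cleanup(func() {
		if err := redisC.Terminate(ctx); err != nil {
			t.Logf("terminate: %v", err)
		}
	})
	// ... use redisC ...
}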
