
Scope 0.11.1 doesn't track fastdp connections from weave 1.4 #782

Closed
2opremio opened this issue Dec 17, 2015 · 13 comments

@2opremio
Contributor

Weave 1.4 includes an intermediate bridge so that conntrack (and in turn Scope) can track connections made in Weave through fast data path (weaveworks/weave#1712).

However, the demo in the weave ECS guide doesn't show the connections between httpservers and dataproducers (which go over the weave network), only the connections between the internet and the httpservers:

[screenshot: scope view, 2015-12-17 15:54]
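
(For reference, the presence of the intermediate bridge can be checked on a host with something like the following, mirroring the check shown further down in this thread:)

$ ifconfig | grep vethwe-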

@2opremio
Contributor Author

Related: weaveworks/weave#1577

@awh
Contributor

awh commented Dec 17, 2015

I have repeated the original test of the conntrack fix with Scope 0.11.1 and Weave Net 1.4.0:

[screenshot: scope 0.11.1 showing short-lived connections]

This test was performed using two Ubuntu 15.04 VMs (3.19.0-23-generic) provisioned from weave/test/Vagrantfile and uses the reproduction script provided in weaveworks/weave#1577. Short-lived connections are visible as expected, and conntrack events + state are available on the host.
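
(A minimal sketch of that setup, assuming the standard Vagrant workflow; the VM name is a placeholder:)

$ cd weave/test
$ vagrant up
$ vagrant ssh <vm>   # then run the reproduction script from weaveworks/weave#1577 on each VM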

@tomwilkie
Contributor

Thanks @awh. Just to confirm this is in fact working, could you try with scope launch --probe.conntrack false?

@2opremio pls grab a report if the hosts are still about.
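
(i.e. relaunching along these lines — a minimal sketch; stopping scope first is an assumption about the usual workflow:)

$ scope stop
$ scope launch --probe.conntrack false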

@2opremio
Contributor Author

Here's the report: https://gist.github.com/2opremio/e76005d3f8c642f36f05

The machines are running a recent kernel:

$ uname -a
Linux ip-172-31-0-11 4.1.13-19.30.amzn1.x86_64 #1 SMP Fri Dec 11 03:42:10 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

And the intermediate bridges seem to be there:

$ ifconfig | grep vethwe-                                                                                                                                                                                                         
vethwe-bridge Link encap:Ethernet  HWaddr FA:D3:F1:17:3D:CD  
vethwe-datapath Link encap:Ethernet  HWaddr 2A:20:98:B2:5F:B6 

@awh
Contributor

awh commented Dec 17, 2015

could you try with scope launch --probe.conntrack false?

[screenshot: scope view, 2015-12-17 16:45]

@tomwilkie
Contributor

Well, that confirms that edge was indeed coming from conntrack.

@awh
Contributor

awh commented Dec 17, 2015

the intermediate bridges seem to be there

Does sudo conntrack -E -p tcp yield expected connection events?

@tomwilkie
Contributor

@fons your containers don't have weave IP addresses (or MAC addresses). Did you start scope before weave? Could you post the output of docker logs weavescope, please?

@2opremio
Contributor Author

Does sudo conntrack -E -p tcp yield expected connection events?

Yep, repeatedly curl'ing an httpserver container from my machine results in connections to the dataproducers, and this is reflected by conntrack:

$ weave status dns | grep httpserver
httpserver   10.32.0.3       d466a21b8789 72:9e:17:8b:87:2b
httpserver   10.40.0.2       7c3a973f73c0 a6:14:ab:cd:af:6f
httpserver   10.44.0.1       225bc820b4c0 ce:4d:a8:89:e2:a5
$ weave status dns | grep dataproducer
dataproducer 10.32.0.2       5186a1435fe5 72:9e:17:8b:87:2b
dataproducer 10.40.0.1       f0332ea4cb41 a6:14:ab:cd:af:6f
dataproducer 10.44.0.0       42840fd5d7b6 ce:4d:a8:89:e2:a5
$ sudo yum install conntrack
[...]
$ sudo conntrack -E -p tcp -d 10.32.0.2
    [NEW] tcp      6 120 SYN_SENT src=10.40.0.2 dst=10.32.0.2 sport=38739 dport=4540 [UNREPLIED] src=10.32.0.2 dst=10.40.0.2 sport=4540 dport=38739
 [UPDATE] tcp      6 60 SYN_RECV src=10.40.0.2 dst=10.32.0.2 sport=38739 dport=4540 src=10.32.0.2 dst=10.40.0.2 sport=4540 dport=38739
 [UPDATE] tcp      6 432000 ESTABLISHED src=10.40.0.2 dst=10.32.0.2 sport=38739 dport=4540 src=10.32.0.2 dst=10.40.0.2 sport=4540 dport=38739 [ASSURED]
 [UPDATE] tcp      6 120 FIN_WAIT src=10.40.0.2 dst=10.32.0.2 sport=38739 dport=4540 src=10.32.0.2 dst=10.40.0.2 sport=4540 dport=38739 [ASSURED]
 [UPDATE] tcp      6 60 CLOSE_WAIT src=10.40.0.2 dst=10.32.0.2 sport=38739 dport=4540 src=10.32.0.2 dst=10.40.0.2 sport=4540 dport=38739 [ASSURED]
 [UPDATE] tcp      6 30 LAST_ACK src=10.40.0.2 dst=10.32.0.2 sport=38739 dport=4540 src=10.32.0.2 dst=10.40.0.2 sport=4540 dport=38739 [ASSURED]
 [UPDATE] tcp      6 120 TIME_WAIT src=10.40.0.2 dst=10.32.0.2 sport=38739 dport=4540 src=10.32.0.2 dst=10.40.0.2 sport=4540 dport=38739 [ASSURED]
    [NEW] tcp      6 120 SYN_SENT src=10.40.0.2 dst=10.32.0.2 sport=38741 dport=4540 [UNREPLIED] src=10.32.0.2 dst=10.40.0.2 sport=4540 dport=38741
 [UPDATE] tcp      6 60 SYN_RECV src=10.40.0.2 dst=10.32.0.2 sport=38741 dport=4540 src=10.32.0.2 dst=10.40.0.2 sport=4540 dport=38741
 [UPDATE] tcp      6 432000 ESTABLISHED src=10.40.0.2 dst=10.32.0.2 sport=38741 dport=4540 src=10.32.0.2 dst=10.40.0.2 sport=4540 dport=38741 [ASSURED]
 [UPDATE] tcp      6 120 FIN_WAIT src=10.40.0.2 dst=10.32.0.2 sport=38741 dport=4540 src=10.32.0.2 dst=10.40.0.2 sport=4540 dport=38741 [ASSURED]
 [UPDATE] tcp      6 30 LAST_ACK src=10.40.0.2 dst=10.32.0.2 sport=38741 dport=4540 src=10.32.0.2 dst=10.40.0.2 sport=4540 dport=38741 [ASSURED]
 [UPDATE] tcp      6 120 TIME_WAIT src=10.40.0.2 dst=10.32.0.2 sport=38741 dport=4540 src=10.32.0.2 dst=10.40.0.2 sport=4540 dport=38741 [ASSURED]
[DESTROY] tcp      6 src=10.40.0.2 dst=10.32.0.2 sport=38534 dport=4540 src=10.32.0.2 dst=10.40.0.2 sport=4540 dport=38534 [ASSURED]
    [NEW] tcp      6 120 SYN_SENT src=10.40.0.2 dst=10.32.0.2 sport=38747 dport=4540 [UNREPLIED] src=10.32.0.2 dst=10.40.0.2 sport=4540 dport=38747
 [UPDATE] tcp      6 60 SYN_RECV src=10.40.0.2 dst=10.32.0.2 sport=38747 dport=4540 src=10.32.0.2 dst=10.40.0.2 sport=4540 dport=38747
 [UPDATE] tcp      6 432000 ESTABLISHED src=10.40.0.2 dst=10.32.0.2 sport=38747 dport=4540 src=10.32.0.2 dst=10.40.0.2 sport=4540 dport=38747 [ASSURED]
 [UPDATE] tcp      6 120 FIN_WAIT src=10.40.0.2 dst=10.32.0.2 sport=38747 dport=4540 src=10.32.0.2 dst=10.40.0.2 sport=4540 dport=38747 [ASSURED]
 [UPDATE] tcp      6 30 LAST_ACK src=10.40.0.2 dst=10.32.0.2 sport=38747 dport=4540 src=10.32.0.2 dst=10.40.0.2 sport=4540 dport=38747 [ASSURED]
 [UPDATE] tcp      6 120 TIME_WAIT src=10.40.0.2 dst=10.32.0.2 sport=38747 dport=4540 src=10.32.0.2 dst=10.40.0.2 sport=4540 dport=38747 [ASSURED]
[DESTROY] tcp      6 src=10.40.0.2 dst=10.32.0.2 sport=38535 dport=4540 src=10.32.0.2 dst=10.40.0.2 sport=4540 dport=38535 [ASSURED]
^Cconntrack v1.4.0 (conntrack-tools): 21 flow events have been shown.
[ec2-user@ip-172-31-0-11 ~]$ sudo conntrack -E -p tcp -d 10.40.0.1
    [NEW] tcp      6 120 SYN_SENT src=10.40.0.2 dst=10.40.0.1 sport=38185 dport=4540 [UNREPLIED] src=10.40.0.1 dst=10.40.0.2 sport=4540 dport=38185
 [UPDATE] tcp      6 60 SYN_RECV src=10.40.0.2 dst=10.40.0.1 sport=38185 dport=4540 src=10.40.0.1 dst=10.40.0.2 sport=4540 dport=38185
 [UPDATE] tcp      6 432000 ESTABLISHED src=10.40.0.2 dst=10.40.0.1 sport=38185 dport=4540 src=10.40.0.1 dst=10.40.0.2 sport=4540 dport=38185 [ASSURED]
 [UPDATE] tcp      6 120 FIN_WAIT src=10.40.0.2 dst=10.40.0.1 sport=38185 dport=4540 src=10.40.0.1 dst=10.40.0.2 sport=4540 dport=38185 [ASSURED]
 [UPDATE] tcp      6 60 CLOSE_WAIT src=10.40.0.2 dst=10.40.0.1 sport=38185 dport=4540 src=10.40.0.1 dst=10.40.0.2 sport=4540 dport=38185 [ASSURED]
 [UPDATE] tcp      6 30 LAST_ACK src=10.40.0.2 dst=10.40.0.1 sport=38185 dport=4540 src=10.40.0.1 dst=10.40.0.2 sport=4540 dport=38185 [ASSURED]
 [UPDATE] tcp      6 120 TIME_WAIT src=10.40.0.2 dst=10.40.0.1 sport=38185 dport=4540 src=10.40.0.1 dst=10.40.0.2 sport=4540 dport=38185 [ASSURED]
[DESTROY] tcp      6 src=10.40.0.2 dst=10.40.0.1 sport=37974 dport=4540 src=10.40.0.1 dst=10.40.0.2 sport=4540 dport=37974 [ASSURED]
[DESTROY] tcp      6 src=10.40.0.2 dst=10.40.0.1 sport=37976 dport=4540 src=10.40.0.1 dst=10.40.0.2 sport=4540 dport=37976 [ASSURED]
[DESTROY] tcp      6 src=10.40.0.2 dst=10.40.0.1 sport=37979 dport=4540 src=10.40.0.1 dst=10.40.0.2 sport=4540 dport=37979 [ASSURED]
[DESTROY] tcp      6 src=10.40.0.2 dst=10.40.0.1 sport=37980 dport=4540 src=10.40.0.1 dst=10.40.0.2 sport=4540 dport=37980 [ASSURED]
    [NEW] tcp      6 120 SYN_SENT src=10.40.0.2 dst=10.40.0.1 sport=38193 dport=4540 [UNREPLIED] src=10.40.0.1 dst=10.40.0.2 sport=4540 dport=38193
 [UPDATE] tcp      6 60 SYN_RECV src=10.40.0.2 dst=10.40.0.1 sport=38193 dport=4540 src=10.40.0.1 dst=10.40.0.2 sport=4540 dport=38193
 [UPDATE] tcp      6 432000 ESTABLISHED src=10.40.0.2 dst=10.40.0.1 sport=38193 dport=4540 src=10.40.0.1 dst=10.40.0.2 sport=4540 dport=38193 [ASSURED]
 [UPDATE] tcp      6 120 FIN_WAIT src=10.40.0.2 dst=10.40.0.1 sport=38193 dport=4540 src=10.40.0.1 dst=10.40.0.2 sport=4540 dport=38193 [ASSURED]
 [UPDATE] tcp      6 60 CLOSE_WAIT src=10.40.0.2 dst=10.40.0.1 sport=38193 dport=4540 src=10.40.0.1 dst=10.40.0.2 sport=4540 dport=38193 [ASSURED]
 [UPDATE] tcp      6 30 LAST_ACK src=10.40.0.2 dst=10.40.0.1 sport=38193 dport=4540 src=10.40.0.1 dst=10.40.0.2 sport=4540 dport=38193 [ASSURED]
 [UPDATE] tcp      6 120 TIME_WAIT src=10.40.0.2 dst=10.40.0.1 sport=38193 dport=4540 src=10.40.0.1 dst=10.40.0.2 sport=4540 dport=38193 [ASSURED]
^Cconntrack v1.4.0 (conntrack-tools): 18 flow events have been shown.
$ sudo conntrack -E -p tcp -d 10.44.0.0
    [NEW] tcp      6 120 SYN_SENT src=10.40.0.2 dst=10.44.0.0 sport=44790 dport=4540 [UNREPLIED] src=10.44.0.0 dst=10.40.0.2 sport=4540 dport=44790
 [UPDATE] tcp      6 60 SYN_RECV src=10.40.0.2 dst=10.44.0.0 sport=44790 dport=4540 src=10.44.0.0 dst=10.40.0.2 sport=4540 dport=44790
 [UPDATE] tcp      6 432000 ESTABLISHED src=10.40.0.2 dst=10.44.0.0 sport=44790 dport=4540 src=10.44.0.0 dst=10.40.0.2 sport=4540 dport=44790 [ASSURED]
 [UPDATE] tcp      6 120 FIN_WAIT src=10.40.0.2 dst=10.44.0.0 sport=44790 dport=4540 src=10.44.0.0 dst=10.40.0.2 sport=4540 dport=44790 [ASSURED]
 [UPDATE] tcp      6 60 CLOSE_WAIT src=10.40.0.2 dst=10.44.0.0 sport=44790 dport=4540 src=10.44.0.0 dst=10.40.0.2 sport=4540 dport=44790 [ASSURED]
 [UPDATE] tcp      6 30 LAST_ACK src=10.40.0.2 dst=10.44.0.0 sport=44790 dport=4540 src=10.44.0.0 dst=10.40.0.2 sport=4540 dport=44790 [ASSURED]
 [UPDATE] tcp      6 120 TIME_WAIT src=10.40.0.2 dst=10.44.0.0 sport=44790 dport=4540 src=10.44.0.0 dst=10.40.0.2 sport=4540 dport=44790 [ASSURED]
[DESTROY] tcp      6 src=10.40.0.2 dst=10.44.0.0 sport=44580 dport=4540 src=10.44.0.0 dst=10.40.0.2 sport=4540 dport=44580 [ASSURED]
    [NEW] tcp      6 120 SYN_SENT src=10.40.0.2 dst=10.44.0.0 sport=44792 dport=4540 [UNREPLIED] src=10.44.0.0 dst=10.40.0.2 sport=4540 dport=44792
 [UPDATE] tcp      6 60 SYN_RECV src=10.40.0.2 dst=10.44.0.0 sport=44792 dport=4540 src=10.44.0.0 dst=10.40.0.2 sport=4540 dport=44792
 [UPDATE] tcp      6 432000 ESTABLISHED src=10.40.0.2 dst=10.44.0.0 sport=44792 dport=4540 src=10.44.0.0 dst=10.40.0.2 sport=4540 dport=44792 [ASSURED]
 [UPDATE] tcp      6 120 FIN_WAIT src=10.40.0.2 dst=10.44.0.0 sport=44792 dport=4540 src=10.44.0.0 dst=10.40.0.2 sport=4540 dport=44792 [ASSURED]
 [UPDATE] tcp      6 30 LAST_ACK src=10.40.0.2 dst=10.44.0.0 sport=44792 dport=4540 src=10.44.0.0 dst=10.40.0.2 sport=4540 dport=44792 [ASSURED]
 [UPDATE] tcp      6 120 TIME_WAIT src=10.40.0.2 dst=10.44.0.0 sport=44792 dport=4540 src=10.44.0.0 dst=10.40.0.2 sport=4540 dport=44792 [ASSURED]
[DESTROY] tcp      6 src=10.40.0.2 dst=10.44.0.0 sport=44587 dport=4540 src=10.44.0.0 dst=10.40.0.2 sport=4540 dport=44587 [ASSURED]
[DESTROY] tcp      6 src=10.40.0.2 dst=10.44.0.0 sport=44588 dport=4540 src=10.44.0.0 dst=10.40.0.2 sport=4540 dport=44588 [ASSURED]
[DESTROY] tcp      6 src=10.40.0.2 dst=10.44.0.0 sport=44589 dport=4540 src=10.44.0.0 dst=10.40.0.2 sport=4540 dport=44589 [ASSURED]
^Cconntrack v1.4.0 (conntrack-tools): 17 flow events have been shown.
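
(For context, the repeated curl'ing was along the lines of the following sketch; the httpserver endpoint is a placeholder, not the demo's actual URL:)

$ # from my machine, hit an httpserver repeatedly
$ while true; do curl -s http://<httpserver-endpoint>/ >/dev/null; sleep 1; done
$ # meanwhile, on a cluster host, watch for events towards a dataproducer
$ sudo conntrack -E -p tcp -d 10.32.0.2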

@fons your containers don't have weave IP addresses (or MAC addresses). Did you start scope before weave? Could you post the output of docker logs weavescope, please?

$ docker logs weavescope
<probe> 2015/12/17 16:38:59 probe starting, version 0.11.1, ID 69d328f551512825
<probe> 2015/12/17 16:38:59 publishing to: 127.0.0.1, 172.31.0.12, 172.31.0.10
<app> 2015/12/17 16:38:59 app starting, version 0.11.1, ID 3fcfc699bb0228df
<app> 2015/12/17 16:38:59 listening on :4040
<probe> 2015/12/17 16:38:59 Error fetching app details: Get http://127.0.0.1:4040/api: dial tcp 127.0.0.1:4040: getsockopt: connection refused
<probe> 2015/12/17 16:38:59 Control connection to 127.0.0.1:4040 starting
<probe> 2015/12/17 16:38:59 Error fetching app details: Get http://172.31.0.12:4040/api: dial tcp 172.31.0.12:4040: getsockopt: connection refused
<probe> 2015/12/17 16:38:59 Error fetching app details: Get http://172.31.0.10:4040/api: dial tcp 172.31.0.10:4040: getsockopt: connection refused
<probe> 2015/12/17 16:38:59 Publish loop for 127.0.0.1:4040 starting
<probe> 2015/12/17 16:38:59 docker container: collecting stats for 0115360f10c6a0461482ff022881ed7d0bae0c9be0809d497c5768aa157dcf49
<probe> 2015/12/17 16:38:59 docker container: collecting stats for 96de65f6c7dce265abcd6dd4a6119a054aec4ce315d12819171be94761111404
<probe> 2015/12/17 16:39:03 docker container: collecting stats for 59ffda8847cd80595ba7a85d6d09a86447f8164deaf3f8670a73e886dc5ef24d
<probe> 2015/12/17 16:39:05 docker container: error reading event, did container stop? read unix @->/var/run/docker.sock: use of closed network connection
<probe> 2015/12/17 16:39:05 docker container: stopped collecting stats for 59ffda8847cd80595ba7a85d6d09a86447f8164deaf3f8670a73e886dc5ef24d
<probe> 2015/12/17 16:39:07 docker container: collecting stats for e74e74676f4aec4eafd78cdfffa10534a5fdc95acb5c6902ad81860251968871
<probe> 2015/12/17 16:39:07 docker container: error reading event, did container stop? read unix @->/var/run/docker.sock: use of closed network connection
<probe> 2015/12/17 16:39:07 docker container: stopped collecting stats for e74e74676f4aec4eafd78cdfffa10534a5fdc95acb5c6902ad81860251968871
<probe> 2015/12/17 16:39:09 docker container: collecting stats for 3e6c5c0c2e710136e6719f7d688684216125ba8bccf5f91b783839972fce5f3e
<probe> 2015/12/17 16:39:09 Control connection to 127.0.0.1:4040 starting
<probe> 2015/12/17 16:39:09 Control connection to 172.31.0.12:4040 starting
<probe> 2015/12/17 16:39:09 Control connection to 172.31.0.10:4040 starting
<probe> 2015/12/17 16:39:09 Publish loop for 127.0.0.1:4040 exiting
<probe> 2015/12/17 16:39:09 Closing control connection to 127.0.0.1:4040
<probe> 2015/12/17 16:39:09 Closing control connection to 127.0.0.1:4040
<probe> 2015/12/17 16:39:09 Control connection to 127.0.0.1:4040 exiting
<probe> 2015/12/17 16:39:10 docker container: error reading event, did container stop? read unix @->/var/run/docker.sock: use of closed network connection
<probe> 2015/12/17 16:39:10 docker container: stopped collecting stats for 0115360f10c6a0461482ff022881ed7d0bae0c9be0809d497c5768aa157dcf49
<probe> 2015/12/17 16:39:10 Publish loop for 172.31.0.10:4040 starting
<probe> 2015/12/17 16:39:10 Publish loop for 127.0.0.1:4040 starting
<probe> 2015/12/17 16:39:10 Publish loop for 172.31.0.12:4040 starting
<probe> 2015/12/17 16:39:11 docker container: collecting stats for 702b953bc0bd3c28914637522534cc25568762fc88bb4545a9b484bebb3df94d
<probe> 2015/12/17 16:39:11 docker container: collecting stats for 4216d8e33fbbce4540ed635bde733c4fba1d9d0b9d3af943db3356291fa1f04e
<probe> 2015/12/17 16:39:12 docker container: error reading event, did container stop? read unix @->/var/run/docker.sock: use of closed network connection
<probe> 2015/12/17 16:39:12 docker container: stopped collecting stats for 702b953bc0bd3c28914637522534cc25568762fc88bb4545a9b484bebb3df94d
<probe> 2015/12/17 16:39:16 docker container: collecting stats for acec0ea1014c3e025102a97a163d570298164b3ef76ba6035872cfc957beaa1c
<probe> 2015/12/17 16:39:16 docker container: collecting stats for 4bdaec5aabb5ab1c3a6e53d1eb3e15661737b226a78ecc3127fbe0f6f1b18f73
<probe> 2015/12/17 16:39:16 docker container: error reading event, did container stop? read unix @->/var/run/docker.sock: use of closed network connection
<probe> 2015/12/17 16:39:16 docker container: stopped collecting stats for 4bdaec5aabb5ab1c3a6e53d1eb3e15661737b226a78ecc3127fbe0f6f1b18f73
<probe> 2015/12/17 16:39:47 docker container: collecting stats for 7c3a973f73c0523601e82b8eb87e0d4591e3171351b6523841b960cce5f1c96e
<probe> 2015/12/17 16:39:47 docker container: collecting stats for f0332ea4cb41463bfc615649db74101c4437a6174a0b83405618b689300b0f5e
<probe> 2015/12/17 16:39:48 docker container: collecting stats for d6e6fcdc225feb73639e638157d7835f2ed3db0bdddfbb627e967cd1cfe0351a
<probe> 2015/12/17 16:39:48 docker container: error reading event, did container stop? read unix @->/var/run/docker.sock: use of closed network connection
<probe> 2015/12/17 16:39:48 docker container: stopped collecting stats for d6e6fcdc225feb73639e638157d7835f2ed3db0bdddfbb627e967cd1cfe0351a
<probe> 2015/12/17 16:39:50 docker container: collecting stats for 23936b16209bdb0c4c2391a031ad2e2df420e5d9a5957458eedf4eccd2be9a75
<probe> 2015/12/17 16:39:50 docker container: error reading event, did container stop? read unix @->/var/run/docker.sock: use of closed network connection
<probe> 2015/12/17 16:39:50 docker container: stopped collecting stats for 23936b16209bdb0c4c2391a031ad2e2df420e5d9a5957458eedf4eccd2be9a75
<probe> 2015/12/17 16:49:10 docker container: collecting stats for 250d04ce1763f739bfc88b382d30350881ab821100f0c24c4fa9b3c278e6173f
<probe> 2015/12/17 16:49:10 docker container: stopped collecting stats for 250d04ce1763f739bfc88b382d30350881ab821100f0c24c4fa9b3c278e6173f
<probe> 2015/12/17 16:49:36 docker container: collecting stats for 49a4bd162d6b8e355b9e43a353069976c8dbfc93491db75f55bcc6317566b038
<probe> 2015/12/17 16:49:36 docker container: stopped collecting stats for 49a4bd162d6b8e355b9e43a353069976c8dbfc93491db75f55bcc6317566b038
<probe> 2015/12/17 16:49:43 docker container: collecting stats for 751d059871c5f3457b59c9d0c5e00a38e72fe7204f6473152bb57101ac8d9555
<probe> 2015/12/17 16:49:43 docker container: stopped collecting stats for 751d059871c5f3457b59c9d0c5e00a38e72fe7204f6473152bb57101ac8d9555
<probe> 2015/12/17 16:54:50 docker container: collecting stats for 29caaeb35f3cbf49dc1f48018b024e1233aa0ca6c0b28316361f6a1906c40739
<probe> 2015/12/17 16:54:50 docker container: stopped collecting stats for 29caaeb35f3cbf49dc1f48018b024e1233aa0ca6c0b28316361f6a1906c40739

@tomwilkie
Contributor

@fons it should print something like:

vagrant@vagrant-ubuntu-vivid-64:~/src/github.com/weaveworks/scope/experimental/multitenant$ docker logs weavescope
Exposing host to weave network.
10.32.0.12
Weave container detected at 172.17.0.2, Docker bridge at 172.17.0.1
<probe> 2015/12/17 13:45:42 probe starting, version 1ef6006, ID 344e6a5cec61ea05
<app> 2015/12/17 13:45:42 app starting, version 1ef6006, ID 812c78e5e043398
<probe> 2015/12/17 13:45:42 publishing to: localhost:4040, scope.weave.local:4040
<app> 2015/12/17 13:45:42 listening on :4040

This suggests scope was brought up before weave, or it failed to detect weave in some other way. @awh you're off the hook.
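
(The expected ordering, as a minimal sketch with options omitted:)

$ weave launch    # bring up the weave router and bridge first
$ scope launch    # then scope, so its probe can detect the weave container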

@2opremio
Contributor Author

After restarting scope, the logs show:

$ docker logs weavescope
Exposing host to weave network.
10.40.0.3
Weave container detected at 127.0.0.1, Docker bridge at 172.17.0.1
<probe> 2015/12/17 16:59:32 probe starting, version 0.11.1, ID 339104be00cc6398
<probe> 2015/12/17 16:59:32 publishing to: 127.0.0.1, 172.31.0.12, 172.31.0.10
<app> 2015/12/17 16:59:32 app starting, version 0.11.1, ID 4ac02287b14914bf
<app> 2015/12/17 16:59:32 listening on :4040
<probe> 2015/12/17 16:59:32 Control connection to 127.0.0.1:4040 starting
<probe> 2015/12/17 16:59:32 docker container: collecting stats for ffa24c478e15a515124f6ae79b45a12681de310b3d199a479f12ce2020f32c32
<probe> 2015/12/17 16:59:32 Publish loop for 127.0.0.1:4040 starting
<probe> 2015/12/17 16:59:32 docker container: collecting stats for 7c3a973f73c0523601e82b8eb87e0d4591e3171351b6523841b960cce5f1c96e
<probe> 2015/12/17 16:59:32 Control connection to 172.31.0.12:4040 starting
<probe> 2015/12/17 16:59:32 Publish loop for 172.31.0.12:4040 starting
<probe> 2015/12/17 16:59:32 docker container: collecting stats for f0332ea4cb41463bfc615649db74101c4437a6174a0b83405618b689300b0f5e
<probe> 2015/12/17 16:59:32 Control connection to 172.31.0.10:4040 starting
<probe> 2015/12/17 16:59:32 docker container: collecting stats for acec0ea1014c3e025102a97a163d570298164b3ef76ba6035872cfc957beaa1c
<probe> 2015/12/17 16:59:32 Publish loop for 172.31.0.10:4040 starting
<probe> 2015/12/17 16:59:32 docker container: collecting stats for 4216d8e33fbbce4540ed635bde733c4fba1d9d0b9d3af943db3356291fa1f04e
<probe> 2015/12/17 16:59:32 docker container: collecting stats for 3e6c5c0c2e710136e6719f7d688684216125ba8bccf5f91b783839972fce5f3e

This seems to suggest that scope now sees the weave network:

Exposing host to weave network.
10.40.0.3
Weave container detected at 127.0.0.1, Docker bridge at 172.17.0.1

Did you start scope before weave?

Scope is guaranteed to start after Weave due to https://github.com/weaveworks/integrations/blob/master/aws/ecs/packer/to-upload/scope.conf#L19

However, it could be that Scope still launches too early, before it can recognize the weave network, so this seems to be another occurrence of #510.
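
(Purely to illustrate the race, not the fix eventually adopted for #510 — a sketch that waits for the weave bridge device before launching scope; the polling loop is an assumption:)

# wait until the weave bridge exists, then launch scope
until ip link show weave >/dev/null 2>&1; do sleep 1; done
scope launch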

Now I can see one connection to a dataproducer, but I am still missing two connections:

[screenshot: scope view, 2015-12-17 17:04]

Here's the report: https://gist.github.com/2opremio/0086436fb831e82dc562

@tomwilkie
Contributor

Pls check that the logs on the other two machines contain the "Weave container detected at" line.
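
(e.g. a quick check along these lines:)

$ docker logs weavescope 2>&1 | grep 'Weave container detected at'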

@2opremio
Contributor Author

You are right. After restarting scope on those other two machines, everything works as expected. This is another instance of #510.
