This repository has been archived by the owner on May 16, 2023. It is now read-only.

Make it possible to optionally share the exact time of encounters #266

Closed
Ein-Tim opened this issue Nov 9, 2020 · 36 comments
Assignees
Labels
enhancement New feature or request mirrored-to-jira This item is also tracked internally in JIRA

Comments

@Ein-Tim
Contributor

Ein-Tim commented Nov 9, 2020

Feature description

If a person who has tested positive is willing to upload their DKs to the server (and warn others), it would be nice if they got the option to explicitly allow the exact time of the encounter to be shown to their contacts.

Problem and motivation

The exact time would help contact persons better assess their risk and thus help them decide on their next steps.

Additional Information

I know that the Apple/Google API would have to be updated for this, but since I see that very many people (here in the community) would love such a feature, this would IMHO be the best way to integrate it (if we want to keep the highest level of privacy).


Related Issues


Internal Tracking ID: EXPOSUREAPP-3670

@Ein-Tim Ein-Tim changed the title Make it possible to optionally share the exact time of an Encounter. Make it possible to optionally share the exact time of an Encounter Nov 9, 2020
@Ein-Tim Ein-Tim changed the title Make it possible to optionally share the exact time of an Encounter Make it possible to optionally share the exact time of Encounters Nov 9, 2020
@akuckartz

I am opposed to all such "voluntary" features.

The reason is the demands by representatives of the governing party CDU to force people to use the app:
https://www.butenunbinnen.de/nachrichten/gesellschaft/corona-app-pflicht-cdu-roewekamp-bremen-100.html

@Ein-Tim
Contributor Author

Ein-Tim commented Nov 10, 2020

@akuckartz
But how is a voluntary feature like this connected to forcing people to use the app? Since it's voluntary, nobody is forced in any way by this feature.

@akuckartz

@Ein-Tim Those authoritarians would potentially also force people to use such features.

@Ein-Tim
Contributor Author

Ein-Tim commented Nov 10, 2020

Okay @akuckartz, but then your problem is not that this feature would be voluntary, but with the feature itself, right?

@akuckartz

@Ein-Tim Exactly.

@heinezen heinezen added enhancement New feature or request mirrored-to-jira This item is also tracked internally in JIRA labels Nov 10, 2020
@heinezen
Member

Hey @Ein-Tim ,

I've mirrored this request and the related issue #206 to Jira, so that we can get proper feedback from the developers and the RKI. We will monitor the feedback here and report back if we get any news.

Regards,
CH


Corona-Warn-App Open Source Team

@OlympianRevolution

OlympianRevolution commented Jan 6, 2022

Is there any update on this feature?

It seems like it is a great way to give people more information about contacts if they themselves would be willing to share the exact times of their warnings. It seems to be one of the most requested features, at least on twitter.

Furthermore I believe that denying normal users the possibility of knowing the exact time while allowing it for technically savvy users is unfair and does not protect anyone's privacy. I.e., technically savvy users are allowed to deanonymize by time (via root or CCTG + Companion) but normal users are not.

Have, for example, Google and Apple categorically rejected any such changes to the ENF?
Have internal analyses concluded that allowing technically savvy users to know the time, but not everyone, still protects privacy?

Thanks for any response.

@dsarkar
Member

dsarkar commented Jan 6, 2022

Hi @OlympianRevolution,

Thanks for asking: I am afraid there are no updates in this matter. The issue of data protection and preserving privacy is paramount in this project, see e.g. https://github.com/corona-warn-app/cwa-app-android/issues/4613#issuecomment-1002605566. Wide acceptance of this app is largely based on maintaining privacy for both the person who shares their positive result and the user being warned.

In the other project/app mentioned, the developers do not use the Google ENF framework; they coded their own solution independent of it. Therefore I would say that the wording "allowed" (by whom anyway?) is not applicable.

The app serves its primary purpose of anonymous contact tracing and anonymous warning.
This can also be seen here:

@OlympianRevolution

OlympianRevolution commented Jan 6, 2022

Thanks for the response.

I still disagree that the privacy/anonymity with respect to time data of the person sharing the result is currently being protected, even if that was the intent.

status quo:

Technically savvy people are easily able to deanonymize via time. It is legal, since it is legal to record Bluetooth signals and record when I received them, and it is legal to root my device and analyze the internal ENF database. I.e., timing info is not secure and thus timing-based privacy is currently not being maintained.

I see two options both of which make much more sense than the status quo:

  1. Protect timing data via encryption, not just via root-only access, and deny access to the positive-result database to apps such as CCTG (a solution for Huawei would have to be found if we do that). Thus technically savvy people would also not have access to timing data.
  2. Allow all users to have access to the same timing (and dB) information that technically savvy users currently have.

@OlympianRevolution

I do not doubt it serves its purpose but I think it is clear that more information would help users make more informed decisions as well and thus could help it serve its purpose better.

@heinezen
Member

heinezen commented Jan 7, 2022

@OlympianRevolution

Technically savvy people are easily able to deanonymize via time. It is legal, since it is legal to record Bluetooth signals and record when I received them, and it is legal to root my device and analyze the internal ENF database. I.e., timing info is not secure and thus timing-based privacy is currently not being maintained.

I think there is a misunderstanding here. The problem with privacy would not be that you personally can recover this information from your phone. That's always possible with root access on client devices. The problem is that the Corona app would get access to this information. Since pretty much all Corona apps that use ENF are supplied by governments, there are reasonable concerns that this data might be collected and misused by said actors to undermine privacy. Therefore, it is a matter of trust that these apps only get/use the minimum required privileges for fulfilling their purpose.

Besides that, knowing the exact time of the encounters is mostly irrelevant for the risk calculation, except for determining the length of an encounter. The CWA tries to estimate whether the user was exposed to a viral load of COVID with a high likelihood of infection. It does not matter whether this load was received at 7am in the morning or at 3pm in the afternoon. The possible outcome (i.e. an infection) is what matters.
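As a rough illustration of that point, here is a conceptual sketch (not the actual CWA or ENF code; the weights are invented): the inputs to such a calculation are durations and attenuation values, and the clock time of the encounter never appears.

```kotlin
// Conceptual sketch only - invented weights, not the real CWA/ENF parameters.
// Risk is driven by how long and how close an exposure was; the time of day
// of the encounter is simply not an input.
data class ScanSlice(val attenuationDb: Int, val seconds: Int)

fun weightedExposureMinutes(slices: List<ScanSlice>): Double =
    slices.sumOf { slice ->
        val weight = when {                  // closer contact -> higher weight
            slice.attenuationDb < 55 -> 1.0
            slice.attenuationDb < 63 -> 0.5
            else -> 0.0
        }
        weight * slice.seconds / 60.0
    }

fun main() {
    // 10 close minutes count fully, 20 distant minutes not at all -> 10.0
    println(weightedExposureMinutes(listOf(ScanSlice(50, 600), ScanSlice(75, 1200))))
}
```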

I do not doubt it serves its purpose but I think it is clear that more information would help users make more informed decisions as well and thus could help it serve its purpose better.

I've seen this argument a lot, but I fail to see where getting the exact time of exposures would lead to "more informed decisions". In fact, the process after receiving a red warning card is straight-forward: Self-isolate and get tested. I would say there aren't even that many decisions to make here. The most relevant decision that I could think of is the decision to share a positive test result, but that has little to do with the time of encounters.

@Ein-Tim
Contributor Author

Ein-Tim commented Jan 7, 2022

@OlympianRevolution Maybe you'd like to add your comments here: #363

@OlympianRevolution

@OlympianRevolution Maybe you'd like to add your comments here: #363

not sure, #363 seems very general... unless we can show it is actually the most requested feature... twitter crawling?

@Ein-Tim
Contributor Author

Ein-Tim commented Jan 7, 2022

@OlympianRevolution

Nevermind, @dsarkar already mentioned the exact time of risk encounters in the OP of the issue, so I'm sure the team knows that this is a desired feature.

@OlympianRevolution

@heinezen Thank you for your reply!

Still I disagree with some of your statements.

Deanonymization of users by users is frequently given as a reason not to give the exact time, good to know this was not the intent of the implementation.

I believe it would be technically possible to restrict the knowledge of the exact time to RAM while computing the risk. This would make it very difficult even for root users to determine the exact time stamps but of course that would entail a significant change to the ENF and is thus likely not feasible.

Of course apps should only get timestamps of positive case contacts. A malevolent government already has in their server those positive RPIs including the rolling_start_interval_number (pretty exact time). I do not see how the exact time a contact occurred benefits a malevolent government in finding out with whom an app user had contact. Before they knew that they had contact with one or more of the many anonymous RPIs on a given day and afterwards they know they had contact with one of many anonymous RPIs at a specific time. The total number of people supplying RPIs should be very similar for each rolling interval or for the entire day. It is possible I am missing an attack vector, and I would be happy if someone were to describe the potential deanonymization by a malevolent app if given the exact time.
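For context on the rolling_start_interval_number mentioned above: in the Exposure Notification key format it counts 10-minute intervals since the Unix epoch, so it already pins each published key's validity start to roughly 10-minute precision. A small sketch (the example value is made up):

```kotlin
import java.time.Instant

// rollingStartIntervalNumber counts 10-minute (600 s) intervals since the Unix epoch.
fun rollingStartToInstant(rollingStartIntervalNumber: Int): Instant =
    Instant.ofEpochSecond(rollingStartIntervalNumber * 600L)

fun main() {
    // Hypothetical example value: 2_738_880 intervals -> 2022-01-28T00:00:00Z
    println(rollingStartToInstant(2_738_880))
}
```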

Besides that, knowing the exact time of the encounters is mostly irrelevant for the risk calculation, except for determining the length of an encounter. The CWA tries to estimate whether the user was exposed to a viral load of COVID with a high likelihood of infection. It does not matter whether this load was received at 7am in the morning or at 3pm in the afternoon. The possible outcome (i.e. an infection) is what matters.

I've seen this argument a lot, but I fail to see where getting the exact time of exposures would lead to "more informed decisions". In fact, the process after receiving a red warning card is straight-forward: Self-isolate and get tested. I would say there aren't even that many decisions to make here. The most relevant decision that I could think of is the decision to share a positive test result, but that has little to do with the time of encounters.

Yes, it is irrelevant according to the current risk calculation scheme, but it is not irrelevant to the underlying true probability distribution. I.e., currently both 9 minutes in a well-ventilated space with everyone wearing an FFP2 mask and 4 hours in a club partying without a mask are given the same risk status, which is patently untrue, because the app can of course not know my behavior. The good thing is that I might!

In the first case I might take symptoms more seriously, reduce contacts, wear an FFP2 more fastidiously and do some self-tests, but not go through the hassle of convincing someone to do a PCR test and quarantining myself even from important meetings. In the second case I would try to get a PCR test and isolate.

Another example that is frequently brought up is that people know that at the given time they were alone at home or in a car and thus would know that this is likely a false positive warning.

Thanks for reading! Best regards, Christian

@OlympianRevolution

Potential privacy concerns about knowing the exact timestamp can be mitigated exactly by this feature, by making it an OS-level opt-in.

@ndegendogo

@OlympianRevolution I don't see how an OS-level opt-in can mitigate the privacy concerns here.
The original specification and implementation of ENF/ENS by Apple/Google hides this detailed information from the app and the user as much as possible and reasonable.
Yes, the re-implementation of this spec by microG / CCTG allows tech-savvy users more access than the original. Same with rooted Android phones. Btw, not for iPhone, afaik.

As a positive-tested user A, I am willing to share my DEK keys to support the contact tracing, and I trust in the privacy of my data.
It doesn't help me if recipient user B can then just opt-in to see this data.

@jucktnich

@ndegendogo It should be up to the sharing user if he wants the date & time displayed to the warned users.

@OlympianRevolution

I would say it works like read receipts. You get exact times if you share exact times...

@ndegendogo

@jucktnich @OlympianRevolution

I would say it works like read receipts. You get exact times if you share exact times...

Sounds reasonable.
Still, even if I have agreed to share my own exact timestamps, I should get only the exact timestamps from other users who have themselves agreed to share their exact timestamps. (I must admit that I don't know exactly how these read receipts are implemented...)

It should be up to the sharing user if he wants the date & time displayed to the warned users.

Agreed. However, this means an extension of the ENF/ENS, so it will need collaboration from Apple/Google.
The current ENF/ENS specification and implementation collects RPIs without metadata such as a consent flag.
Of course such a consent could be sent to the server when I share my DEK keys. Currently I can voluntarily add information like the date of my symptoms (if I have them). Still, in this case too, the consent info needs to be downloaded with the key bundle, which is then processed by the ENF.

@markum

markum commented Jan 9, 2022

Thanks for asking: I am afraid there are no updates in this matter. The issue of data protection and preserving privacy is paramount in this project, see e.g. https://github.com/corona-warn-app/cwa-app-android/issues/4613#issuecomment-1002605566. Wide acceptance of this app is largely based on maintaining privacy for both the person who shares their positive result and the user being warned.

I believe that this standpoint should be reconsidered in the current situation, where people receive more and more red warnings. Mostly false alarms, which with rising case numbers make the app basically useless: you cannot react to every red alarm anymore. Without CCTG, which provides the information on when and for how long a risk contact occurred, I would have given up on the contact tracing app. CCTG allowed me to learn that in all cases where I had a red screen I was safe and protected through a well-fitted mask, a wall or other measures. When the number of cases rises further, acceptance of the app, or of following its warnings, will fall if the app does not provide good enough information. This should be weighed against the concern that acceptance of the app will go down if this feature is provided.
I am really concerned about privacy, but I do not see a real privacy issue here. If someone in my surroundings contracts COVID, I will surely learn about this. COVID is not something like AIDS, which carries a stigma. So the real point of knowing the time of a risk contact is to sort out all the situations where you knew for sure you were safe. It is not about some more or less anonymous person who randomly crossed your path while you were wearing a mask or a wall was between you and the other person.

@heinezen
Member

Deanonymization of users by users is frequently given as a reason not to give the exact time, good to know this was not the intent of the implementation.

Avoiding this kind of deanonymization is still relevant for preserving data privacy. The time of an encounter must be stored somewhere on the client device; that is unavoidable. But just because you could find out that value, this doesn't mean that the app should display it or that the value has relevance for determining the risk. For that reason, saying that the app "denies" this information is misleading. The app also does not show a bunch of other information, like the transmission risk level of an encounter or the value of the RPIs, because the vast majority of users cannot do anything with that.

Of course apps should only get timestamps of positive case contacts. A malevolent government already has in their server those positive RPIs including the rolling_start_interval_number (pretty exact time).

This is incorrect. The server does not get any RPIs or associated metadata, and neither does the app. RPIs and metadata are stored internally inside the ENF database. When a user is tested positive and shares the result, the CWA server only receives and stores the decryption keys for the shared RPIs. The CWA app inputs the downloaded keys into the ENF, which handles the checking. See figures 12 and 13.
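To make this division of responsibilities a bit more concrete, here is a hypothetical illustration (the types and names are placeholders, not the real ENF or CWA API): the app and server only handle diagnosis keys, while the framework alone matches them against its private RPI store and returns day-level results.

```kotlin
// Hypothetical placeholder types - not the real ENF/CWA API.
data class DiagnosisKey(val keyData: ByteArray, val rollingStartIntervalNumber: Int)
data class DayRisk(val dateUtc: String, val riskScore: Int)   // day-level resolution only

// Stand-in for the framework: only it can compare keys against the RPIs it has stored.
interface ExposureFramework {
    fun matchKeys(downloadedKeys: List<DiagnosisKey>): List<DayRisk>
}

// The app merely forwards the published keys and consumes aggregated results;
// it never sees the raw RPI database or exact encounter timestamps.
fun checkRisk(downloadedKeys: List<DiagnosisKey>, framework: ExposureFramework): List<DayRisk> =
    framework.matchKeys(downloadedKeys)
```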

Yes, it is irrelevant according to the current risk calculation scheme, but it is not irrelevant to the underlying true probability distribution. I.e., currently both 9 minutes in a well-ventilated space with everyone wearing an FFP2 mask and 4 hours in a club partying without a mask are given the same risk status, which is patently untrue, because the app can of course not know my behavior. The good thing is that I might!
In the first case I might take symptoms more seriously, reduce contacts, wear an FFP2 more fastidiously and do some self-tests, but not go through the hassle of convincing someone to do a PCR test and quarantining myself even from important meetings. In the second case I would try to get a PCR test and isolate.

After reading a few other examples given by Twitter users, I think there are more misconceptions to clear up. First of all, I get the feeling that a lot of users think that the app detects the risk that they have COVID. However, the actual purpose of the app is to warn them about their risk of having infectious exposures. The two problems are strongly related, of course, but they are not the same. Conflating the two can be problematic for several reasons.

  1. The red warning in the CWA is shown when your risk score crosses a minimum threshold. This threshold signifies the point at which things should start to be taken seriously. However, the individual risk of exposure can differ greatly between two different people with a red warning, depending on by how much they passed the threshold. Users should not assume that a red warning is a static risk from which they can "subtract" mitigation factors such as FFP2 masks.
  2. The information in the CWA is incomplete because it can only tell you about exposures it knows about (i.e. exposures that were reported via the app). Other additional exposures may have occurred, as not everyone uses the app. What is good about the CWA is that it will show you the risk for exposures that definitely happened, but this is only an indicator of the real risk (which is impossible to accurately determine). The only way to know is to take a PCR test.
  3. From what I've seen, when people say they want to estimate the risk more accurately, they actually want to determine if they have a lower risk. The question is: Can they actually do this or is the wish the father of the thought here? Because if they can't, then all we get is people who don't take the warnings seriously enough.

All this doesn't mean that knowing the specific time would be useless in some circumstances. However, to get this feature into ENF there preferably needs to be scientific evidence that there are benefits to this approach. Additionally, these benefits should outweigh the privacy concerns. Otherwise this is unlikely to be realized.

@ndegendogo

ndegendogo commented Jan 10, 2022

@heinezen

I am using public transport, which means I regularly pick up exposures. So far most of them were "negligible" (far away or very short; the CWA doesn't even show them, I see them only in my ENF log). And of course I am wearing my FFP2 mask during transportation.
I had two or three red warnings till now, and I have always followed the advice (PCR test and stay at home / self-isolate till my negative result was available).

I have yet to find out if this behaviour is still practicable with Omicron and the expected rising incidences / numbers of exposures. As long as I cannot work from home, exposures will be unavoidable. I am mitigating these risks with my vaccination and my mask. But the app doesn't know when I am wearing my mask (during transportation) and when I am not (during lunch-time).

I cannot get PCR tests and self-isolate all the time. So I need to do "some degree" of risk assessment myself.

@Ein-Tim
Contributor Author

Ein-Tim commented Jan 10, 2022

I agree with @ndegendogo here.

Either the app gives the user the option to take the vaccination status and other protective measures (e.g. wearing a FFP2 mask) into account for the risk calculation or the user has to have an option to somehow estimate the "real risk" of an infection.

@MartinH-open

I still support this enhancement request.
But in regards to the comment "Either the app gives the user .. other protective measures (e.g. wearing a FFP2 mask) into account for the risk calculation", I think today the user has this option by disabling the Bluetooth connection during such safe periods. But this is not very practical for many. For me this works most of the time: I enable Bluetooth when I consider there might be some risk coming up.

@OlympianRevolution

OlympianRevolution commented Jan 10, 2022

@heinezen Thanks for the response. As you can imagine, I am not fully convinced by some of your arguments, but thanks for correcting some of my false statements.

Avoiding this kind of deanonymization is still relevant for preserving data privacy. The time of an encounter must be stored somewhere on the client device; that is unavoidable.

I think we can agree that simply not displaying information is not a good approach to security or privacy. If the time was meant to be private then it should have been made private and not just be not displayed to most users. (I believe it is technically possible via encrypted computing and restricted key server access, but do not have a proof of concept ready)

However, the individual risk of exposure can differ greatly between two different people with a red warning, depending on by how much they passed the threshold. Users should not assume that a red warning is a static risk from which they can "subtract" mitigation factors such as FFP2 masks.

I agree; that is exactly why we want this feature, so we can know by how much we passed this threshold or whether we were actually even under it. I think most users correctly assume that these minimum thresholds are based on an RKI analysis of indoor transmission between maskless unvaccinated people. (Ideally it should be documented what these thresholds are based on.) If FFP2 masks and vaccines have an impact, an idealized risk assessment would of course take that into account. For example, the app could ask you to enter whether and what type of mask you were wearing and what your vaccination status was at a given time, and then do the risk calculation for you.
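Purely to illustrate the kind of self-assessment described here (the reduction factors below are invented placeholders, not RKI or CWA values):

```kotlin
// Invented, illustrative factors only - a user-side adjustment of a base
// exposure estimate by context the app cannot know by itself.
fun adjustedRisk(baseRisk: Double, woreFfp2: Boolean, vaccinated: Boolean): Double {
    var risk = baseRisk
    if (woreFfp2) risk *= 0.3     // placeholder reduction factor
    if (vaccinated) risk *= 0.6   // placeholder reduction factor
    return risk
}

fun main() {
    println(adjustedRisk(1.0, woreFfp2 = true, vaccinated = true))  // roughly 0.18
}
```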

From what I've seen, when people say they want to estimate the risk more accurately, they actually want to determine if they have a lower risk. The question is: Can they actually do this or is the wish the father of the thought here? Because if they can't, then all we get is people who don't take the warnings seriously enough.

At least for my part I would also look at the case where I think the actual risk is especially high. The main reason people do not take warnings seriously enough imo has to do with false positive rates and the difficulty of getting a free PCR test. It should be in the interest of all CWA proponents to increase the true positive rate while decreasing the false positive and false negative rates by enabling a risk assessment that is closer to the true risk assessment if all factors were known.

However, to get this feature into ENF there preferably needs to be scientific evidence that there are benefits to this approach. Additionally, these benefits should outweigh the privacy concerns.

I agree that scientific evidence would be good (we should have a discussion about whether enough data is being collected from the data donation and a new EDUS to do that), but similar to Bluetooth-based contact tracing itself, the logical argument that:
if contact tracing helps -> Bluetooth-based contact tracing helps -> Bluetooth-based contact tracing with a lower false positive and false negative rate helps more
is solid.

I believe that most of the privacy concerns can be mitigated by using the opt in feature proposed in this issue. You only receive exact times if you share exact times. Ideally it should be implemented such that if you opt out, technically savvy people cannot get the exact time either, unless they engage in very sophisticated hacking.

I'm fine if we come to the conclusion that Google/Apple will make no more changes to the ENF, but I still have not really seen a good argument against allowing users to opt in to share and receive the exact time.

@cvoinf

cvoinf commented Jan 10, 2022

I guess this question has been asked already quite often, but at this point has to be asked again:

Why does CWA depend on google's ENF?

CTTA has proven it works without this proprietary library and allows getting more information, works with old phones and does NOT need rooted devices.
ENF should be no reason for not seriously taking this request into consideration, and there is no reason to wait for collaboration from Google.

@Ein-Tim
Contributor Author

Ein-Tim commented Jan 10, 2022

@cvoinf

Could you open a new feature request for this?

I use CCTG on my secondary device and it gets killed quite often by the system (stops recording any exposures).

Some other thoughts on this:

  • Would Google let an update pass which switches CWA to CCTG?
  • All users would have microG (or bundled microG) on their device then, while atm they have the Google Play Services installed anyway.
  • What about iPhone users?

@ndegendogo

@cvoinf CCTG runs on Android devices only. Same for MicroG.
For iOS-based smartphones I am not aware of any alternative to Apple's ENF library.

@Ein-Tim
Contributor Author

Ein-Tim commented Jan 11, 2022

@heinezen What do you think of: https://www.tagesschau.de/inland/corona-warnapp-107.html which says:

"Ich weiß jetzt aber leider nicht: Schlägt die App Alarm, weil ich dort im Restaurant war, in dem auch eine infizierte Person saß? Oder stand ich nur zu lange mit Maske am Skilift in der Schlange und hinter mir befand sich eine infizierte Person?" Für die Risikobewertung sei das ein entscheidender Unterschied, so Lukowicz. Doch die App teilt einem nicht den genauen Zeitpunkt einer Risikobegegnung mit. "Datenschutz ist sicher wichtig, aber bei der App hat man es übertrieben", sagt der Experte. "Ohne Kontext, etwa zur Uhrzeit, ist der Nutzen sehr begrenzt. Denn am Ende kann der Mensch das Risiko besser einschätzen als die Technik allein."

This also seems to be the public perception of the app: too much data protection (Datenschutz).

@OlympianRevolution

I wouldn't even simplify it to "too much data protection"; this feature specifically is the most requested one IMO. And since the data is available anyway to users like me, we would not even be decreasing data protection, just making the same amount of data accessible to all users.

@dwt

dwt commented Jan 12, 2022

@OlympianRevolution I hope you realise that just because the data is available to you, does not mean that it doesn't protect data of other users where the data is not available.

The idea of this restriction is that it protects the identity of those that send exposure notifications.

Sure, I as the notified person, would like the precise time and location of where I got the notification. This allows me to get a better sense of how to interpret the notification.

But it also exposes the person sending the notification much more directly and means that many more people will think twice about actually sending the notification.

So what I'm saying is this: There is a balance here to strike: how detailed the information is so you can meaningfully act, and how fuzzy it has to be so everybody has no qualms sending it. Apple and Google have decided that a one day resolution is the trade-off they are making - any change has to first be taken up with them. I would suggest starting a scientific study that shows how people actually perceive this tradeoff and how much the willingness of people to share notifications changes with more detailed reports.

But given that we still have far too few people actually using (and sending) exposure notifications, I would wager that any change here would not increase the willingness of people to send exposure notifications, and that therefore any change to this policy is more harmful than helpful to you - as someone who actually wants to get notified if he meets a sick person.

@OlympianRevolution

OlympianRevolution commented Jan 12, 2022

@dwt Thanks for the response!
Again I strongly disagree with many of these statements.

I hope you realise that just because the data is available to you, does not mean that it doesn't protect data of other users where the data is not available. The idea of this restriction is that it protects the identity of those that send exposure notifications.

Imagine there is data you intend to keep private, say the name of the warning person. If our privacy protection implementation were to record that information in an easily extractable way but simply not display it to most users, this may be beneficial from a UI-overload perspective, but it does not prevent the name of that person from being determined and leaked by the malicious actors we are afraid of. If it is meant to be private it should be actually private, for example by restricting access to the key server.

There is a balance here to strike: how detailed the information is so you can meaningfully act, and how fuzzy it has to be so everybody has no qualms sending it.

I agree there is a balance to strike, but it appears that most or many users would prefer the exact time and duration. But the best part is that the solution proposed here gets the best of both worlds. You get the exact time if you are willing to share your exact time. I.e., users could choose on which side of the balance they want to be. No user has to divulge more information if they don't want to. (Except of course that this information is being divulged to all technically savvy users.)

Apple and Google have decided that a one day resolution is the trade-off they are making - any change has to first be taken up with them

I am aware, but there is no way of contacting the ENF team except via the CWA team here. I think it is valid to decline the feature if Google and Apple have said that there will categorically be no changes to the ENF. Still, we as a community could say:
"This feature is a good idea that would help people assess their risk better while only changing privacy considerations for those that consent. The only reason we are not doing it is because Google and Apple have more power to determine infection control in Germany than the German government and its contractors."

But given that we still have far too few people actually using (and sending) exposure notifications, I would wager that any change here would not increase the willingness of people to send exposure notifications, and that therefore any change to this policy is more harmful than helpful to you - as someone who actually wants to get notified if he meets a sick person.

I do not think that doing nothing will benefit the numbers either at this point. I think currently usage is also hampered by frequent news reports even from doctors and epidemiologists saying the warnings are not fine grained enough to adequately assess risk. You may be right that the change could cause a media stir, but if people uninstall the app because some people can consent to displaying the exact time, then those people should not have been using the app anyway, since it was already possible for me to know the exact times, and I would be happy that they were properly informed that the exact time is not protected.

@dwt

dwt commented Jan 14, 2022

@OlympianRevolution Please note that I am not a developer participating in this project, but just another user.

From the end: "I think currently usage is also hampered by frequent news reports even from doctors and epidemiologists saying the warnings are not fine grained enough to adequately assess risk." I am sympathetic to this point - more detailed data always seems nicer at first glance. However, the right of the warning person to have a reasonable expectation of remaining anonymous is also a really important factor. And the whole reason of the app is not to give you the most detailed risk assessment possible, but to make you aware that you spent time in a high risk environment, and then you can stay more cautious, and also commission a series of tests to retain some assurance about your own infection status.

It's not super much, but it doesn't break down like the contact tracing of the Gesundheitsamt (public health office) does if the case numbers go high.

Still, getting more people to actually install the app, use it and send exposure notifications needs to be the number one driving force - way more important than getting more precise data from those that already are.

Also, no amount of precision can help you if we as society decide that it is ok to let the pandemic wave run through the entire population - that just means you will meet infected persons all the time and cannot do much about that. (Sad as that is, and as much as I disagree with that plan).

You get the exact time if you are willing to share your exact time. IE users could choose on which side of the balance they want to be. No user has to divulge more information if they don't want to.

I am sure you realise that technically this is not possible. You publish the numbers that you have sent while being infectious, and people download these and the phone compares them to the numbers it has seen and then returns the warning level and dates. There is no way to add additional information by the warning person here - and once you have the data on your phone, there is no way to enforce that these additional restrictions are not set. (Which is the reason that you can have more detailed information - get yourself a [probably] rooted android phone, with an open source implementation of the exposure notification framework, where you can get all the details you want - nobody can stop you.)

I think these restrictions are reasonable for such a critical system that has a world-wide rollout. Even though I would like more fine-grained warnings, I recognise that the date-based boundary will likely stay that way for now. I am open and hopeful that we get some scientific research on this question going, and can make better fact-based arguments than just 'I would like it to be different' in the future though.

@OlympianRevolution

Thanks for the response @dwt, still I don't think we will agree on many of the points you raised.

However, the right of the warning person to have a reasonable expectation of remaining anonymous is also a really important factor. And the whole reason of the app is not to give you the most detailed risk assessment possible,

Yes, and this feature would not change anything for users who want the current level of anonymity. I disagree: the purpose of the app is to warn users when they have crossed a certain probability threshold that the RKI considers a good balance between true positives, false positives and false negatives, in order to break infection chains. I.e., instead of testing everyone, test those with a high risk of infection. Ideally an all-knowing app would even give you the probability. This probability threshold can currently only be evaluated using dB and time, but other factors that are only knowable to the affected people significantly influence this risk.

Still, getting more people to actually install the app, use it and send exposure notifications needs to be the number one driving force - way more important than getting more precise data from those that already are.

I agree, but I believe from Twitter and my personal friends that the app is frequently ignored or deemed useless exactly because users cannot differentiate between truly high-risk situations ("2 hours in a restaurant within a few meters") and
lower-risk situations ("everybody wearing FFP2 in a Baumarkt"). It may be that the currently optimal feature might be to make sure not too many warnings are ignored.

I am sure you realise that technically this is not possible. You publish the numbers that you have sent while being infectious, and people download these and the phone compares them to the numbers it has seen and then returns the warning level and dates. There is no way to add additional information by the warning person here - and once you have the data on your phone, there is no way to enforce that these additional restrictions are not set.

I think it is very simple to communicate consent and relatively simple to enforce.
Simply attach a flag (allowed to use time / not allowed to use time) to the uploaded positive keys. By restricting access to the key server, other apps like CCTG should be unable to determine which RPIs were positive, since they can no longer decrypt the RPIs. Even if they know the time, they no longer know which RPIs were positive, so the time is useless. The solution would be to simply restrict access to the key server to the official CWA, which then only lets the ENF keep positive RPIs with the exact time in RAM, which should be much more difficult to compromise than the database that is currently kept.
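A minimal sketch of the flag proposed here, assuming a hypothetical upload payload and display rule (this is not the real CWA submission format, and the field names are invented); the exact timestamp would only ever be shown when both the uploader and the warned user opted in:

```kotlin
// Hypothetical payload and display rule - invented names, not the real CWA/ENF format.
data class SubmittedKey(
    val keyData: ByteArray,
    val rollingStartIntervalNumber: Int,
    val allowExactTime: Boolean            // proposed opt-in flag set by the uploading user
)

data class Match(val day: String, val exactTime: String?, val uploaderAllowedExactTime: Boolean)

// "Read receipt" rule discussed earlier: exact time only if both sides opted in,
// otherwise fall back to the current day-level resolution.
fun displayedTime(match: Match, recipientSharesExactTime: Boolean): String =
    if (match.uploaderAllowedExactTime && recipientSharesExactTime && match.exactTime != null)
        match.exactTime
    else
        match.day
```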

Yes, I know that currently the exact time is accessible to people like myself (I use it to assess my own risk: got a PCR test because of 2 hours in a restaurant, did not get a PCR test when I knew it was 20 min in a bus with FFP2). Exactly because it is easily accessible to technically savvy people, I think it is ridiculous to pretend this information is currently private and not allow normal people to see this information.

I also think these restrictions are reasonable, they are in my opinion however not the balance that a large majority of worldwide users would prefer. The good thing is that with this feature we do not decide, the user decides. Google is a company which lets you opt in to having your position tracked 24/7, so they should reasonably not have a problem letting a user opt in to share the exact time.

I have lost most hope that German data will lead to robust scientific studies showing the efficacy of Bluetooth-based contact tracing (EDUS has been stopped, not enough data collected in the data donation), even though I believe it is a very effective tool if people do not ignore warnings.

@Ein-Tim
Contributor Author

Ein-Tim commented Feb 21, 2023

As the CWA project went into ramp-down mode, I don't expect this feature to be implemented. I'm therefore closing this issue.

@Ein-Tim Ein-Tim closed this as not planned Won't fix, can't repro, duplicate, stale Feb 21, 2023