Game theoretical proofs of cooperation in Trustchain #5202

Closed
cuileri opened this issue Mar 11, 2020 · 2 comments

cuileri commented Mar 11, 2020

Motivation:

  • Our lab recently submitted a manuscript on the extended version of the lab's blockchain solution, Trustchain. We argue that the vested interest of peers in a specific peer gives rise to “interest-based communities”, whose security promises (such as fraud detection) rely on the cooperation of community members. However, the incentives, costs, feasibility, and advantages of that cooperation have not yet been explored. We can use game-theoretic methods to model the users' interactions as a game and show that a cooperative strategy is stable in equilibrium, even though it does not dominate free-riding (a textbook payoff structure with both properties is sketched below).
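
For concreteness, one textbook payoff structure under which both claims hold at once is a stag-hunt-style game with placeholder parameters b > c > 0 (illustrative symbols, not taken from the manuscript):

$$
\begin{array}{c|cc}
 & \text{Cooperate} & \text{Free-ride} \\ \hline
\text{Cooperate} & (b-c,\; b-c) & (-c,\; 0) \\
\text{Free-ride} & (0,\; -c) & (0,\; 0)
\end{array}
\qquad b > c > 0.
$$

Against a cooperator, deviating to free-riding yields $0 < b-c$, so mutual cooperation is a Nash equilibrium; against a free-rider, cooperating yields $-c < 0$, so cooperation does not dominate free-riding.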

After an initial literature review, we observed that costly signaling can be used for the proof. The term comes mainly from evolutionary biology, where it is used to explain self-sacrificial behavior of individuals for the benefit of the group. According to the theory, individuals send honest signals to the group by performing costly and hard-to-fake work, thereby increasing their likelihood of forming an alliance (or being reciprocated).

A recent study built a costly-signaling model in which deception takes the form of a Sybil attack.
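
As a first pass at formalizing this, here is a minimal sketch of the separating (honest-signaling) condition in such a model; all names and numbers below are hypothetical placeholders, not values from the cited study:

```python
# Minimal costly-signaling sketch (hypothetical numbers): a signal such as
# performing validation/audit work separates genuine peers from Sybils when
# it is affordable for genuine peers but too expensive for Sybils to fake.
ALLIANCE_BENEFIT = 5.0   # value of being accepted/reciprocated by the community
COST_GENUINE = 2.0       # cost of the signal for an honest, resource-backed peer
COST_SYBIL = 8.0         # cost for a Sybil identity to fake the same work

def separating_equilibrium(benefit: float, cost_genuine: float, cost_sybil: float) -> bool:
    """Honest signaling is an equilibrium iff genuine peers gain by signaling
    while Sybils would lose by imitating them (deception does not pay)."""
    genuine_signals = benefit - cost_genuine > 0
    sybil_stays_out = benefit - cost_sybil < 0
    return genuine_signals and sybil_stays_out

print(separating_equilibrium(ALLIANCE_BENEFIT, COST_GENUINE, COST_SYBIL))  # True
```

The modeling question for Trustchain would then be which mechanism makes the signal genuinely more expensive for Sybil identities than for honest peers.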

Scope:

  • The scope of the research is not fixed yet. The problem may be generalized to the drivers of cooperation in decentralized systems (even markets), or specialized to “the drivers of participation in the validation and audit processes of Trustchain”.

Initial Hypothesis:

  • Provided the fraction of benevolent (cooperating) users stays above a given threshold, i.e. the fraction of free-riders stays below it, a cooperative strategy towards the security promises of Trustchain is evolutionarily stable (see the sketch below).
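
A minimal sketch of what "evolutionarily stable above a threshold" could look like, reusing the placeholder stag-hunt payoffs from the motivation section (benefit B from cooperation only when the partner also cooperates, cost C always, B > C > 0; this is not a model of Trustchain itself):

```python
# Replicator dynamics on the share x of cooperators in a large population.
# Under the placeholder payoffs, cooperators earn x*B - C on average and
# free-riders earn 0, so cooperation grows exactly when x > C/B.
B, C = 3.0, 1.0
THRESHOLD = C / B  # analytic boundary between the free-riding and cooperation basins

def payoff_cooperate(x: float) -> float:
    return x * B - C

def payoff_freeride(x: float) -> float:
    return 0.0

def evolve(x: float, steps: int = 5000, dt: float = 0.01) -> float:
    """Discrete-time replicator dynamics: dx/dt = x * (f_C(x) - f_avg(x))."""
    for _ in range(steps):
        avg = x * payoff_cooperate(x) + (1 - x) * payoff_freeride(x)
        x += dt * x * (payoff_cooperate(x) - avg)
        x = min(max(x, 0.0), 1.0)
    return x

for x0 in (0.2, 0.4, 0.6):
    print(f"initial cooperators {x0:.2f} -> long-run share {evolve(x0):.2f}")
# Populations starting above C/B converge towards full cooperation, those below
# it collapse to free-riding: cooperation is stable but not globally attracting,
# which is the shape of the hypothesis above.
```

Whether the actual Trustchain payoffs have this bistable shape is exactly what the proposed analysis would need to establish.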

Workflow:

  • Read the costly-signaling literature and see whether the notion of costly signaling is applicable to the community-based weak-consensus idea in Trustchain.
  • If not, derive a new model; explore game theory to find which type of game suits our case.
  • Collect the literature on game-theoretic modeling of consensus-less blockchains.

Next sprint:

  • Initial listing of related literature
  • A game model of fraud in interest-based communities of Trustchain: initial definitions of players, profit/loss, and strategies (a possible starting skeleton is sketched below).
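
A purely illustrative starting point for that sprint deliverable; every name and payoff term below is a placeholder to be replaced by the actual model:

```python
from dataclasses import dataclass
from enum import Enum

class Strategy(Enum):
    VALIDATE = "validate"    # spend resources auditing records in the community
    FREE_RIDE = "free_ride"  # rely on others' auditing without contributing
    DEFRAUD = "defraud"      # attempt fraud (e.g. a double-spend) in the community

@dataclass
class Player:
    peer_id: str
    strategy: Strategy
    stake: float             # vested interest in the interest-based community

def payoff(player: Player, detection_prob: float, audit_cost: float,
           fraud_gain: float, fraud_penalty: float) -> float:
    """Toy payoff structure: validators pay an audit cost, free-riders pay
    nothing, and fraud pays only if it escapes detection by the auditors."""
    if player.strategy is Strategy.VALIDATE:
        return -audit_cost
    if player.strategy is Strategy.FREE_RIDE:
        return 0.0
    return (1.0 - detection_prob) * fraud_gain - detection_prob * fraud_penalty

print(payoff(Player("peer-a", Strategy.DEFRAUD, stake=10.0),
             detection_prob=0.9, audit_cost=1.0, fraud_gain=5.0, fraud_penalty=20.0))
# -> -17.5: with enough auditing, fraud has negative expected value
```
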
@cuileri cuileri self-assigned this Mar 11, 2020
@synctext synctext added this to the Backlog milestone Mar 12, 2020

synctext commented Mar 12, 2020

Remarks on this work from a year ago; they might be incorrect:
Deployed reputation systems in industry use distrust in their models, tied to features such as "mark as spam" or "report this profile".

@qstokkink
Contributor

It seems like the project proposal of this issue will never get its first sprint, and the technology that it addresses (Trustchain) has been removed from Tribler. Therefore, I'll close this.
