Incoming messages should be automatically scanned for references to blobs, and that information should be saved in the database. Blobs referenced in messages created by feeds up to `n` hops away should be automatically retrieved (probably just friends).
The following message content types should be scanned for blobs:
- `post`
- `about`
It is unclear whether other message types reference blobs. An alternative approach would be to parse the raw content bytes and find anything that matches the `&<hash>.sha256` format.
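A minimal Go sketch of that raw-bytes approach, assuming canonical refs (the base64 encoding of a 32-byte SHA-256 hash, which is 44 characters ending in `=`); the function name is illustrative:

```go
package blobs

import "regexp"

// Matches canonical SSB blob refs: "&" followed by the base64 encoding of
// a 32-byte SHA-256 hash (44 characters, the last one being "=" padding)
// followed by ".sha256".
var blobRefRegexp = regexp.MustCompile(`&[A-Za-z0-9+/]{43}=\.sha256`)

// FindBlobRefs scans raw message content for anything that looks like a
// blob ref, regardless of the message's content type.
func FindBlobRefs(rawContent []byte) []string {
	return blobRefRegexp.FindAllString(string(rawContent), -1)
}
```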
We also need a mechanism to request blob retrieval on demand. This will be used for blobs further away than `n` hops.
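One possible shape for that mechanism, sketched as a hypothetical interface (these names do not exist in the codebase):

```go
package blobs

import "context"

// Downloader is a hypothetical entry point for on-demand retrieval, e.g.
// invoked by the UI for a blob created by a feed more than n hops away.
type Downloader interface {
	// RequestBlob adds the blob to the want list and returns once a
	// connected peer supplies it or the context is cancelled.
	RequestBlob(ctx context.Context, ref string) error
}
```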
Blobs have a wants mechanism which allows peers to forward wants from other nodes. It is unclear if this is mandatory to implement; I think it could be skipped for now.
When it comes to creating blobs: right now, blob creation is largely decoupled from creating the messages that reference them. A blob is added, which produces a ref, and that ref is then included in a message. This means that if you give up on creating the message, you end up with an orphaned blob in your store. It may make sense to develop a new approach which forces the user to call some code to get a ref to a blob, but where the blob does not become live (or gets cleaned up after some time) unless the user actually embeds it in a message somehow.
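A minimal sketch of that idea, with all names hypothetical: `Add` returns a ref but keeps the blob pending, `Commit` is called once a message embedding the ref is actually published, and anything left uncommitted can eventually be garbage collected.

```go
package blobs

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// PendingBlobs keeps newly added blobs in a pending state until a message
// referencing them is published.
type PendingBlobs struct {
	pending map[string][]byte // ref -> data awaiting a message
	live    map[string][]byte // refs that were embedded in a message
}

func NewPendingBlobs() *PendingBlobs {
	return &PendingBlobs{
		pending: make(map[string][]byte),
		live:    make(map[string][]byte),
	}
}

// Add hashes the data and returns its ref, but the blob stays pending.
func (p *PendingBlobs) Add(data []byte) string {
	h := sha256.Sum256(data)
	ref := fmt.Sprintf("&%s.sha256", base64.StdEncoding.EncodeToString(h[:]))
	p.pending[ref] = data
	return ref
}

// Commit promotes a pending blob to live; call it when the message that
// embeds the ref is actually published.
func (p *PendingBlobs) Commit(ref string) {
	if data, ok := p.pending[ref]; ok {
		delete(p.pending, ref)
		p.live[ref] = data
	}
}

// GC drops blobs that never made it into a message; a real implementation
// would wait for some grace period before doing this.
func (p *PendingBlobs) GC() {
	p.pending = make(map[string][]byte)
}
```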
Progress tracker (incomplete):
- persist information about blobs
- scan messages for blobs
- replication manager? (prefetch some blobs, fetch others on demand)
- persist blobs
- forward remote wants (let's not do this for now)
- handle incoming `blobs.get`
- handle incoming `blobs.getSlice`
- handle incoming `blobs.createWants`
- reply with "has" when a "want" is received
- clean up wants processes after incoming and outgoing streams are disconnected
- after replicating a blob, check if someone would like to know about it (wants)
- clean up remote wants after all of our streams disconnect?
- persist an "on demand" want list
- revamp the want list so that retrieved blobs are removed from it (probably just moved to the on-demand want list for now)
- after the UI asks for a blob on demand, keep it in the want list for a specific amount of time (see the want list sketch after this list)
- refresh the want list when certain events are emitted
- redo on-demand want list cleanups (consult @czeslavo about a cron+command approach?)
- create blobs
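A minimal in-memory sketch of the on-demand want list items above, with expiry and removal-on-retrieval; persistence and the cron/command wiring are left out, and all names are hypothetical:

```go
package blobs

import (
	"sync"
	"time"
)

// WantList tracks on-demand wants, each with an expiry time.
type WantList struct {
	mu    sync.Mutex
	wants map[string]time.Time // blob ref -> expiry
}

func NewWantList() *WantList {
	return &WantList{wants: make(map[string]time.Time)}
}

// Add registers an on-demand want that expires after ttl.
func (l *WantList) Add(ref string, ttl time.Duration) {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.wants[ref] = time.Now().Add(ttl)
}

// Retrieved removes a want once the blob has been replicated.
func (l *WantList) Retrieved(ref string) {
	l.mu.Lock()
	defer l.mu.Unlock()
	delete(l.wants, ref)
}

// List returns refs that are still wanted, pruning expired entries.
func (l *WantList) List(now time.Time) []string {
	l.mu.Lock()
	defer l.mu.Unlock()
	var out []string
	for ref, expiry := range l.wants {
		if now.After(expiry) {
			delete(l.wants, ref)
			continue
		}
		out = append(out, ref)
	}
	return out
}
```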