
Create Interstellar M1.md (1) #394

Merged · 6 commits · merged into w3f:master on Apr 7, 2022

Conversation

nashjl
Contributor

@nashjl nashjl commented Mar 11, 2022

  • Create Interstellar M1.md

Milestone Delivery Checklist

Link to the application pull request: w3f/Grants-Program#734

nashjl and others added 6 commits March 11, 2022 16:18
* Create Interstellar M1.md

* Update Interstellar M1.md

* Update Interstellar M1.md

* Update Interstellar M1.md

* Update Interstellar M1.md

* u

* update

* update

* update

* Delete settings.json

* update

* update

* u

* u

* u

* u

* u

* u

* u

* u

* update

Co-authored-by: Nathan Prat <nathan.prat@gmail.com>
@semuelle semuelle changed the title Create Interstellar M1.md (#1) Create Interstellar M1.md (1) Mar 11, 2022
@semuelle
Member

Thank you for the delivery, @nashjl. We will look into it as soon as possible.

@ashlink11 ashlink11 self-assigned this Mar 14, 2022
@ashlink11
Contributor

Hi @nashjl, thank you for your milestone 1 submission! My name is Ashley and I'm working on your milestone 1 evaluation here at the W3F. I've learned a lot from reading about your project and I'm excited to test it.

I believe I've carefully installed all the prerequisites on my machine, but I keep getting this error and am not able to get the ID. Any suggestions? Thanks!

Screen Shot 2022-03-24 at 10 46 59 AM

@ashlink11
Contributor

Additional information: I did get the two Docker images running, but I am unable to compile the OCW demo:

ashley@testbox:~/substrate-offchain-worker-demo$ RUST_LOG="warn,info" cargo run -- --dev --tmp
   Compiling serde_derive v1.0.130
   Compiling ctor v0.1.21
   Compiling thiserror-impl v1.0.29
   Compiling futures-macro v0.3.17
   Compiling impl-trait-for-tuples v0.2.1
   Compiling derive_more v0.99.16
   Compiling tracing-attributes v0.1.16
   Compiling ref-cast-impl v1.0.6
   Compiling sp-debug-derive v3.0.0 (https://github.com/paritytech/substrate.git?tag=monthly-2021-10#bf9683ee)
   Compiling dyn-clonable-impl v0.9.0
   Compiling async-trait v0.1.51
   Compiling prost-derive v0.8.0
   Compiling pin-project-internal v1.0.8
   Compiling pin-project-internal v0.4.28
   Compiling scroll_derive v0.10.5
   Compiling enum-as-inner v0.3.3
   Compiling frame-support-procedural-tools-derive v3.0.0 (https://github.com/paritytech/substrate.git?tag=monthly-2021-10#bf9683ee)
   Compiling minicbor-derive v0.6.4
   Compiling libp2p-swarm-derive v0.24.0
   Compiling data-encoding-macro-internal v0.1.10
   Compiling nalgebra-macros v0.1.0
   Compiling strum_macros v0.20.1
   Compiling zeroize_derive v1.2.0
   Compiling parity-util-mem-derive v0.1.0
   Compiling structopt-derive v0.4.16
   Compiling pest_derive v2.1.0
   Compiling ring v0.16.20
   Compiling zstd-safe v4.1.1+zstd.1.5.0
   Compiling zstd-sys v1.6.1+zstd.1.5.0
   Compiling lz4-sys v1.9.2
   Compiling librocksdb-sys v6.20.3
   Compiling lz4 v1.23.2
error: no rules expected the token `aarch64_apple`
   --> /home/ashley/.cargo/registry/src/github.com-1ecc6299db9ec823/ring-0.16.20/src/cpu.rs:257:13
    |
165 |     macro_rules! features {
    |     --------------------- when calling this macro
...
257 |             aarch64_apple: true,
    |             ^^^^^^^^^^^^^ no rules expected this token in macro call

error[E0425]: cannot find value `AES` in module `cpu::arm`
   --> /home/ashley/.cargo/registry/src/github.com-1ecc6299db9ec823/ring-0.16.20/src/aead/aes.rs:381:65
    |
381 |         if cpu::intel::AES.available(cpu_features) || cpu::arm::AES.available(cpu_features) {
    |                                                                 ^^^ not found in `cpu::arm`
    |
help: consider importing this constant
    |
15  | use crate::cpu::intel::AES;
    |

error[E0425]: cannot find value `PMULL` in module `cpu::arm`
   --> /home/ashley/.cargo/registry/src/github.com-1ecc6299db9ec823/ring-0.16.20/src/aead/gcm.rs:315:26
    |
315 |             || cpu::arm::PMULL.available(cpu_features)
    |                          ^^^^^ not found in `cpu::arm`

error[E0425]: cannot find value `ARMCAP_STATIC` in this scope
   --> /home/ashley/.cargo/registry/src/github.com-1ecc6299db9ec823/ring-0.16.20/src/cpu.rs:235:41
    |
235 |             if self.mask == self.mask & ARMCAP_STATIC {
    |                                         ^^^^^^^^^^^^^ not found in this scope

   Compiling zstd v0.9.0+zstd.1.5.0
For more information about this error, try `rustc --explain E0425`.
error: could not compile `ring` due to 4 previous errors
warning: build failed, waiting for other jobs to finish...
error: build failed

Could you please see if those errors are on your end or otherwise hopefully provide me a bit of help to resolve the errors on my end? Thank you!

@n-prat
Contributor

n-prat commented Mar 24, 2022

Hi Ashley,
Have you by any chance tried to run cargo test before?
I am pretty sure this error pops up when the tests fail: the IPFS daemon is spawned but never cleaned up if Rust does not terminate cleanly.
If so, could you try killall /usr/local/bin/ipfs (use the correct path) and try again?
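For reference, a minimal recovery sketch (the binary path is an assumption; adjust it to your install):

# kill any stray IPFS daemon left over from an interrupted test run, then retry
killall /usr/local/bin/ipfs   # or simply: killall ipfs
cargo test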

@n-prat
Contributor

n-prat commented Mar 24, 2022

For substrate-offchain-worker-demo, are you on the interstellar branch?

If yes, what kind of machine are you using?

Maybe you are hitting rust-lang/rust#95267.

I am using:

pratn@DESKTOP-D8U39SO:/mnt/c/Users/nat$ rustup show
Default host: x86_64-unknown-linux-gnu
rustup home:  /home/pratn/.rustup

installed toolchains
--------------------

stable-x86_64-unknown-linux-gnu (default)
nightly-x86_64-unknown-linux-gnu

installed targets for active toolchain
--------------------------------------

thumbv6m-none-eabi
x86_64-unknown-linux-gnu

active toolchain
----------------

stable-x86_64-unknown-linux-gnu (default)
rustc 1.59.0 (9d1b2106e 2022-02-23)

@ashlink11
Contributor

Hi @nathanprat. That was very helpful, thank you. I was on the wrong branch and was not using your toolchain, so that solved some errors already. But let me go back to the prerequisites, take it step-by-step again, and first test with Docker before trying to compile from source.

I believe I may be stuck on this instruction in your prerequisites:

"if you intend to use Docker you SHOULD be sure it is reachable by the containers eg /ip4/0.0.0.0/tcp/5001"

Sorry, I'm relatively new here at W3F and I'm wondering if you could please explain a bit more.

Does this mean that before I launch the IPFS daemon I need to have the docker images running already?

I believe I have the docker images running:
Screen Shot 2022-03-24 at 1 04 04 PM
Screen Shot 2022-03-24 at 1 03 52 PM

This is how my daemon looks:
Screen Shot 2022-03-24 at 1 04 36 PM

This is how my ports look:
Screen Shot 2022-03-24 at 1 04 47 PM

Does it look like I have the proper addresses/ports? Should there be these 0.0.0.0 addresses or should I replace these with localhost 127.0.0.1? Or with something else? Should I change the docker ports from 3000 to 5001?

Thanks so much for your patience, quick response, & providing step-by-step instructions!

@n-prat
Contributor

n-prat commented Mar 24, 2022

You need to have IPFS running when you end up calling the API routes.
But you can start IPFS after the containers, or before, as you prefer.

Your IPFS daemon says API server listening on /ip4/0.0.0.0/tcp/5001, so you should be OK.

What you may need to change is the docker run parameter --ipfs-server-multiaddr: it MUST point to the address of the machine running the IPFS daemon (i.e. the host of the Docker containers), which is usually 172.17.0.1.
Check with ip addr:
[screenshot]
If instead of 172.X.X.X you have 192.168.X.X, you need to use --ipfs-server-multiaddr 192.168.0.1.

If you are stuck, you may try to compile from source and run cargo test: if that works, it means it is just an issue of configuring the Docker networking.
Note: cargo test will handle the IPFS daemon; you do not need to run it in the background.
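To make the --ipfs-server-multiaddr part concrete, a rough sketch (the image name is a placeholder; only the multiaddr value matters here):

# find the address of the Docker bridge on the IPFS host, usually 172.17.0.1
ip addr show docker0
# pass it to each container (placeholder image name)
docker run -it <api-container-image> --ipfs-server-multiaddr /ip4/172.17.0.1/tcp/5001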

@ashlink11
Contributor

ashlink11 commented Mar 24, 2022

Hi, I just saw your new comment, and I plan to get to work with those tips!!

By the way, looking ahead so I can run your cargo test: I have cmake version 3.16.3 and I'm having a hard time updating to > 3.22. I've tried different ways of uninstalling and reinstalling cmake, but I haven't figured out the proper way yet. Any help on that front is also much appreciated. I know you provided detailed instructions and I'm sorry they aren't working for me. Here is my machine info:

Welcome to Ubuntu 20.04.4 LTS (GNU/Linux 5.4.0-100-generic x86_64)

I believe we can get through this together! 😄

By the way, fyi, I also ran these commands:

rustup override set 1.59.0
rustup override set nightly-2022-02-23
rustup target add thumbv6m-none-eabi --toolchain nightly-2022-02-23

which yielded a rustup show of:

ashley@testbox:~/substrate-offchain-worker-demo$ rustup show
Default host: x86_64-unknown-linux-gnu
rustup home:  /home/ashley/.rustup

installed toolchains
--------------------

stable-x86_64-unknown-linux-gnu
nightly-2022-02-23-x86_64-unknown-linux-gnu
nightly-x86_64-unknown-linux-gnu (default)
1.59.0-x86_64-unknown-linux-gnu

active toolchain
----------------

1.59.0-x86_64-unknown-linux-gnu (directory override for '/home/ashley/substrate-offchain-worker-demo')
rustc 1.59.0 (9d1b2106e 2022-02-23)

That solved a bunch of errors, but I still have some errors:

error: failed to run custom build command for `pallet-ocw v1.0.0 (/home/ashley/substrate-offchain-worker-demo/pallets/ocw)`

Caused by:
  process didn't exit successfully: `/home/ashley/substrate-offchain-worker-demo/target/debug/build/pallet-ocw-b56bac817699f929/build-script-build` (exit status: 1)
  --- stderr
  Error: Custom { kind: Other, error: "protoc failed: Could not make proto path relative: deps/protos/api_garble/api.proto: No such file or directory\n" }
warning: build failed, waiting for other jobs to finish...
error: build failed

So I'm wondering, perhaps I have the wrong directory organization? Or perhaps this is a cmake error. Perhaps this is all irrelevant. Thanks for explaining the testing more for me!

Thanks for your help!

@n-prat
Contributor

n-prat commented Mar 24, 2022

The last error deps/protos/api_garble/api.proto: No such file or directory means you did not clone the repo with --recursive, so you have no submodules.
You can fix it with something like git submodule update --init --recursive.

For CMake, yes, it is not the cleanest; you basically need to download the pre-compiled binaries and add them to your PATH.
See for example # prereq: install CMake in https://github.com/Interstellar-Network/api_circuits/blob/wrapper-build-refacto/Dockerfile, or grab the binaries directly.
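Roughly, as a sketch (assuming the Kitware pre-built installer; the 3.22.3 version and /opt/cmake prefix match what is used later in this thread):

# inside the already-cloned repo: fetch the missing submodules
git submodule update --init --recursive

# pre-compiled CMake, then put it on PATH
wget https://github.com/Kitware/CMake/releases/download/v3.22.3/cmake-3.22.3-linux-x86_64.sh
sudo sh cmake-3.22.3-linux-x86_64.sh --skip-license --prefix=/opt/cmake
export PATH="/opt/cmake/bin:$PATH"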

@ashlink11
Contributor

Thank you. Planning to fix the missing submodules next. Thanks for the Dockerfile with the additional CMake commands too.

Btw, I checked with ip addr and I see this, which is a bit inconclusive for me, but hopefully entry 6: ... inet 172.X.X.X is the right one.

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 04:42:1a:a8:18:74 brd ff:ff:ff:ff:ff:ff
    inet 142.132.197.242/32 scope global enp7s0
       valid_lft forever preferred_lft forever
    inet6 2a01:4f8:261:402a::2/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::642:1aff:fea8:1874/64 scope link 
       valid_lft forever preferred_lft forever
3: br-34fbda5bf28a: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:dc:8d:68:89 brd ff:ff:ff:ff:ff:ff
    inet 172.28.0.1/24 brd 172.28.0.255 scope global br-34fbda5bf28a
       valid_lft forever preferred_lft forever
    inet6 fe80::42:dcff:fe8d:6889/64 scope link 
       valid_lft forever preferred_lft forever
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:a5:0c:79:c8 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:a5ff:fe0c:79c8/64 scope link 
       valid_lft forever preferred_lft forever
6: br-1c32262cdd57: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:3b:33:b4:16 brd ff:ff:ff:ff:ff:ff
    inet 172.23.0.1/16 brd 172.23.255.255 scope global br-1c32262cdd57
       valid_lft forever preferred_lft forever
1038: vethcc96564@if1037: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 8a:36:e9:e4:81:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::8836:e9ff:fee4:8104/64 scope link 
       valid_lft forever preferred_lft forever
782: br-6c81b06cf292: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:eb:0d:a7:2a brd ff:ff:ff:ff:ff:ff
    inet 192.168.32.1/20 brd 192.168.47.255 scope global br-6c81b06cf292
       valid_lft forever preferred_lft forever
    inet6 fe80::42:ebff:fe0d:a72a/64 scope link 
       valid_lft forever preferred_lft forever
1040: veth906d3ea@if1039: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 1a:ff:3f:a3:23:33 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::18ff:3fff:fea3:2333/64 scope link 
       valid_lft forever preferred_lft forever
787: br-89de7a4374b2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:14:db:ce:6c brd ff:ff:ff:ff:ff:ff
    inet 192.168.48.1/20 brd 192.168.63.255 scope global br-89de7a4374b2
       valid_lft forever preferred_lft forever
    inet6 fe80::42:14ff:fedb:ce6c/64 scope link 
       valid_lft forever preferred_lft forever
792: br-e229354e83a9: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:f2:05:89:6e brd ff:ff:ff:ff:ff:ff
    inet 192.168.64.1/20 brd 192.168.79.255 scope global br-e229354e83a9
       valid_lft forever preferred_lft forever
    inet6 fe80::42:f2ff:fe05:896e/64 scope link 
       valid_lft forever preferred_lft forever
654: br-0fe56d22da8f: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:e3:8f:a8:2d brd ff:ff:ff:ff:ff:ff
    inet 172.25.0.1/16 brd 172.25.255.255 scope global br-0fe56d22da8f
       valid_lft forever preferred_lft forever
    inet6 fe80::42:e3ff:fe8f:a82d/64 scope link 
       valid_lft forever preferred_lft forever

@n-prat
Contributor

n-prat commented Mar 24, 2022

That is weird.
But you can ignore that for now.
Once your OCW demo is up and running (with the frontend), you will get an error if it cannot communicate with the containers. (NOTE: this happens when you try the extrinsics; simply starting the worker does not use the API.)

@ashlink11
Contributor

I tried deleting and re-cloning to try to build from source.

I am getting this error:

ashley@testbox:~$ sudo git clone --recursive https://github.com/Interstellar-Network/api_circuits.git
Cloning into 'api_circuits'...
remote: Enumerating objects: 100, done.
remote: Counting objects: 100% (100/100), done.
remote: Compressing objects: 100% (64/64), done.
remote: Total 100 (delta 44), reused 81 (delta 32), pack-reused 0
Receiving objects: 100% (100/100), 58.87 KiB | 7.36 MiB/s, done.
Resolving deltas: 100% (44/44), done.
Submodule 'deps/lib_circuits' (git@github.com:Interstellar-Network/lib_circuits.git) registered for path 'deps/lib_circuits'
Submodule 'deps/internal/protos' (git@github.com:Interstellar-Network/protos.git) registered for path 'deps/protos'
Cloning into '/home/ashley/api_circuits/deps/lib_circuits'...
Warning: Permanently added the ECDSA host key for IP address '140.82.121.3' to the list of known hosts.
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
fatal: clone of 'git@github.com:Interstellar-Network/lib_circuits.git' into submodule path '/home/ashley/api_circuits/deps/lib_circuits' failed
Failed to clone 'deps/lib_circuits'. Retry scheduled
Cloning into '/home/ashley/api_circuits/deps/protos'...
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
fatal: clone of 'git@github.com:Interstellar-Network/protos.git' into submodule path '/home/ashley/api_circuits/deps/protos' failed
Failed to clone 'deps/protos'. Retry scheduled
Cloning into '/home/ashley/api_circuits/deps/lib_circuits'...
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
fatal: clone of 'git@github.com:Interstellar-Network/lib_circuits.git' into submodule path '/home/ashley/api_circuits/deps/lib_circuits' failed
Failed to clone 'deps/lib_circuits' a second time, aborting

Are you familiar with these error messages? I am so sorry for all the trouble. I believe this is because I've SSH'd into my Ubuntu VM, and from there I don't have SSH access to GitHub set up, so I clone repos over HTTPS. It seems this is the issue? I'm working through this as best I can on my end. Perhaps if I can't get this working on my remote Ubuntu server, you could provide instructions so I could build from source on my MacBook Pro 2019 (Intel chip)? Again, I really appreciate your help.

@n-prat
Contributor

n-prat commented Mar 24, 2022

Ah yes, I had never checked the submodules. I guess you do indeed need to have git SSH configured.
Maybe doing something like https://stackoverflow.com/a/62615597/5312991, but the other way around, would work?

[url "https://github.com/"]
    insteadOf = ssh://git@github.com/

All the repos are public, so that should work, assuming ssh -T git@github.com works.

PS: you should avoid sudo for git clone because that probably ignores any key you have set up for your user.
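A possible one-liner variant of that rewrite, as a sketch (the submodule URLs in the error output use the scp-style git@github.com: prefix, so that is the prefix to rewrite):

# rewrite SSH-style submodule URLs to HTTPS for this user, then retry the submodules
git config --global url."https://github.com/".insteadOf "git@github.com:"
git submodule update --init --recursive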

@ashlink11
Contributor

@nathanprat Makes sense! I'm just barely setting up my new Ubuntu server and see that I need to spend some more time configuring git. I'm wrapping up for the day, and thanks for all the great help! I plan to continue tomorrow.

@ashlink11
Contributor

Hi @nathanprat, I'm planning to continue this evaluation on Monday. Hope you have a nice weekend!

@ashlink11
Contributor

Good day @nathanprat!

I set up ssh to github on my machine and was able to clone those repos recursively via ssh, thank you.

I'm still having issues with cmake. I'm getting Unpacking finished successfully, but when querying the version I still get 3.16.3 (see terminal output). Very sorry about this.

ashley@testbox:~$ sudo ./cmake-3.22.3-linux-x86_64.sh --skip-license --prefix=/opt/cmake/
CMake Installer Version: 3.22.3, Copyright (c) Kitware
This is a self-extracting archive.
The archive will be extracted to: /opt/cmake/

Using target directory: /opt/cmake/
Extracting, please wait...

Unpacking finished successfully
ashley@testbox:~$ cmake -version
cmake version 3.16.3

CMake suite maintained and supported by Kitware (kitware.com/cmake).

I've also tried every way I could find online to update this. I feel quite blocked and might have to discuss this with my team to see how we can proceed. I am not able to run cargo build, and I get cmake compilation errors.

In the meantime, do you happen to have testing instructions for Mac? Do you think it's possible to test with Mac?

@n-prat
Contributor

n-prat commented Mar 28, 2022 via email

@ashlink11
Contributor

You're correct. I ran which cmake and got /usr/bin/cmake.

Sorry, I'm relatively new to using Linux/GNU again. I tried the command export PATH="$HOME/opt/cmake/bin:$PATH", but that didn't work. Do you have another suggestion for adding it to my PATH?

@ashlink11
Contributor

ashlink11 commented Mar 28, 2022

I am able to run the IPFS daemon and your two Docker nodes, and then I am able to run your Substrate chain! It's nice to see it producing blocks here locally on my Ubuntu VM. I need to set up port forwarding properly via VS Code to my Mac so I can run a Substrate frontend at http://localhost:5001/ -- unless you have a better suggestion, I think I need to check with my team tomorrow. Thanks again for your help!

@n-prat
Contributor

n-prat commented Mar 29, 2022

The export PATH="$HOME/opt/cmake/bin:$PATH" should be without $HOME, e.g. export PATH="/opt/cmake/bin:$PATH", because you are installing directly in /opt/, not under your $HOME. [You may need to restart VS Code if you put that in your .bashrc.]
Note that you should probably remove the system's CMake (e.g. apt-get remove cmake) because that may be confusing (and there is no point in keeping the old version around).
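Something along these lines, as a sketch (assuming bash):

# persist the new CMake on PATH, then reload the shell (or restart VS Code)
echo 'export PATH="/opt/cmake/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
cmake --version               # should now report 3.22.x
sudo apt-get remove cmake     # optional: drop the old system CMake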

When you have the frontend + OCW running you should see "Nothing to do" messages in the OCW's console for both OCW pallets.
You need to get things started by using the extrinsics, cf. this.
If you do have issues with the port forwarding, you will see an error in the console instead of a success message.

@ashlink11
Contributor

export PATH="/opt/cmake/bin:$PATH" worked! Now I have the proper cmake version, thanks!

I got the ports properly forwarded and the Substrate template running locally, and I can interact with your substrate-offchain-worker-demo blockchain!

Screen Shot 2022-03-29 at 4 29 54 PM

I am now stuck on step 1 of your demo. I know I'm relatively new to Linux and the Polkadot ecosystem, but I still find your testing guide assumes a lot of steps, which has made it extremely time-intensive to evaluate your project.

What I did for step 1 was first try just creating the adder.v file in the demo repo and then running the curl command, which didn't work; then I tried ipfs add adder.v followed by curl, but I still get curl: (26) Failed to open/read local data from file/application.

At this point I've already had to ask for so many clarifications. I'm really sorry, but for the sake of both our time, I'm going to have to ask you to take some time and make your testing guide truly step-by-step per our grant requirements, perhaps with a video, screenshots, or more text, whatever is easiest for you. Your testing guide is already very in-depth and very detailed, which I very much appreciate; it's just not quite detailed enough for me to follow effectively. I'm so sorry about this once again and really hope you understand. I appreciate the opportunity to test your very interesting technology, and I only wish to understand how it works. Thanks for all the massive time, effort and heart you've put into this project and this ecosystem. I'm looking forward to continuing testing and finishing your evaluation ASAP!

@n-prat
Contributor

n-prat commented Mar 30, 2022

There was a little typo: the guide had curl -X POST -F file=@/adder.v "http://127.0.0.1:5001/api/v0/add?progress=true" when it should have been curl -X POST -F file=@adder.v "http://127.0.0.1:5001/api/v0/add?progress=true" (i.e. an extra /).

Beyond that, https://book.interstellar.gg/M1_demo_tutorial.html#step-1-add-the-masterconfig-verilogfilev-in-ipfs should work.
I will ask someone to do another pass on the testing guide, but it should get you most of the way through testing the whole milestone even in its current state.

@n-prat
Contributor

n-prat commented Mar 30, 2022

but I still find your testing guide assumes a lot of steps, which has made it extremely time-intensive to evaluate your project.

I am sorry to hear that, but the Docker version of the evaluation should not take much time at all.
What steps are missing?
Indeed, we may have missed things: as devs it is easy to write a guide in a developer-centric way and miss steps for non-developers.
I would gladly make changes if you can point me in the right direction.

Overview of the "Docker testing"

Global Prerequisites

NOTE: outside the scope of evaluating our own milestone; this is general

  • set up the Rust development environment
  • set up Docker

Prerequisites

Testing

I will grant you that setting up what is needed to compile from source is a bit of a pain, but that is not required for the Docker part.
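In shell terms, the Docker path is roughly the following sketch (image names are placeholders; the real images and their parameters are in the testing guide):

ipfs daemon &                                                    # IPFS API on /ip4/0.0.0.0/tcp/5001
docker run -d <api_circuits-image> --ipfs-server-multiaddr /ip4/172.17.0.1/tcp/5001
docker run -d <api_garble-image> --ipfs-server-multiaddr /ip4/172.17.0.1/tcp/5001
cargo run -- --dev --tmp                                         # in substrate-offchain-worker-demo, interstellar branch
# plus the generic Substrate front-end (substrate-front-end-template): yarn install && yarn start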

@ashlink11
Contributor

@nathanprat thanks for updating the curl command!

I still have remaining questions:
Am I supposed to ipfs add adder.v?
Where is the verilogCid? Can you show me?

In general, it would be helpful if you would follow the testing guide yourself to double-check for bugs beforehand, or you could perhaps give a screenshot of what the expected outcome is after each step so I could verify I did each step correctly.

There are two OCW pallets, but could you show a video or screenshot of the terminal of how it's supposed to look?

Looking at step 2, I wish there was a lot more explanation because I don't really know what to actually put in the terminal, which terminal window (because I have so many running), which directory I should be in, etc.

It seems there are so many steps in between the steps you have in the tutorial. Maybe you could slow things down for me and take it step by step? A screencast video of you going through the tutorial would maybe be really nice! Thank you!

@n-prat
Contributor

n-prat commented Mar 30, 2022

Am I supposed to ipfs add adder.v?
Where is the verilogcid? Can you show me?

We have tested only with curl -X POST -F file=@adder.v "http://127.0.0.1:5001/api/v0/add?progress=true", but you can probably do the same thing with the IPFS CLI.
The curl should return something like:

{"Name":"adder.v","Bytes":527}
{"Name":"adder.v","Hash":"QmYAFySLrUXwf4wVb7QGMxA7nXAoueXtQCYpyReFp5NKsx","Size":"538"}

and you have to copy-paste the Hash value for the next step (i.e. the extrinsic at step 2).
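If it helps, a small sketch to capture just that hash (assuming jq is installed; dropping progress=true so only the final JSON line comes back):

VERILOG_CID=$(curl -s -X POST -F file=@adder.v "http://127.0.0.1:5001/api/v0/add" | jq -r .Hash)
echo "$VERILOG_CID"   # e.g. QmYAFySLrUXwf4wVb7QGMxA7nXAoueXtQCYpyReFp5NKsx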

There are two OCW pallets, but could you show a video or screenshot of the terminal of how it's supposed to look?

There are already screenshots for the OCW steps (2 and 3).

Looking at step 2, I wish there was a lot more explanation because I don't really know what to actually put in the terminal, which terminal window (because I have so many running), which directory I should be in, etc.

I will grant you that it is not necessarily obvious what is an input and what is an output in the demo guide.
Thank you for your input; we will work on that.

It seems there are so many steps in between the steps you have in the tutorial.

The guide was missing a branch for git clone, but I really do not think there are missing steps.
I will do a pass right now just to be sure.

@n-prat
Contributor

n-prat commented Mar 30, 2022

Quick pass done.
There were indeed a few fixes to make, sorry about that.

Launch a generic Substrate Front-end

-> directly link to https://github.com/substrate-developer-hub/substrate-front-end-template#using-the-template instead

Launch substrate demo chain with OCW

Missing --branch and --recursive (and cd):

git clone --branch=interstellar --recursive git@github.com:Interstellar-Network/substrate-offchain-worker-demo.git
cd substrate-offchain-worker-demo/

Step 1: add the master/config verilogfile.v in IPFS

Clarify:

create a file `adder.v`, e.g.:
- use your editor of choice, e.g. `code adder.v` or `nano adder.v`, etc.
- copy-paste the content below
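Putting step 1 together, a sketch (the adder.v body itself comes from the tutorial and is not repeated here):

nano adder.v     # paste the Verilog content from the tutorial, then save
curl -X POST -F file=@adder.v "http://127.0.0.1:5001/api/v0/add?progress=true"
# copy the "Hash" value from the JSON response: that is the verilogCid used in step 2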

@ashlink11
Contributor

@nathanprat Thanks for reviewing! Resuming testing now.

@ashlink11
Contributor

Hi @nathanprat, I was happy to successfully POST the adder.v and receive a verilogCid, thanks, so I was able to complete step 1.

On step 2 (specifically 2.1), you didn't include instructions on whether to submit the transaction 'signed', 'unsigned', or 'sudo', which was troubling, since we require step-by-step instructions. I decided to try unsigned first (which failed), then sudo, resulting in this:

Screen Shot 2022-03-30 at 11 32 41 AM

Then you say

GCF will generate the related skcd logical circuit file, add it in IPFS and send back its hash i.e skcdCid to the ocwExample pallet.

so I assume the skcdCid is the finalized block hash, since this is the only hash shown in your instructions and since you said the instructions are thorough and complete; therefore, in my case, 0x9f4777ff5b2caf0837a9046e74da3d3884f6598491cfeed24b9e80c097d1af2b should now be located in my OCW logs.

I found it in my OCW log here:
Screen Shot 2022-03-30 at 11 38 20 AM

But this doesn't match the output of your step 2.2, and once again I'm blocked and have had to spend a lot of time guessing how to set up the demo.

Once again, your instructions do not seem to be accurate or thorough enough for my level of understanding. I'm really sorry, but I plan to work on other evaluations at this time instead. Please update your testing guide so it teaches how to use your technology, specifically how to set up a development/testing environment, and goes step by step instead of skipping steps and assuming prior knowledge.

I find myself very lost, just trying to execute steps without understanding what the purpose of the technology is or why I'm doing what I'm doing. I'm humbly asking again: please go slowly, step by step. Please be clear and thorough. Please teach what's going on. Please make your technology accessible to people other than experts in your specific technology.

I'm really sorry I'm not an expert in your specific tech stack, I wish I was, but I really need help understanding what's going on. I really wish to understand your technology because I find it very interesting; I loved learning logic circuits in school and I am a seasoned developer, just not in your technologies specifically. I think your concepts around cryptography, TEEs, OCWs and the wallet are fascinating. Thank you, and sorry for the trouble.

@nashjl
Contributor Author

nashjl commented Mar 30, 2022

Hi Ashley,

Sorry we did not explicitly mention in the tutorial that we need a signed extrinsic/transaction, but this information is in the API documentation: https://book.interstellar.gg/GCF_API.html#flowchart-and--substrate-gcf-pallets
Thanks to you, there is now a note about this in the tutorial too.

The purpose of the demo is to show that our APIs are working properly and as expected with:

  • a substrate framework
  • generic circuits (like the adder.v example or any VHDL file)

I understand that you are a bit frustrated because you are not familiar with both our technology and the Substrate framework.
However, you are now at the end: use a signed transaction for the last two steps. It should take less than 5 minutes.

Anyway, thanks a lot for your interest and your kind words about our technology. Really appreciated. We designed it with love & passion to have an impact on both the wallet's security and ease of use (yes, not kidding 😉).

@ashlink11
Contributor

@nashjl Thank you!! Big sigh of relief reading your nice comment and thanks for all the help so far 😅 Working on it ASAP.

@nashjl
Contributor Author

nashjl commented Apr 5, 2022

Hi Ashley (@cruikshankss)

Hope you are doing well.
Actually, we are very close to delivering M2, and it's important for us to have M1 finalized by that time.

Could you please let us know when you plan to validate M1 and whether you need any help?

Kind regards

@ashlink11
Contributor

Hello @nashjl,

Thanks for the update. I'm very happy to hear you are close to delivering M2. I plan to finish your evaluation in the next 24 hours. I will let you know if I need any help, thanks!

@nashjl
Contributor Author

nashjl commented Apr 6, 2022

Hi Ashley,

Thanks a lot for your message. We hope you will keep in mind that the M1 target audience for this demo is mostly developers familiar with VHDL, C/C++, Rust, and Substrate: people already familiar with cryptography, garbled circuits, and multi-party computation who aim to adapt this low-level layer for their own purposes, plus potential contributors.

Later (starting with M3), we will target a larger audience of Rust/Substrate developers who aim to use our Transaction Validation Protocol, thanks to our future developer-friendly TVP APIs that will hide the low-level complexity of GCF and GC production.

@ashlink11
Contributor

@nashjl Still working on it this evening. Thanks for the additional info!

@ashlink11
Contributor

@nashjl, I've just accepted your milestone 1 and here is my evaluation. I've forwarded your invoice to the invoices team. Thank you again!

@nashjl
Contributor Author

nashjl commented Apr 7, 2022

Hi Asley (@cruikshankss),

Thanks a lot!!!
Regarding the invoice, did you forward the updated invoice-1, submitted on March 31, that includes our VAT ID and registration number? The correct file name is `W3F-Interstellar-Invoice-1 with VATID.pdf`.

@n-prat
Contributor

n-prat commented Apr 7, 2022

Awesome! Thanks for your time!

@ashlink11
Contributor

@nashjl, yes, I saw there were two M1 invoice submissions. I looked at them both and forwarded the most recent one with your VAT ID and made note of this to the invoices team as well. Thanks for double-checking!

@ashlink11 ashlink11 merged commit 1a97b5f into w3f:master Apr 7, 2022
@github-actions

github-actions bot commented Apr 7, 2022

Congratulations on completing the first milestone of this grant! As part of the Grants Program, we want to help grant recipients acknowledge their grants publicly. To that end, we’ve created a badge for projects that successfully deliver their first milestone. Note that it must only be used within the context of the delivered work, so please do not display it on your team or project's homepage unless accompanied by a short description of the grant.

Furthermore, you're now welcome to announce the grant publicly. Please remember to observe the foundation’s guidelines in doing so. In case you haven't done so yet, you may also reach out to grantsPR@web3.foundation for feedback on your announcement and cross-promotion.

Thank you for your contribution and good luck with the remaining milestones, if any! As usual, please let us know if you run into any delays by leaving a comment on the application PR, or directly submitting an amendment.

@nashjl
Contributor Author

nashjl commented Apr 7, 2022

Thank you very much for everything, Ashley!
We appreciated the exchange, your interest in our overall technology, and your responsiveness. It was helpful for us!

@ashlink11
Contributor

For the record, additional invoicing discussion occurred here on the M1 evaluation conversation thread: #417
