
Change DAG read offsets every iteration to prevent ASIC optimization #13

Closed

Conversation


mbevand commented Nov 15, 2018

No description provided.

ifdefelse added a commit that referenced this pull request Nov 16, 2018
Based on suggestion from PR #13
ifdefelse (Owner) commented Nov 16, 2018

This is a unique ASIC structure that we had not analyzed before. I don't think that it's a practical system, but it is plausible.

In both Ethash and the 0.9.0 ProgPoW spec a single lane will only ever access the same word(s) within the data that's loaded from the DAG. This means the DAG can be split into 32 (Ethash) or 16 (ProgPoW) chunks that each reside on a single chip. Each chip would require ~128 MB (Ethash) or ~256 MB (ProgPoW) of eDRAM for the system to hold a 4 GB DAG and be viable for a few years. For reference, the IBM Power9 has 120 MB of eDRAM, so this is plausible.
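
For illustration, here is a minimal sketch of the 0.9.0-style inner load (simplified, not the exact spec code; `entry_offset` is a placeholder for the selected DAG entry's base word index):

```cpp
// Sketch only: simplified 0.9.0-style DAG entry load.
// Lane l always consumes the same PROGPOW_DAG_LOADS words of every
// 256-byte DAG entry, so those words can live permanently in eDRAM
// on lane l's chip.
for (int l = 0; l < PROGPOW_LANES; l++)
    for (int i = 0; i < PROGPOW_DAG_LOADS; i++)
        dag_entry[l][i] = g_dag[entry_offset + l * PROGPOW_DAG_LOADS + i];
```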

In both algorithms every load address comes from a different lane, so the address needs to be broadcast across lanes. The most reasonable structure would be a central coordinator chip that broadcasts address data to all the per-lane chips. A DAG load across all the chips that consumes 128 bytes (Ethash) or 256 bytes (ProgPoW) requires a 4-byte address to be broadcast.

For ProgPoW a GPU + DRAM with 256 GB/sec of memory bandwidth could be replaced at equal performance with 16 eDRAM chips and 1 central coordinator in a system with 4 GB/sec of broadcast bandwidth. Without a lot more analysis it's unclear what the overall performance, power, or cost of this multi-chip system would be.
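
Rough arithmetic behind the 4 GB/sec figure (assuming exactly one 4-byte address broadcast per 256-byte DAG entry, as described above):

```cpp
#include <cstdio>

int main() {
    const double dag_bandwidth_gbs   = 256.0; // GPU-class DAG bandwidth, GB/sec
    const double data_bytes_per_load = 256.0; // ProgPoW DAG entry size
    const double addr_bytes_per_load =   4.0; // broadcast address size
    // Broadcast traffic scales with the address-to-data ratio of each load.
    const double broadcast_gbs =
        dag_bandwidth_gbs * addr_bytes_per_load / data_bytes_per_load;
    std::printf("%.1f GB/sec of broadcast bandwidth\n", broadcast_gbs); // prints 4.0
    return 0;
}
```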

Since it's easy enough to break this architecture, we've decided to update the spec. The 0.9.1 version XORs the loop iteration with the lane_id when accessing within the loaded DAG element. Your suggestion to ADD the two means lane data could be rotated across a unidirectional ring bus while the DAG data remained in place. By doing an XOR there would need to be a full mesh network or high-bandwidth switch so any chip could shuffle data to any other chip, which almost certainly makes the system impractical.
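
Concretely, the two variants differ only in how a lane picks its slice of the loaded entry each iteration (a sketch using the spec's `lane_id` and `loop` names; `loaded_entry` and `lane_dag_words` are placeholders, not spec identifiers):

```cpp
// 0.9.1 behavior: the slice of the 256-byte entry that a lane consumes
// changes every loop iteration.
uint32_t src_xor = (lane_id ^ loop) % PROGPOW_LANES; // adopted: not a rotation, so moving the
                                                     // data needs a full mesh or a switch
uint32_t src_add = (lane_id + loop) % PROGPOW_LANES; // PR suggestion: a fixed rotation,
                                                     // serviceable by a one-way ring bus
for (int i = 0; i < PROGPOW_DAG_LOADS; i++)
    lane_dag_words[i] = loaded_entry[src_xor * PROGPOW_DAG_LOADS + i];
```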

Some quick benchmarking shows this makes no performance difference on either AMD or NVidia hardware.

ifdefelse closed this Nov 20, 2018
hackmod pushed a commit to EthersocialNetwork/ethminer-ProgPOW that referenced this pull request Nov 22, 2018
Based on suggestion from PR ifdefelse#13
hackmod pushed a commit to EthersocialNetwork/ethminer-ProgPOW that referenced this pull request Dec 9, 2018
Based on suggestion from PR ifdefelse#13
hackmod pushed a commit to EthersocialNetwork/ethminer-ProgPOW that referenced this pull request Dec 14, 2018
Based on suggestion from PR ifdefelse#13
solardiz (Contributor) commented Apr 4, 2019

Just to have this thought written down where I think it belongs:

For the fix introduced in ProgPoW 0.9.1+ and described above to be effective, it's crucial that each lane's mix state be no smaller than the lane's DAG reads per loop iteration. Otherwise, inter-chip transfer of lanes' mix state between loop iterations would allow for the original attack at a fraction of the cost of full inter-chip DAG reads. Luckily and quite obviously, this condition holds, with quite some margin: we have 32 mix registers (128 bytes) but only 4 DAG reads (16 bytes) per lane.
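
In numbers, with the 0.9.x parameters (PROGPOW_REGS = 32 mix registers and PROGPOW_DAG_LOADS = 4 words per lane), the comparison is simply (a sketch, not spec code):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>

int main() {
    // Per-lane mix state an attacker would have to move between chips each
    // iteration if the mix were rotated instead of the DAG data:
    const std::size_t mix_state_bytes = 32 /* PROGPOW_REGS */      * sizeof(std::uint32_t); // 128
    // Per-lane DAG data consumed each iteration:
    const std::size_t dag_read_bytes  =  4 /* PROGPOW_DAG_LOADS */ * sizeof(std::uint32_t); //  16
    // The 0.9.1+ fix stays effective as long as mix_state_bytes >= dag_read_bytes.
    std::printf("mix state %zu bytes vs DAG reads %zu bytes per lane (%zux margin)\n",
                mix_state_bytes, dag_read_bytes, mix_state_bytes / dag_read_bytes);
    return 0;
}
```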

Maybe this should also be somewhere in ProgPoW documentation, as part of design rationale and constraints on parameter values.
