This is a single page with links to a bunch of large CSV files. It makes for a nice example of a web-scrape-to-data-crunching workflow, in either Python or plain old Bash+grep.
Last updated: 2020-09-08
Mirror page:
https://wgetsnaps.github.io/propublica-congress-expenditures/
Original page:
https://projects.propublica.org/represent/expenditures
See wgetsnap.sh for the code used to make the snapshot.
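For a Python flavor of the same scrape step, here is a minimal sketch (illustrative only, not the actual wgetsnap.sh) that pulls the .csv links out of the mirrored index page. It assumes the links appear as plain href="...csv" attributes in the page's HTML; relative links would additionally need urllib.parse.urljoin.

    import re
    import urllib.request

    # Mirrored index page (see the link above).
    INDEX_URL = "https://wgetsnaps.github.io/propublica-congress-expenditures/"

    # Grab the HTML and pull out every href that points at a .csv file.
    html = urllib.request.urlopen(INDEX_URL).read().decode("utf-8", errors="replace")
    csv_links = sorted(set(re.findall(r'href="([^"]+\.csv)"', html)))

    for link in csv_links:
        print(link)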
Sample CSV files:
https://projects.propublica.org/congress/assets/staffers/2020Q1-house-disburse-detail.csv
https://projects.propublica.org/congress/assets/staffers/2016Q4-house-disburse-detail.csv
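Once the file URLs are in hand, the crunching step can be as simple as streaming one of the quarterly detail files above and totaling spending. A minimal sketch; the CATEGORY and AMOUNT column names are assumptions here, so check the header row of the file you actually download and adjust accordingly.

    import csv
    import io
    import urllib.request
    from collections import Counter

    CSV_URL = ("https://projects.propublica.org/congress/assets/staffers/"
               "2020Q1-house-disburse-detail.csv")

    # Stream the (large) CSV rather than loading it all into memory.
    resp = urllib.request.urlopen(CSV_URL)
    reader = csv.DictReader(io.TextIOWrapper(resp, encoding="utf-8", errors="replace"))

    # CATEGORY and AMOUNT are assumed column names; verify against the real header.
    totals = Counter()
    for row in reader:
        try:
            totals[row.get("CATEGORY", "UNKNOWN")] += float(row.get("AMOUNT", "0").replace(",", ""))
        except ValueError:
            continue  # skip rows whose amount field is not numeric

    # Print the ten biggest spending categories.
    for category, total in totals.most_common(10):
        print(f"{total:>15,.2f}  {category}")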