v0.5 - Timelapses + On-Deck Jobs - ETA: August 2013
BACKGROUND WORKER
* report OS / uname -a / memory size / cpu speed for each token - include in the devices scan (see the sketch after this list).
* extend botqueue to support generic jobs with json payload
* bot/worker should have configuration to specify which job types it can process.
* job types: fabrication, toolpath generation, timelapse creation, 3d rendering, image processing, stl parsing
* main status: available / taken / qa / pass / fail / canceled
* sub-status (job specific)
* downloading / working / uploading / etc.
* admin area to keep an eye on job status.
* admin area to view active workers.
* move job-specific execution to separate classes in jobs subfolder
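
A rough sketch (in Python, as a bumblebee-side worker would be) of gathering the host details for the token report above; the field names and the optional psutil dependency are assumptions, not the actual BotQueue API.

    # Hedged sketch: collect host details to include in the devices scan.
    # Field names and the optional psutil dependency are illustrative only.
    import json
    import multiprocessing
    import platform

    def collect_host_info():
        info = {
            'os': platform.system(),                # e.g. "Linux"
            'uname': ' '.join(platform.uname()),    # rough equivalent of `uname -a`
            'cpu_count': multiprocessing.cpu_count(),
        }
        try:
            import psutil                           # optional, only if installed
            info['memory_bytes'] = psutil.virtual_memory().total
        except ImportError:
            info['memory_bytes'] = None
        return info

    if __name__ == '__main__':
        print(json.dumps(collect_host_info(), indent=2))
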
CLOUD SLICING
* Create an on-deck system to allow jobs to be pre-sliced either by the Pi or by workers in the cloud
* Every time a relevant action is triggered, the on-deck function will look at all of a user's bots, jobs, and queues and recalculate the on-deck jobs.
* The algorithm should set the on-deck job for each bot, generate a slice job if required, and invalidate any slice jobs that are no longer relevant (see the sketch after this list).
* Modify bumblebee to download (and possibly slice) on-deck jobs for quick printing in the background.
* this should probably be a single thread that loops through each bot, gets its on-deck job, and handles the download/slicing of it.
* Extend API to allow admin-authorized clients to grab and slice any available jobs.
* Add a config area on app token to specify what kind of token it is (bot, worker, slicer, etc)
* Set up a server with an instance of bumblebee that runs on EC2 and processes jobs.
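
A rough sketch of the on-deck pass described above; user.bots(), bot.set_on_deck(), job.create_slice_job(), and the other helpers are hypothetical stand-ins for whatever the real data layer provides.

    # Hedged sketch of the on-deck recalculation; all helpers are hypothetical.
    def recalculate_on_deck(user):
        for bot in user.bots():                         # every bot owned by the user
            job = next_available_job(bot)               # next queued job this bot can run
            bot.set_on_deck(job)
            if job is not None and job.needs_slicing() and job.slice_job is None:
                job.create_slice_job(bot.slice_config)  # hand slicing to a worker
        # invalidate slice jobs whose target job is no longer on deck anywhere
        for slice_job in user.pending_slice_jobs():
            if not slice_job.job.is_on_deck():
                slice_job.cancel()

    def next_available_job(bot):
        for queue in bot.queues():                      # queues in priority order
            for job in queue.available_jobs():
                if bot.can_run(job):
                    return job
        return None
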
TIMELAPSE GENERATION
* use the background worker system to generate timelapses of prints.
* automatically create a timelapse at the end of a successful print - watch your object grow
* timelapse interval should be calculated to last an exact time... 10 seconds or so.
* 30 frames/second * 10 seconds = 300 frames.
* take the list of captured images and divide by 300, then loop through them and keep every (total_count / 300)th frame.
* aka keep frames where index % (total_count / (30*10)) == 0 (see the sketch after this list)
* keep all images from webcam on error jobs - forensic viewing of failed jobs.
* automatically delete after 1 day.
* ffmpeg + x264 install: http://ffmpeg.org/trac/ffmpeg/wiki/UbuntuCompilationGuide
* ffmpeg timelapse from images: http://tatica.org/en/2013/02/15/timelapse/
* more timelapse commands: http://poohbot.com/2009/09/25/ffmpeg-for-time-lapse-sets-of-images-and-even-archiving/
* potential video player: http://videojs.com/
* delete all saved images after generating timelapse
* BONUS: synchronize temp readings + video - then synchronize images w/ temperatures on bot view page
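
A sketch of the frame-selection arithmetic and the ffmpeg step above; the directory layout, frame rate, and exact ffmpeg/x264 flags are assumptions to adapt from the linked guides.

    # Hedged sketch: keep ~300 frames so the clip runs ~10 s at 30 fps,
    # then encode with ffmpeg/x264. Paths and ffmpeg options are illustrative.
    import os
    import shutil
    import subprocess

    def build_timelapse(image_dir, out_file, fps=30, seconds=10):
        frames = sorted(f for f in os.listdir(image_dir) if f.endswith('.jpg'))
        target = fps * seconds                      # 30 * 10 = 300 frames
        step = max(1, len(frames) // target)        # keep every step-th frame
        keep = [f for i, f in enumerate(frames) if i % step == 0]

        staging = os.path.join(image_dir, 'timelapse')
        os.makedirs(staging, exist_ok=True)
        for i, name in enumerate(keep):             # renumber for a clean sequence
            shutil.copy(os.path.join(image_dir, name),
                        os.path.join(staging, 'frame%05d.jpg' % i))

        subprocess.check_call([
            'ffmpeg', '-y', '-framerate', str(fps),
            '-i', os.path.join(staging, 'frame%05d.jpg'),
            '-c:v', 'libx264', '-pix_fmt', 'yuv420p', out_file,
        ])
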
BONUS FEATURES
* WEB - Single unified queue view w/ auto-update
* combine the queue pages into a single list of all open jobs w/ tabs to filter by job status
* WEB - create page to show live shots from currently printing public bots or just shots from completed jobs
* API - Add callback url support for web-based apps
* WEB - add diff of slicer configs to slicejob page (current vs snapshot)
* BUMBLEBEE - Better / faster shutdown
------------------------------------------------------------------------------------
v0.6 - Websockets
* API - Websockets server
* define events to pass to clients
* subscribe to bots, jobs, or user?
* use autobahn or tornado for the python side (see the sketch after this list).
* CLIENT - support for websockets for realtime comms w/ server
* WEB - websocket client w/ transparent high bandwidth connection to local machines.
* WEB - control panel for controlling various machine parameters
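
A minimal Tornado-based sketch of the websockets server idea; the /ws path, the subscribe message format, and the broadcast helper are assumptions rather than a defined protocol.

    # Hedged sketch of an event socket using Tornado; message format is assumed.
    import json
    import tornado.ioloop
    import tornado.web
    import tornado.websocket

    class EventSocket(tornado.websocket.WebSocketHandler):
        clients = set()

        def open(self):
            EventSocket.clients.add(self)

        def on_close(self):
            EventSocket.clients.discard(self)

        def on_message(self, message):
            # e.g. {"subscribe": "bot:42"} to follow one bot's events
            self.subscription = json.loads(message).get('subscribe')

        @classmethod
        def broadcast(cls, channel, event):
            for client in cls.clients:
                if getattr(client, 'subscription', None) == channel:
                    client.write_message(json.dumps(event))

    if __name__ == '__main__':
        app = tornado.web.Application([(r'/ws', EventSocket)])
        app.listen(8888)
        tornado.ioloop.IOLoop.current().start()
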
------------------------------------------------------------------------------------
Long term wants:
* WEB - filament spool info (it keeps track of how big each machine's spool is/when a new one is installed and approximates how much is left based on volume info in gcode - see the sketch after this list)
* PREREQUISITE: worker bots to parse STL files for stats like volume, bounds, etc.
* add filament_volume field to slicejob or job
* WEB - print grouping using slic3r
* modify jobs to add an allow_grouping field
* look at job grabbing to allow multiple jobs to be grabbed
* look at bumblebee to allow multiple jobs to be grabbed
* add option to allow job to be grouped into a single print
* modify code to use slic3r --merge to create build plate.
* create high-level job group to hold currently running jobs?
* WEB - Public queue support
* WEB - Team support to allow other users to control bots or queues.
* WEB - full page statistics for bot / queue with graphs
* WEB - 100% working Amazon bootup script.
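
A sketch of approximating filament usage from gcode for the spool-tracking item above; it assumes absolute extrusion (E values reset via G92) and a known filament diameter, and glosses over retraction details.

    # Hedged sketch: sum forward E moves in gcode and convert length to volume.
    # Assumes absolute extrusion values; diameter and regex are illustrative.
    import math
    import re

    E_RE = re.compile(r'\bE(-?\d+\.?\d*)')

    def filament_usage(gcode_path, filament_diameter_mm=1.75):
        last_e = 0.0
        total_mm = 0.0
        with open(gcode_path) as f:
            for line in f:
                if line.startswith(('G0', 'G1')):
                    m = E_RE.search(line)
                    if m:
                        e = float(m.group(1))
                        if e > last_e:              # only count forward extrusion
                            total_mm += e - last_e
                        last_e = e
                elif line.startswith('G92'):        # extruder reset, e.g. "G92 E0"
                    m = E_RE.search(line)
                    if m:
                        last_e = float(m.group(1))
        area = math.pi * (filament_diameter_mm / 2.0) ** 2
        return {'length_mm': total_mm, 'volume_mm3': total_mm * area}
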
------------------------------------------------------------------------------------
Crazy ideas:
* BUMBLEBEE - GUI App?
High-level views:
Dashboard: all bots w/ current status and active jobs.
Add Bot: configure local bot settings (drivers, name, etc)
Bot Detail:
* all info available on this bot
* current print status
* pause print
* cancel print
* toggle bot status: online/offline/fixed/broken
* Use python+webkit for UI