-
That's correct! Unless the user is currently drawing something, the dry ink canvas contains what's currently visible on the screen.
I would suggest rendering the editor's image with a `CanvasRenderer`. For example, to render the visible content of the editor onto a canvas:

```js
import { Editor, CanvasRenderer } from 'js-draw';

const editor = new Editor(document.body);

// Adds the default toolbar
const toolbar = editor.addToolbar();

// Loads from SVG data
await editor.loadFromSVG(`
  <svg
    viewBox="0 0 500 500"
    width="500" height="500"
    version="1.1" baseProfile="full" xmlns="http://www.w3.org/2000/svg"
  >
    <path
      d="M500,500L500,0L0,0L0,500L500,500"
      fill="#aaa"
      class="js-draw-image-background"
    />
    <text
      style="transform: matrix(1, 0, 0, 1, 57, 192); font-family: serif; font-size: 32px; fill: #111;"
    >Testing...</text>
  </svg>
`);

toolbar.addSaveButton(() => {
  // Set up the canvas to be drawn onto.
  const canvas = document.createElement('canvas');
  const ctx = canvas.getContext('2d');

  // Ensure that the canvas can fit the entire rendering.
  const viewport = editor.viewport;
  canvas.width = viewport.getScreenRectSize().x;
  canvas.height = viewport.getScreenRectSize().y;

  // Render editor.image onto the canvas through the renderer.
  const renderer = new CanvasRenderer(ctx, viewport);
  editor.image.render(renderer, viewport);

  // Add the rendered canvas to the document.
  document.body.appendChild(canvas);

  // To make the canvas easier to see, hide the editor.
  editor.remove();
});
```
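From there, the standard Canvas API should give you the `ImageData` that q-floodfill expects, e.g.

```js
// Read back the pixels that were just rendered onto the canvas:
const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
```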
-
Thank you, Henry! Then the only part that I'd be a little unclear on is getting the filled content drawn back into the actual "image" (that is, js-draw's view of the image). It looks like with strokes that happens by doing […]. I see that there is a listener on […].

But I don't think I've seen js-draw have any sort of handling for a structured representation of image components yet. That is, once something is drawn, it's on the canvas and that's it -- there doesn't seem to be a remembered z-index or a way to reorder or move around layers of image components, which is what I think I was expecting, given this tree-based representation.

Assuming that's correct (once the ink is dried, it's just a canvas of pixels), I'm thinking maybe I can take the output of q-floodfill and just somehow draw it onto the root ImageNode? But I'm not clear on that. Maybe I should just overwrite the dry ink directly?
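To make that concrete, here's a rough sketch of what I'm imagining: wrap the flood-filled canvas in an image component and add it to the editor's image tree. This assumes js-draw exposes something like `ImageComponent.fromImage` and that `editor.image.addElement` returns a dispatchable command (both of those names are guesses on my part, so the real API may differ):

```js
import { ImageComponent, Mat33 } from 'js-draw'; // assumed exports

// Hypothetical sketch: add the flood-filled canvas back into the
// editor's component tree as a raster-image component.
async function addFilledLayer(editor, filledCanvas) {
  // Turn the filled canvas into an <img> element.
  const img = new Image();
  img.src = filledCanvas.toDataURL();
  await img.decode();

  // Assumed API: wrap the image in a component, then dispatch the
  // add command so the change becomes part of the undo history.
  const component = ImageComponent.fromImage(img, Mat33.identity);
  editor.dispatch(editor.image.addElement(component));
}
```

That would keep the fill as its own component in the tree rather than overwriting the dry ink canvas directly.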
-
Thanks for your recent help on an unrelated question! I'm trying to build a flood fill tool (aka a "paint bucket"), such that you click a point in the drawing area and the enclosed region is identified and filled in with the pen color.
There is a JS algorithm called q-floodfill that looks highly performant and that I thought might work well for this. (Although it has not been updated in some time -- which maybe is fine for a low-level algorithm implementation?)
Anyway, that particular algorithm takes an ImageData object, accessible from an HTML Canvas element.
I am not sure what the best way would be to generate ImageData (assuming this is a reasonable method) from the js-draw editor. I happened upon this, however: […]
I'm not fully confident in my understanding of the DryInkRenderer vs WetInk, but I think that "WetInk" is used while the user is interacting with the drawing area (e.g. holding down a click and dragging to make a path), and as soon as that process concludes (i.e. they release the click) that "wet" ink is changed to "dry" ink. Is that right?
So I was thinking I would extend the base `PenTool.onPointerUp()` method to check the pixel at the point the cursor was on, generate an array of data from the canvas, convert that to `ImageData`, feed it to q-floodfill, and somehow replace the canvas with that new data. But I'm not sure if I'm barking up the wrong tree here. Any insight would be appreciated!
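For concreteness, here's roughly the fill step I have in mind, following the q-floodfill README (the js-draw side of it, i.e. which canvas to read from and how to write the result back, is the part I'm unsure about):

```js
import FloodFill from 'q-floodfill';

// Flood-fill `canvas` at pixel (x, y) with the given CSS color.
// Assumes `canvas` already holds the rendered drawing (e.g. produced
// with CanvasRenderer, as in the reply above).
function fillAt(canvas, x, y, color) {
  const ctx = canvas.getContext('2d');
  const imgData = ctx.getImageData(0, 0, canvas.width, canvas.height);

  const floodFill = new FloodFill(imgData);
  floodFill.fill(color, Math.round(x), Math.round(y), 0); // tolerance 0 = exact color match

  // Write the filled pixels back onto the canvas.
  ctx.putImageData(floodFill.imageData, 0, 0);
}
```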