Attempting to copy 100,000 rows only copies 50,000, does not throw an error. #4977
Comments
Reproducible in just the JSAPI. Run this in the browser console on the JSAPI page (http://localhost:10000/jsapi/):

```js
// REPLACE THE TOKEN WITH YOUR PSK
var token = 'iris';
var dh = (await import('./dh-core.js')).default
var c = new dh.CoreClient(window.location.protocol + "//" + window.location.host)
await c.login({type: 'io.deephaven.authentication.psk.PskAuthenticationHandler', token})
console.log('Logged in...')
var connection = await c.getAsIdeConnection()
var ide = await connection.startSession('python')
console.log('Session started...')
// Create a table with 100000 rows
var code = `
from deephaven import empty_table
t = empty_table(100000).update(["X=i", "Y=i*i"])
`;
var result = await ide.runCode(code);
var updates = [...result.changes.created, ...result.changes.updated]
var t = await connection.getObject(updates[0])
console.log('Got table t of size', t.size)
var subscription = t.setViewport(0, 100)
var rangeSet = dh.RangeSet.ofRange(0, t.size);
console.log('Fetching snapshot with rangeSet', rangeSet);
var snapshot = await subscription.snapshot(rangeSet, t.columns)
console.log('Snapshot is ', snapshot);
console.log('Snapshot added size is', snapshot.added.size, 'fullIndex', snapshot.fullIndex.size)
```
It seems to cap at 50,000, even if I build a table with more columns or only request 60,000 rows.
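For clarity (not part of the original repro), a quick check appended to the snippet above makes the truncation explicit, using only the values already in scope:

```js
// Hypothetical sanity check, not in the original repro: compare the number of
// rows requested against what the snapshot actually returned.
var requestedRows = t.size;              // 100,000 rows requested
var returnedRows = snapshot.added.size;  // observed: caps at 50,000
if (returnedRows < requestedRows) {
  console.error('Snapshot truncated: requested', requestedRows, 'rows, received', returnedRows);
}
```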
Should patch this in 0.30.5 for Enterprise/Vermilion as well.
This seems to be tied to the snapshot cell-count limits: the JS API is wrong here, and #188 will enable it to handle more than one separate snapshot before it has the initial amount of data. We should be able to detect this case from the JS API and emit an error instead of claiming success with partial data. Raising the min and max snapshot cell counts should make it possible for the server and client to get along here. Another solution would be for the web UI to ask for smaller snapshots, since very wide rows will eventually break even with the above limits raised; this is what the CSV export does, even with "smaller" exports. A sketch of that chunked approach is below.
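As a rough illustration of the "smaller snapshots" idea, the range could be split into fixed-size slices and fetched one at a time with the same calls used in the repro above. This is only a sketch: the 10,000-row chunk size is an arbitrary assumption rather than a documented server limit, and the chunks would still need to be stitched back together by the caller.

```js
// Sketch: fetch the table in fixed-size slices instead of one large snapshot.
// CHUNK_SIZE is an arbitrary value chosen for illustration only.
const CHUNK_SIZE = 10000;
const chunks = [];
for (let start = 0; start < t.size; start += CHUNK_SIZE) {
  // Treating RangeSet bounds as inclusive, matching the repro's usage.
  const end = Math.min(start + CHUNK_SIZE, t.size) - 1;
  const range = dh.RangeSet.ofRange(start, end);
  // Same snapshot call as above, just over a smaller range each time.
  chunks.push(await subscription.snapshot(range, t.columns));
}
console.log('Fetched', chunks.length, 'chunked snapshots for', t.size, 'rows');
```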
@niloc132 can we please get a PR to throw an error if the snapshot is too big? This is marked as a blocker for Enterprise.
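Not the actual fix, but a minimal sketch of the kind of guard being asked for: a wrapper that rejects when the server returns fewer rows than the caller requested, instead of silently handing back partial data. The name snapshotOrThrow and the explicit expectedRows parameter are hypothetical.

```js
// Hypothetical wrapper illustrating the requested behavior: fail loudly on a
// partial snapshot instead of claiming success with truncated data.
// expectedRows is passed explicitly so the sketch doesn't rely on any
// particular RangeSet accessor.
async function snapshotOrThrow(subscription, rangeSet, columns, expectedRows) {
  const snapshot = await subscription.snapshot(rangeSet, columns);
  const received = snapshot.added.size;
  if (received < expectedRows) {
    throw new Error(`Partial snapshot: expected ${expectedRows} rows but received ${received}`);
  }
  return snapshot;
}

// Usage against the repro above; today this would throw, since only 50,000 rows arrive:
// await snapshotOrThrow(subscription, rangeSet, t.columns, t.size);
```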
Description
When you try to copy 100,000 rows, it only copies 50,000.
Steps to reproduce
Expected results
3. 100,000 rows pasted
Actual results
3. Only 50,000 rows are pasted
Versions
Engine Version: 0.32.0-SNAPSHOT
Web UI Version: 0.57.1
Java Version: 11.0.21
Barrage Version: 0.6.0