
Infinite loop when called on a Sequelize object #46

Closed
MatthewHerbst opened this issue Sep 25, 2019 · 16 comments

@MatthewHerbst

MatthewHerbst commented Sep 25, 2019

In ladjs/superagent@2e5d6fd the Superagent library changed from using JSON.stringify to using fast-safe-stringify for its default object serializer when sending JSON data.

We use Sequelize for our ORM. Sequelize uses its own custom object structure, which supports a .toJSON call.

When Superagent was using JSON.stringify, that called the .toJSON method on our Sequelize objects, properly converting them to JSON. However, with the change to fast-safe-stringify, .toJSON is no longer called, even if it exists.

I'm not sure how to provide a copy of the object in question: its JSON representation is unlikely to be helpful, since all the Sequelize specifics are stripped from it.

Is there a reason this library does not call .toJSON if it exists?
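
(For context, a minimal sketch of the behavior being relied on, using a hypothetical FakeInstance as a stand-in for a Sequelize model instance; this is not real Sequelize code.)

// Hypothetical stand-in for a Sequelize model instance.
class FakeInstance {
  constructor () {
    this.dataValues = { id: 1, name: 'alice' }
    this._internals = { connectionManager: {} } // ORM bookkeeping we don't want serialized
  }
  toJSON () {
    return this.dataValues
  }
}

// JSON.stringify invokes toJSON and serializes only the data values:
console.log(JSON.stringify(new FakeInstance())) // {"id":1,"name":"alice"}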

@MatthewHerbst
Author

Note: the infinite loop we are experiencing does not result in a stack overflow or any other error. Rather, the algorithm just seems to be stuck, spinning forever (our server's CPU usage went from <1% to 100% almost immediately, though memory was seemingly not impacted).

@davidmarkclements
Owner

  const fss = require('fast-safe-stringify')

  const fixture = {
    x: {
      name: 'a',
      toJSON() {
        return {name: 'b'}
      }
    }
  }
  console.log(fss(fixture))

will output: {"x":{"name":"b"}}

so it looks like toJSON is called in the main case; perhaps you've hit an edge case?

Can you please provide a reproducible case?

@MatthewHerbst
Author

It's proving very difficult for me to replicate the error outside of our environment since we cannot easily port over the Sequelize instance object. I'll keep working on it and try to get you a working example soon. Thanks for the response.

@BridgeAR
Collaborator

BridgeAR commented Oct 2, 2019

@MatthewHerbst could you inspect the output with util.inspect()? It might be possible to see something there that would help debugging this.

It sounds like the toJSON return value is circular, and that does not work with this implementation. The circular value would be replaced with '[Circular]' before the object is passed to JSON.stringify and then set back to the original value afterwards, so toJSON would not be called on it. It should work properly with https://github.com/BridgeAR/safe-stable-stringify.
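
(One hypothetical way to construct the situation described above, i.e. a circular structure hiding behind toJSON, in case it helps anyone reproduce; this is not taken from Sequelize's internals.)

// Hypothetical example of a circular toJSON return value.
const data = { name: 'a' }
data.self = data // the value is circular

const wrapper = {
  toJSON () {
    return data // toJSON returns the circular structure
  }
}

// Plain JSON.stringify(wrapper) throws "Converting circular structure to JSON",
// so a safe stringifier has to detect and replace the cycle first.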

@markablov

I also ran into a similar error. It's not an infinite loop, just a very long computation (it could take minutes). We were using the regular version of fast-safe-stringify, not the stable version, so it had no code path that calls toJSON.

That long computation was caused by huge buffers that can sit inside a Sequelize instance (there may be several connections, each with read/write buffers).

For instance, in our case it amounted to millions of decirc calls.
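
(To illustrate why buffers are so expensive for a naive recursive walk: every byte of a Buffer shows up as an enumerable index key. A quick sketch, not the library's code.)

const big = Buffer.alloc(1024 * 1024) // 1 MiB buffer
console.log(Object.keys(big).length)  // 1048576, one enumerable key per byte

// A traversal that visits every enumerable key of every nested object will
// therefore make millions of calls when a connection pool holds a few such
// read/write buffers.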

@davidmarkclements
Owner

ok that's a very interesting case. decirc doesn't even need to traverse buffer objects (at least not the indices). @markablov are you able to provide a test for this? if so I'll prioritize this

@markablov

markablov commented Dec 18, 2019

@davidmarkclements sure.

Code to reproduce:

'use strict';

// Note: reproducing requires a local Postgres database "sequelize_test" with a
// "users" table containing a row with id = 1.

const Sequelize = require('sequelize');
const stringify = require('fast-safe-stringify');

const db = new Sequelize(
  'sequelize_test',
  'postgres',
  '1',
  {
    host: 'localhost',
    dialect: 'postgres',
    logging: false,
    operatorsAliases: false
  }
);

const users = db.define('users', {
  id: {
    type: Sequelize.INTEGER,
    autoIncrement: true,
    primaryKey: true
  }
}, { tableName: 'users', timestamps: false });

async function test() {
  const user = await users.findOne({ where: { id: 1 } });
  stringify(user);
}

test()
  .catch(err => console.log('ERROR', err))
  .then(() => console.log('DONE!'))
  .then(() => db.close());

I also added logging to the traversal:

// decirc from fast-safe-stringify, with a log line added at the top:
function decirc (val, k, stack, parent) {
  console.log(' '.repeat(stack.length) + '> ' + k);

Sample log is attached.
traversedProps.txt

buffer props are real Buffer objects:
user._modelOptions.sequelize.connectionManager.pool._availableObjects._list.head.data.obj.connection.writer.buffer instanceof Buffer === true

@stephenh

Fwiw I just got hit by this using pino, which uses fast-safe-stringify as a fallback whenever JSON.stringify fails (i.e. on circular data structures).

Normally everything is fine, but on sufficiently large/complex data structures (which JSON.stringify immediately detected as circular), fast-safe-stringify pegged the CPU and seemed locked up (a Chrome CPU profile shows decirc recursively calling itself basically forever).

I tried a few times to divine what "sufficiently large/complex" meant, but wasn't able to reproduce it outside of the production case (which does reproduce 100% of the time).

@stephenh

This would be a large breaking change, but I generally wonder if an "actually safe" approach would be to stop at any value that is not: a) a primitive, b) an object literal (per is-plain-object), or c) an object with a toJSON method.

Granted, the built-in JSON.stringify behavior does not stop at "this is an object with a custom prototype that does not implement toJSON", but perhaps that is what the "safe" in this project's fast-safe-stringify could denote, i.e. yes, you're not going to get 100% the same behavior as JSON.stringify, but that's the point: this behavior is safer.
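
(A rough sketch of that idea as a JSON.stringify replacer; this is my own illustration, not fast-safe-stringify's behavior, and note that JSON.stringify applies toJSON before the replacer sees a value.)

const isPlain = (val) => {
  const proto = Object.getPrototypeOf(val)
  return proto === null || proto === Object.prototype || Array.isArray(val)
}

function stopAtNonPlain (key, val) {
  if (val === null || typeof val !== 'object') return val // primitives pass through
  if (isPlain(val)) return val // plain objects/arrays pass through
  // anything else (connection pools, ORM internals, ...) is summarized, not traversed
  return `[${val.constructor ? val.constructor.name : 'object'}]`
}

class ConnectionPool { constructor () { this.buffers = [Buffer.alloc(1024)] } }
console.log(JSON.stringify({ id: 1, pool: new ConnectionPool() }, stopAtNonPlain))
// -> {"id":1,"pool":"[ConnectionPool]"}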

@davidmarkclements
Owner

Isn't that what we do already? The main meaning of "safe" is circular refs are handled

@stephenh

I mean "safe" can mean whatever we/you want it to mean :-).

I understand that currently "safe" means "handles circular refs", but I'm musing that "safe" could also mean "do not stringify non-plain-object/non-toJSON/non-primitive values" (i.e. things like class ConnectionPool or class OrmInternalImplementationDetail that aren't generally expected to be put on the wire).

Primarily/selfishly I'm proposing this because I think it would very likely avoid the degenerate "stringify thousands of things in something that really looks like an infinite loop" behavior that Mark and I somehow triggered.

And granted, it's a large behavior change, mostly useful for this pino/logging case of "the application programmer put something in a log statement that they didn't mean to super-recursively put everything 'on the wire' to stdout", so probably needs to be addressed upstream in pino.

@davidmarkclements
Owner

davidmarkclements commented May 24, 2020

(Quoting @markablov's reproduction code and traversal log from the comment above.)

I've tried to reproduce, but you have data in a table that you haven't provided, and since the object being serialized is based on the data returned from Sequelize, how can I reproduce without that data?

I'm eager to solve this; if anyone can provide a reproducible case I'll look into it. Preferably, the reproducible case should not involve databases or third-party deps, if possible.

@stephenh

@davidmarkclements yeah, understood on needing an easy repro. Fwiw I'm setting breakpoints locally and wondering if it's not really a bug, but rather the algorithm being something like O(N²).

Right now my arr array (the top-level global) has ~1400 elements in it, and the stack has ~203 elements in it.

I don't have a ton of stack frames on the stack, maybe a few hundred:

(screenshot: Chrome debugger showing the call stack)

And it's not recursing truly infinitely b/c that would eventually blow the stack. It seems to be chugging away, just very inefficiently. Will keep poking at it.

@stephenh

stephenh commented May 24, 2020

Okay, I need to move on for now, but here is where I got in debugging:

(screenshot: debugger paused at a conditional breakpoint, showing the decirc traversal stack)

I set a conditional breakpoint for when the stack hits length 100. A few highlights from the image:

  • The 6th stack entry, Array(407) is the 407 domain objects I'm saving to the database
  • The 7th stack entry, StageDefinition is one of our domain objects
  • The 8th stack entry is StageDefinition's collection pointing to N UnitDefinition domain objects
  • The 9th stack entry is the array of those 7 UnitDefinition objects
  • ...repeat this pattern for the UnitDefinition having N ActivityDefinitions, having N TaskDefinitions, having N TaskDefinitionPredecessor dependencies.

(Basically our domain model is a highly-interconnected graph of a "project with a bunch of tasks", with various levels/groupings of tasks (stage/unit/activity), and then dependencies (predecessors/successors) between the tasks. Obviously they become individual rows in the database, but while in memory our ORM basically graph-izes all the references (intelligently/lazily/no-N+1-ily) so we can easily do things like critical path calculation, which is essentially walking a graph.)

As I continue looking down the stack entries, everything "looks right": decirc is just continually finding "oh I haven't seen that yet" references... although they are really all part of that original Array(407), each one just hasn't been marked/added to the stack proper yet (i.e. as an explicit element that we know about but have not visited yet) because we're not 'done' with each one.

I.e. even though the 6th stack entry of Array(407) has all of the instances as implicitly "already seen but not processed", we process the 1st one (...and find 406 others), then do the 2nd one (...and find 405 others), then do the 3rd one (...and find 404 others), where each of these "...and find XXX others" steps is not actually linear, but itself recurses through "seen but not yet processed" values.

So, could we somehow pre-tell decirc "yes, we know about all 407 of these, we are getting to them, but don't recurse into them if you find them again"?

...i.e. here:

    if (Array.isArray(val)) {
      for (i = 0; i < val.length; i++) {
        // put all the vals in stack first, before recursing
        decirc(val[i], i, stack, val)
      }
    } else {

Put every element of val directly on the stack before calling decirc, because otherwise the current "pre-recursion before we put val[i] in the stack" behavior won't realize we already know about the other N values in val.

Basically, some differentiation of nodes between "has been seen but not yet visited/processed" (not currently in the code, AFAIU) and "has finished being visited/actually processed" (an entry in stack, AFAIU).

I am not an algorithms expert, but I vaguely recall seeing this distinction in graph traversal algorithms last time I was studying for interviews. :-P :-)

Perhaps the code is already doing this and I'm just missing it. Not sure.
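
(For illustration, a minimal sketch of that "mark siblings as seen before descending" idea; my own sketch, not fast-safe-stringify's code.)

function walk (val, seen = new Set()) {
  if (val === null || typeof val !== 'object') return
  const children = Array.isArray(val) ? val : Object.values(val)
  const toVisit = []
  for (const child of children) {
    if (child !== null && typeof child === 'object' && !seen.has(child)) {
      seen.add(child) // mark as seen before descending into any sibling
      toVisit.push(child)
    }
  }
  // only now recurse, so re-encounters of these nodes elsewhere in the graph are skipped
  for (const child of toVisit) walk(child, seen)
}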

Also fwiw I tried to re-create our graph with just a bunch of in-memory nodes, with something like 2 levels of 100 "parents" and 5 children per parent, and failed to trigger this behavior. I think the input really does need to be pathologically circular, which is hard to create by hand (the graph I'm using is an actual one from production, hence the inherent size/complexity). I can try to synthesize similarly pathological input here and there as I get time.

(Also, just to be clear, we never wanted fast-safe-stringify to actually output a sane value for this data structure/graph; it just happened to leak into a logger.error({ err: ... }, ...) without our really meaning to, and then tada, our CPU is pegged and the app is dead. We've since implemented toJSON on our entities and EntityManager to stop this from happening, but I followed up here as an extra/future precaution + FYI.)

...ah yeah, so this is really ugly, but just doing:

      // also treat a value as "seen" if it's an element of an array that's already on the stack
      if (stack[i] === val || (Array.isArray(stack[i]) && stack[i].includes(val))) {

Stopped the pathological behavior. I have no idea if the resulting behavior is correct (i.e. I didn't inspect the output and it could be doing 'wrong things'), just that it stops the "not technically infinite but just a lot of it" recursion.

Hopefully that is helpful...

@smartinio

@mcollina Is this solved by the fixes in 2.1.0 and 2.1.1?

@mcollina
Collaborator

mcollina commented Feb 1, 2022

I don't know and I don't have time to test. I think so.

@mcollina mcollina closed this as completed Feb 1, 2022