This repository has been archived by the owner on Oct 18, 2021. It is now read-only.

Can PySpark use the Nebula Spark Connector? #122

Closed
wuyanxin opened this issue Aug 18, 2021 · 3 comments
Labels
enhancement New feature or request

Comments


wuyanxin commented Aug 18, 2021

Is there a way to import data into Nebula like the one Elasticsearch provides?

    from collections import OrderedDict

    options = OrderedDict()
    options["es.nodes"] = "127.0.0.1:9200"  # comma-separated host list, as a string, not a Python list
    options["es.index.auto.create"] = "true"
    options["es.resource"] = "nebula/docs"

    df.write.format("org.elasticsearch.spark.sql") \
        .options(**options) \
        .save(mode="append")

wey-gu commented Aug 20, 2021

Thank you @wuyanxin for the question.
@Nicole00 could you help with this?
Could we do something like this?

from py4j.java_gateway import java_import

# Import the JVM class into the gateway's view so it can be referenced by short name
java_import(sc._gateway.jvm, "org.foo.module.Foo")

foo = sc._gateway.jvm.Foo()
foo.fooMethod()
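
Alternatively, since the connector is a regular Spark DataSource implemented on the JVM, it might be reachable from PySpark through the DataFrame API once the connector jar is on the classpath (e.g. via --jars), the same way the Elasticsearch example above works, with no py4j glue. A minimal sketch, assuming the DataSource class name com.vesoft.nebula.connector.NebulaDataSource and option keys mirroring the connector's Scala config fields (assumptions, not a documented PySpark interface):

    # Sketch only: assumes PySpark was started with
    #   pyspark --jars nebula-spark-connector.jar
    # The format class name and all option keys below are assumptions
    # taken from the connector's Scala side, not a documented PySpark API.
    df.write.format("com.vesoft.nebula.connector.NebulaDataSource") \
        .option("type", "vertex") \
        .option("spaceName", "test") \
        .option("label", "person") \
        .option("vertexField", "id") \
        .option("metaAddress", "127.0.0.1:9559") \
        .option("graphAddress", "127.0.0.1:9669") \
        .option("user", "root") \
        .option("passwd", "nebula") \
        .option("writeMode", "insert") \
        .mode("overwrite") \
        .save()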

wey-gu added the enhancement (New feature or request) label Aug 20, 2021

wey-gu commented Aug 20, 2021

Checked with @Nicole00: we don't provide a PySpark interface for now.
I labeled it as an enhancement.
Also, not sure whether the py4j workaround above could help a bit before official support lands.
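
If the DataFrame route sketched above works for writes, reading should follow the same pattern. Again, the option keys here (returnCols, partitionNumber, etc.) are assumptions modeled on the connector's Scala config, not a confirmed interface:

    # Sketch only: reading vertices back through the same (assumed) DataSource.
    df = spark.read.format("com.vesoft.nebula.connector.NebulaDataSource") \
        .option("type", "vertex") \
        .option("spaceName", "test") \
        .option("label", "person") \
        .option("returnCols", "name,age") \
        .option("metaAddress", "127.0.0.1:9559") \
        .option("partitionNumber", "1") \
        .load()
    df.show()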

Nicole00 commented

We do not support it for now.
