
feat: updated prompt with author and allComments #123

Merged

Conversation

sshivaditya2019

Resolves #97

  • Adds an updated prompt which takes in allComments.
  • Updates the comments-to-evaluate object to include the author of each comment (see the sketch below).
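A minimal sketch of what the comments-to-evaluate shape might look like with authors included; the interface, field, and function names here are illustrative assumptions, not the plugin's actual types:

```typescript
// Illustrative sketch only: names are assumptions, not the plugin's actual implementation.
interface CommentToEvaluate {
  id: number;
  comment: string;
  author: string;
}

// Map raw GitHub comments into the shape passed to the evaluation prompt.
function toEvaluationInput(
  allComments: { id: number; body: string; user: { login: string } }[]
): CommentToEvaluate[] {
  return allComments.map((c) => ({
    id: c.id,
    comment: c.body,
    author: c.user.login,
  }));
}
```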

@sshivaditya2019 sshivaditya2019 marked this pull request as ready for review September 19, 2024 16:05
@sshivaditya2019
Author

sshivaditya2019 commented Sep 19, 2024

[image: screenshot of updated relevance scores]

Updated relevance scores, with scoring for images: output.html

Member

@0x4007 0x4007 left a comment


Consider using o1-mini? I have access to it now from our org key. I am not sure if it is better for this kind of task. I am under the impression that it is more impressive when passing in minimal information/context.

Given that we are passing in all the context, maybe 4o is superior.

Also, you need to program it to respond with JSON (not via the prompt, but by passing the config to the OpenAI object).

@sshivaditya2019
Author

> Consider using o1-mini? I have access to it now from our org key. I am not sure if it is better for this kind of task. I am under the impression that it is more impressive when passing in minimal information/context.
>
> Given that we are passing in all the context, maybe 4o is superior.
>
> Also, you need to program it to respond with JSON (not via the prompt, but by passing the config to the OpenAI object).

Could you please share the key for o1 through some other channel, with a billing limit? Just letting you know, it would be a bit expensive with the o1 models.

@0x4007
Member

0x4007 commented Sep 26, 2024

I don't think it has a limit. I just quickly made it from my phone and DM'd you the key on Telegram.

@sshivaditya2019
Author

> Also, you need to program it to respond with JSON (not via the prompt, but by passing the config to the OpenAI object).

response_format: { type: "json_object" }

Do you mean this one? The prompt uses that already. I think o1 does not support response_format for output.
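For reference, a minimal sketch (not this plugin's actual code) of forcing JSON output through the request config rather than the prompt, using the openai Node SDK; the model choice, messages, and function name are illustrative assumptions:

```typescript
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Request a JSON object via the API config (response_format), not the prompt text.
async function scoreComments(commentsToEvaluate: unknown): Promise<Record<string, number>> {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    response_format: { type: "json_object" },
    messages: [
      { role: "system", content: "Return a JSON object mapping each comment id to a relevance score." },
      { role: "user", content: JSON.stringify(commentsToEvaluate) },
    ],
  });
  return JSON.parse(completion.choices[0].message.content ?? "{}");
}
```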

@0x4007
Member

0x4007 commented Sep 26, 2024

Okay, maybe 4o is best.

@sshivaditya2019
Author

Comments Evaluated:

[
  {
    "id": 1948916343,
    "comment": "pavlovcik i think we need to update a bit the readme\r\n![image_2024-02-16_131036879](https://github.com/ubiquibot/comment-incentives/assets/41552663/41516d66-4666-47d7-9efe-517fb26293dd)\r\ndm what to whom?",
    "author": "molecula451"
  },
  {
    "id": 1948989989,
    "comment": "let us know when done",
    "author": "molecula451"
  },
  {
    "id": 1949195772,
    "comment": "https://github.com/ubiquibot/comment-incentives/actions/runs/7935268560 invalid input sounds unexpected @gitcoindev ??",
    "author": "molecula451"
  },
  {
    "id": 1949564869,
    "comment": "@pavlovcik permitted with hard debug (tho no funds in the private key)",
    "author": "molecula451"
  },
  {
    "id": 1949635137,
    "comment": "pavlovcik i re-generated the X25519 to trigger the permit, what you don't understand? using a private key i own, but also did many commits to reach the root cause",
    "author": "molecula451"
  },
  {
    "id": 1949639196,
    "comment": "sure thing",
    "author": "molecula451"
  }
]

GPT-4o Output:

[
  {
    "id": 1948916343,
    "connection_score": 0.6
  },
  {
    "id": 1948989989,
    "connection_score": 0.5
  },
  {
    "id": 1949195772,
    "connection_score": 0.7
  },
  {
    "id": 1949564869,
    "connection_score": 0.8
  },
  {
    "id": 1949635137,
    "connection_score": 0.9
  },
  {
    "id": 1949639196,
    "connection_score": 0.3
  }
]

o1-mini Output:

{
  "1948930217": 0.8,
  "1949201722": 0.7,
  "1949203681": 0.9,
  "1949633751": 0.0,
  "1949639054": 0.3,
  "1949642845": 0.6
}

I think GPT-4o appears to be more on point than o1-mini.

@sshivaditya2019
Author

sshivaditya2019 commented Sep 27, 2024

I think this is good to go. Let me know if there are any other changes needed apart from the merge conflicts. For QA: output.html

@0x4007
Member

0x4007 commented Sep 27, 2024

Well, you also spelled "related" wrong, but yeah, you can fix the merge conflict and your spelling. It needs to pass CI before merging.

@sshivaditya2019
Author

@0x4007 Could you please approve the workflow runs?

@0x4007
Member

0x4007 commented Sep 29, 2024

It's not stable. Needs exponential backoff
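For context, a minimal sketch of the kind of exponential backoff wrapper this refers to; the helper name and defaults are illustrative assumptions, not the plugin's actual retry logic:

```typescript
// Retry an async call with exponentially increasing delays: 1s, 2s, 4s, ...
// Illustrative sketch; parameter names and defaults are assumptions.
async function withBackoff<T>(fn: () => Promise<T>, maxRetries = 5, baseDelayMs = 1000): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      const delayMs = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}
```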

@sshivaditya2019
Author

sshivaditya2019 commented Sep 30, 2024

> It's not stable. Needs exponential backoff

CI should be passing now. Jest passes for me locally (Workflow Run). @0x4007 Could you run the workflows again?

@0x4007 0x4007 merged commit 08a6949 into ubiquity-os-marketplace:development Sep 30, 2024
3 checks passed