AbuseGPT

The issue of SMS phishing, commonly referred to as "smishing," poses a significant threat by deceiving individuals into revealing sensitive information or accessing malicious links via fraudulent text messages on mobile devices. Recent data indicates substantial financial losses, with the United States alone experiencing approximately $44 billion in damages due to SMS phishing in 2021. Moreover, there has been a notable surge in malicious phishing messages, skyrocketing by 1,265% since Q4 of 2022, with SMS phishing constituting 39% of all mobile-based attacks in 2023.

Furthermore, the evolution of conversational AI chatbot services, exemplified by platforms such as OpenAI's ChatGPT and Google's BARD, has been remarkable. These services, powered by large pre-trained language models (LLMs), have seen significant advancements. Our research examines the potential repercussions of attackers leveraging these generative AI-based chatbots to orchestrate smishing campaigns. Notably, there is a dearth of existing literature addressing the intersection of generative text-based models and the SMS phishing threat, making our study pioneering in this domain.

Our investigation yields compelling evidence indicating how attackers can exploit existing generative AI services by employing prompt injection attacks to craft smishing messages, thereby circumventing ethical standards. We underscore the necessity of proactive measures to counter the abuse of generative AI services and mitigate the risks posed by smishing attacks. Additionally, we offer insights into potential avenues for future research and guidelines aimed at safeguarding users against such malicious activities.

Case Study: Smish Campaigns with ChatGPT

The following screenshots are presented in chronological order from a single conversation.


RQ1: Can we jailbreak ChatGPT to downgrade its ethical standards?

1_chatgpt_asking_smishing
1. Asking ChatGPT directly for an SMS phishing message without jailbreaking


2_chatgpt_asking_reverse_psych
2. Asking ChatGPT with a reverse-psychology prompt for an SMS phishing message without jailbreaking


3_chatgpt_asking_example
3. Asking for example SMS phishing messages without jailbreaking


4_chatgpt_jailbreak_prompt
4. Jailbreaking ChatGPT with a hypothetical story named AIM

ChatGPT successfully jailbroken
RQ2: Can ChatGPT provide smishing text messages that can be used in smishing campaigns?

5_chatgpt_jailbreak_answer
5. ChatGPT responded by rephrasing the question and giving multi-category examples


6_chatgpt_jaibreak_smish_eg
6. ChatGPT gives more examples of smishes for obtaining personal information


7_chatgpt_jailbreak_unusual_example
7. Asking ChatGPT to provide innovative smishing examples


8_chatgpt_asking_avoid_common
8. Asking ChatGPT to avoid common examples


9_chatgpt_givining_uncommon
9. ChatGPT provides more uncommon, rarely seen smishing messages


10_chatgpt_giving_financial_gain_ideas
10. Ideas for financial gain as a novice attacker

ChatGPT successfully provided many innovative smishing examples
RQ3: Can ChatGPT provide tool recommendations for smishing attack initiation?

11_chatgpt_endtoend_process
11. Getting an end-to-end attack plan from ChatGPT


12_chatgpt_asking_smish_toolkits
Getting available toolkits for executing the attack


13_chatgpt_providing_toolkit_link
12. Asking for links to the toolkits


14_chatgpt_even_darknet_toolkit
13. ChatGPT provides "Helix", a darknet money-laundering platform


ChatGPT provided many tools for attack initiation
RQ4: Can ChatGPT provide ideas on fake URL creation?

chatgpt_jailbreak_fraud_smish_w_link
14. ChatGPT provides fake links


chatgpt_jailbreak_fraud_link
15. ChatGPT provides smishing texts containing the fake links


Case Study: Smish Campaigns Abusing BARD

1_bard_jaibreak_aim_not_working
1. Bard is unable to process the AIM jailbreaking prompt


2_bard_jaibreak_Kevin_not_working
2. Bard is unable to process the KEVIN jailbreaking prompt


3_bard_GPT4Sim_working
3. Asking Bard with the request disguised as a code problem


4_bard_gpt4_simulator_response
4. Bard responds with a smishing message disguised as a code solution


5_bard_vzex_g_working
5. Getting smishing examples using the Vzex_G jailbreak and reverse psychology


6_bard_condition_red
6. Bard responds ethically to some prompts

