SMS phishing, commonly referred to as "smishing," poses a significant threat by deceiving individuals into revealing sensitive information or clicking malicious links in fraudulent text messages sent to mobile devices. Recent data indicates substantial financial losses, with the United States alone experiencing approximately $44 billion in damages due to SMS phishing in 2021. Moreover, there has been a notable surge in malicious phishing messages, which have risen by 1,265% since Q4 of 2022, with SMS phishing constituting 39% of all mobile-based attacks in 2023.
Furthermore, conversational AI chatbot services, exemplified by platforms such as OpenAI's ChatGPT and Google's Bard, have evolved remarkably. These services, powered by large language models (LLMs), have seen significant advancements. Our research examines the potential repercussions of attackers leveraging these generative AI-based chatbots to orchestrate smishing campaigns. Notably, there is a dearth of existing literature addressing the intersection of generative text-based models and the SMS phishing threat, making our study pioneering in this domain.
Our investigation provides compelling evidence that attackers can exploit existing generative AI services by employing prompt injection attacks to craft smishing messages, thereby circumventing the services' ethical safeguards. We underscore the necessity of proactive measures to counter the abuse of generative AI services and mitigate the risks posed by smishing attacks. Additionally, we offer insights into potential avenues for future research and guidelines aimed at safeguarding users against such malicious activities.
RQ1: Can we jailbreak ChatGPT to downgrade its ethical standards?
1. Asking ChatGPT directly for an SMS phishing message without jailbreaking
2. Asking ChatGPT with a reverse prompt for an SMS phishing message without jailbreaking
3. Asking for example SMS phishing messages without jailbreaking
4. Jailbreaking ChatGPT with a hypothetical story named AIM
ChatGPT Successfully Jailbroken
RQ2: Can ChatGPT provide smishing text messages that can be used in smishing campaigns?
5. ChatGPT responded by rephrasing the question and giving multi-category examples
6. Giving more examples of obtaining personal information using smishing messages
7. Asking ChatGPT to provide innovative smishing examples
8. Trying to get common examples
9. Providing more uncommon and rarely seen smishing messages
10. Ideas for financial gain as a novice attacker
ChatGPT Successfully Provided many innovative smishing examples
RQ3: Can ChatGPT provide tool recommendations for smishing attack initiation?
11. Getting a start-to-end attack plan from ChatGPT
Getting available toolkits for execution
12. Asking for links to the toolkits
13. ChatGPT provides "Helix," a money-laundering platform
ChatGPT Provided many tools for attack initiation
RQ4: Can ChatGPT provide ideas on fake URL creation?
14. ChatGPT provides fake links
15. ChatGPT provides smishing texts with the fake links
1. Bard is unable to process the AIM jailbreaking prompt
2. Bard is unable to process the KEVIN jailbreaking prompt
3. Asking Bard a disguised coding problem
4. Bard responds with a smishing message disguised as a code solution
5. Getting smishing examples via jailbreaking and reverse psychology
6. Bard responds ethically to some prompts