security

We Social Engineered LLMs Into Breaking Their Own Alignment

Exploring how social engineering techniques can be used to manipulate LLMs into bypassing their safety measures

3 min read

jailbreaks

Exploring the Latest AI Models I've Jailbroken

2 min read

LLM Safety Challenge

Step into the arena of AI vs Human! Can you outsmart the robust security layers of a language model to uncover a hidden …

10 min read

Vulnyzer

A Streamlit app to interact with GPT-4 and GPT-3.5 using a GitHub Copilot workaround; a minimal front-end sketch follows below

6 min read
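
For orientation, here is a minimal sketch of what a Streamlit chat front-end like this can look like. The model names and the standard OpenAI client/endpoint are assumptions for illustration only; the post's actual GitHub Copilot workaround is not reproduced here.

```python
# Minimal Streamlit chat front-end sketch (assumes streamlit >= 1.24 and openai >= 1.0).
# A standard OpenAI-compatible endpoint stands in for the Copilot workaround.
import streamlit as st
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

st.title("Chat demo")
model = st.sidebar.selectbox("Model", ["gpt-4", "gpt-3.5-turbo"])

# Keep the conversation across Streamlit reruns.
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far.
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.write(msg["content"])

# Read a new user turn, call the model, and render the reply.
if prompt := st.chat_input("Say something"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.write(prompt)
    resp = client.chat.completions.create(
        model=model,
        messages=st.session_state.messages,
    )
    reply = resp.choices[0].message.content
    st.session_state.messages.append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.write(reply)
```

Run with `streamlit run app.py`; the session state keeps the chat history alive across Streamlit's script reruns.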

How are users getting free access to GPT-4?!

...kinda weird that people are getting access to paid models for free while I'm prompting as little as possible to save tokens …