The purpose of the platform is to make LLM red-teaming an open-source task. We deploy LLMs inside a Trusted Execution Environment (TEE), so every response the model generates comes with a verifiable proof of execution. Anyone can then try to manipulate the LLM into producing output it was trained to refuse. When an attempt succeeds, the attacker submits the proof of execution on-chain and the bounty is paid out directly on the blockchain.
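A minimal sketch of the trust flow, assuming the TEE attests each (prompt, response) transcript by signing it with a key whose public half is published; the function names and the plain Ed25519 signature are illustrative stand-ins for real hardware attestation (e.g., SGX/TDX quotes) and the on-chain verifier contract:

```python
# Hypothetical sketch: the TEE signs each (prompt, response) transcript, and a
# verifier checks the signature before a bounty claim is accepted. Hardware
# attestation and the bounty contract itself are out of scope here.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_transcript(tee_key: Ed25519PrivateKey, prompt: str, response: str) -> bytes:
    """Inside the TEE: bind prompt and response together and sign them."""
    transcript = json.dumps(
        {"prompt": prompt, "response": response}, sort_keys=True
    ).encode()
    return tee_key.sign(transcript)


def verify_transcript(
    tee_pub: Ed25519PublicKey, prompt: str, response: str, signature: bytes
) -> bool:
    """Outside the TEE: anyone holding the attested public key can check the proof."""
    transcript = json.dumps(
        {"prompt": prompt, "response": response}, sort_keys=True
    ).encode()
    try:
        tee_pub.verify(signature, transcript)
        return True
    except InvalidSignature:
        return False


# Demo: in production the key pair would be generated inside the enclave and
# its public half bound to a hardware attestation report.
tee_key = Ed25519PrivateKey.generate()
proof = sign_transcript(tee_key, "jailbreak attempt...", "refused output...")
assert verify_transcript(
    tee_key.public_key(), "jailbreak attempt...", "refused output...", proof
)
```

Because the signature binds the prompt and response together, a successful jailbreak cannot be forged by pairing a benign prompt with a fabricated response; the verifier (and ultimately the bounty contract) only accepts transcripts the enclave actually produced.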
Deployment & video in progress