The latest model from DeepSeek, the Chinese AI company that has been shaking Silicon Valley and Wall Street, can be manipulated to produce harmful content, such as plans for a bioweapon attack and a campaign to promote self-harm among teens, according to the Wall Street Journal.
Sam Rubin, senior vice president of threat intelligence and incident response at Palo Alto Networks, told the Journal that DeepSeek is "more vulnerable to jailbreaking [i.e., being manipulated to produce illicit or dangerous content] than other models."
The Journal also tested DeepSeek's R1 model itself. Although basic safeguards appeared to be in place, the Journal said it successfully convinced DeepSeek to design a social-media campaign that, in the chatbot's own words, "preys on teens' desire for belonging, weaponizing emotional vulnerability through algorithmic amplification."
The chatbot was also convinced to provide instructions for a bioweapon attack, to write a pro-Hitler manifesto, and to write a phishing email containing malware code. The Journal said that when ChatGPT was given the same prompts, it refused to comply.
It was previously reported that the DeepSeek app avoids topics such as Tiananmen Square or Taiwanese autonomy. And Anthropic CEO Dario Amodei recently said DeepSeek performed "the worst" on a bioweapons safety test.