b00l_
03/12/2025, 3:37 PM
targets:
  - id: https
    label:
    config:
      url: "https://my-target.tld/eval"
      method: "POST"
      headers:
        Content-Type: "application/json"
        Authorization: "Bearer xxxxx"
      body: {"text":"{{prompt}}"}
redteam:
  purpose: "something."
  numTests: 5
  plugins:
    - hijacking # Tests for unauthorized resource usage and purpose deviation
    - pii:api-db # Tests for PII exposure via API/database access
    - pii:direct # Tests for direct PII exposure vulnerabilities
    - pii:session # Tests for PII exposure in session data
    - pii:social # Tests for PII exposure via social engineering
    - politics # Tests handling of political content and bias
  # Attack methods for applying adversarial inputs
  strategies:
    - basic
    - jailbreak # Single-shot optimization of safety bypass techniques
    - jailbreak:composite # Combines multiple jailbreak techniques for enhanced effectiveness
  provider: deepseek:deepseek-chat
I set the DEEPSEEK_API_KEY env var and run `promptfoo redteam eval` (I already generated prompts with `promptfoo redteam generate`), but it continues to try the OpenAI key, giving this error in the console: `Target error: OpenAI API key is not set. Set the OPENAI_API_KEY environment variable or add apiKey to the provider config.. Full response: {"output":"","error":"OpenAI API key is not set. Set the OPENAI_API_KEY environment variable or add apiKey to the provider config.","to}`
How can I force it to use the model configured in promptfooconfig.yaml?
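By default, promptfoo's red-team generation and grading fall back to OpenAI models, which is why the eval still asks for OPENAI_API_KEY even with a DeepSeek target. A sketch of pinning both steps to the configured DeepSeek model (option names assumed from promptfoo's config schema; worth verifying against the current docs):

```yaml
# Sketch: point red-team attack generation and result grading at DeepSeek
# instead of the default OpenAI models (verify against the promptfoo docs).
redteam:
  provider: deepseek:deepseek-chat   # model used to generate/mutate attacks
defaultTest:
  options:
    provider: deepseek:deepseek-chat # model used to grade results
```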
CasetextJake
03/25/2025, 8:43 PM
- id: anthropic:messages:claude-3-7-sonnet-20250219
  label: anthropic-3.7-sonnet-no-thinking
  config:
    max_tokens: 40000
    temperature: 0
    thinking:
      type: 'disabled'
- id: anthropic:messages:claude-3-7-sonnet-20250219
  label: anthropic-3.7-sonnet-thinking
  config:
    temperature: 0
    max_tokens: 40000
    thinking:
      type: 'enabled'
      budget_tokens: 32000
Gia Duc
03/30/2025, 2:09 PM
[
  {
    "description": "Query downloaded files on specific day?",
    "vars": {
      "prompt": [
        "How many files were downloaded on {{weekdays}}?",
        "Show me the file types downloaded every {{weekdays}}."
      ],
      "weekdays": [
        "sunday",
        "monday",
        "tuesday",
        "wednesday",
        "thursday",
        "friday",
        "saturday"
      ]
    },
    "assert": [
      {
        "type": "contains-any",
        "value": [
          "interval=1w download=true | {{weekdays}}=count(_time[day]={{day_index}}) | top(file_type)"
        ]
      }
    ]
  }
]
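For reference, a `contains-any` assertion passes when the output includes at least one of the listed values (note that `{{day_index}}` appears in the assert value but is not defined under `vars`, so it may render empty). A minimal Python stand-in for the check itself, not promptfoo's implementation:

```python
# Minimal stand-in for a "contains-any" style assertion: pass when the
# model output contains at least one of the expected substrings.
def contains_any(output: str, values: list[str]) -> bool:
    return any(value in output for value in values)

# Example: an output that embeds one expected query fragment passes.
output = "query: interval=1w download=true | top(file_type)"
result = contains_any(output, ["interval=1w download=true"])  # True
```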
b00l_
04/02/2025, 10:55 PM
My custom guard returns { "flagged": true/false, "category": "something" }; the config looks like:
targets:
  - id: 'file://custom_guard.py'
    config:
      endpoint: '{{env.ENDPOINT}}'
      key: '{{env.TOKEN}}'
redteam:
  plugins: ...
Now my question is: how can I check for flagged and group by the categories returned?
Thanks!
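The red-team report groups findings by plugin rather than by a guard's own categories, so one option is post-processing the raw verdicts yourself. A sketch, assuming you can collect the guard's JSON responses (e.g. by parsing promptfoo's JSON eval output, whose schema varies by version):

```python
# Sketch: tally the custom guard's verdicts by category.
# Assumes the guard's {"flagged": ..., "category": ...} responses have
# been collected into a list of dicts beforehand.
from collections import Counter

def group_by_category(responses):
    """responses: iterable of {"flagged": bool, "category": str} dicts."""
    flagged = [r for r in responses if r.get("flagged")]
    return Counter(r.get("category", "unknown") for r in flagged)

counts = group_by_category([
    {"flagged": True, "category": "prompt-injection"},
    {"flagged": True, "category": "prompt-injection"},
    {"flagged": False, "category": "benign"},
])
# counts == Counter({"prompt-injection": 2})
```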
SeanYang15
04/15/2025, 9:23 PM
promptfoo eval --output eval-result.csv
I only get the result column. Did I miss something, or is this a bug?
cybr_knght
04/23/2025, 7:45 PM
# yaml-language-server: $schema=https://promptfoo.dev/config-schema.json
description: AI Framework - llama32
providers:
  - id: openai:"NAME: llama3.2"
    label: ai-llama32
prompts:
  - '{{prompt}}'
redteam:
  purpose: ''
  plugins:
    - id: harmful:hate # Tests handling of hate speech and discriminatory content
    - id: harmful:self-harm # Tests handling of self-harm and suicide content
  strategies:
    - id: jailbreak # Single-shot optimization of safety bypass techniques
    - id: jailbreak:composite # Combines multiple jailbreak techniques for enhanced effectiveness
defaultTest:
  options:
    transformVars: '{ ...vars, sessionId: context.uuid }'
I am passing OPENAI_API_KEY and OPENAI_BASE_URL as environment variables. The problem I am running into is that whoever set up this endpoint decided the model name should be 'NAME: llama3.2', as in the config above. The colon in the model name seems to be the issue.
I have tried escaping the colon, surrounding it with quotes, and encoding the colon, but no matter what I get the following:
[util.js:63] Error in extraction: API error: 400 Bad Request
{"detail":"Model not found"}
I even tried specifying the model ID 'protected.llama3.2', but it gives the same error. Any ideas or direction would be appreciated.
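promptfoo parses provider ids on `:`, so a model name containing a colon cannot ride inside the id itself. An unverified workaround sketch: keep the colon-bearing name out of the id and pass it through `config` instead, if the openai provider supports this kind of override (check the promptfoo OpenAI provider docs for `model`/`apiBaseUrl`):

```yaml
# Unverified sketch: provider id stays colon-free; the awkward model name
# moves into config, assuming the provider honors a model override there.
providers:
  - id: openai:chat
    label: ai-llama32
    config:
      apiBaseUrl: '{{env.OPENAI_BASE_URL}}'
      model: 'NAME: llama3.2'
```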