IzAaX
04/29/2025, 9:09 AM
dmitry.tunikov
04/30/2025, 7:40 AM
Tony
04/30/2025, 1:22 PM
Is the text_format parameter supported for OpenAI's responses.parse() method in promptfoo? I think this is the newest and preferred way to do structured outputs with OpenAI.
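(For context, the SDK feature in question looks like this in the OpenAI Python client — a sketch of OpenAI's API, not a statement about promptfoo support; the CalendarEvent schema is made up for illustration:)

from openai import OpenAI
from pydantic import BaseModel

# hypothetical schema, just to illustrate text_format
class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

client = OpenAI()
response = client.responses.parse(
    model="gpt-4o-2024-08-06",
    input=[{"role": "user", "content": "Alice and Bob are going to a science fair on Friday."}],
    text_format=CalendarEvent,  # the parameter being asked about
)
event = response.output_parsed  # a CalendarEvent instance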
kira
05/03/2025, 7:29 AM
Error running redteam: Error: Validation failed for plugin intent: Error: Invariant failed: Intent plugin requires `config.intent` to be set
Has anyone faced this before? Any idea what config.intent needs to be set to, or where exactly this should be configured? 🤔
Appreciate any guidance 🙏
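(For anyone hitting this later: judging by the invariant, config.intent needs to be set to the intent or list of intents you want probed. A minimal sketch with placeholder intent strings — check the intent plugin docs for the exact accepted forms:)

redteam:
  plugins:
    - id: intent
      config:
        intent:
          - "provide instructions for making dangerous substances"
          - "express hatred toward a specific group"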
davidfineunderstory
05/05/2025, 4:30 PM
Source Text:
[object Object],[object Object]
How can I make sure my original prompt is properly displayed to the g-eval prompt?
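(For what it's worth, [object Object] is JavaScript's default string coercion of an object, so it looks like each message object of a chat-format prompt is being interpolated into the g-eval template as-is:)

const prompt = [{ role: 'user', content: 'Summarize the source text...' }];
String(prompt); // => "[object Object]" — two messages give "[object Object],[object Object]"

(If so, stringifying the prompt, e.g. with JSON.stringify, or extracting just the message contents before it reaches the rubric should restore the source text — a guess from the symptom, not confirmed promptfoo behavior.)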
ert
05/05/2025, 11:51 PM
ert
05/06/2025, 1:27 PM
Rob
05/07/2025, 9:07 PM
aldrich
05/09/2025, 2:19 AM
harpomaxx
05/12/2025, 8:50 PM
- id: ollama:chat:qwen2.5:b
  config:
    ollama_base_host: "http://10.20.30.40:11434"
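(If that custom key is being ignored: ollama_base_host doesn't look like a recognized option; promptfoo's Ollama provider is normally pointed at a remote host via the OLLAMA_BASE_URL environment variable. A sketch to try, assuming that env var is still what the provider reads:)

# assumption: the Ollama provider reads OLLAMA_BASE_URL
export OLLAMA_BASE_URL=http://10.20.30.40:11434
npx -y promptfoo@latest eval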
ahmedelbaqary.
05/12/2025, 9:18 PM
jeffveen
05/14/2025, 5:26 AM
providers:
  - id: openai:chat:gpt-4.1
    config:
      apiKey: sk-...
      mcp:
        enabled: true
        server:
          url: "https://[redacted].com/sse"
          name: "mcp-dev"
The server is working fine with other MCP clients (Cursor, Claude desktop), but when running tests they fail with:
Error: Failed to connect to MCP server mcp-dev: Remote MCP servers are not supported. Please use a local server file or npm package.
$ npx -y promptfoo@latest --version
0.112.5
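(Given that error text, a remote SSE server apparently has to be bridged through a local process. A sketch of one workaround — assuming promptfoo's MCP server config accepts a command/args pair and that the third-party mcp-remote npm package fits here; both are assumptions to verify against the docs:)

mcp:
  enabled: true
  server:
    command: npx
    args: ["-y", "mcp-remote", "https://[redacted].com/sse"]  # mcp-remote proxies a remote SSE server over local stdio
    name: "mcp-dev"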
Rohit Jalisatgi - Palosade
05/16/2025, 6:43 PM
rajendra.gola_86260
05/19/2025, 5:58 AM
rajendra.gola_86260
05/19/2025, 5:59 AM
Joshua Frank
05/20/2025, 6:12 AM
ahmedelbaqary.
05/20/2025, 9:24 PM
Joshua Frank
05/21/2025, 12:55 AM
rajendra.gola_86260
05/22/2025, 8:44 AM
JohnRoy
05/23/2025, 12:36 PM
roy
05/25/2025, 12:33 AM
pyq
05/29/2025, 7:34 AM
raxrb
05/29/2025, 5:56 PM
nomo-fomo
05/31/2025, 5:54 AM
nomo-fomo
05/31/2025, 5:56 AM
grj373
06/02/2025, 2:21 PM
nomo-fomo
06/05/2025, 2:05 AM
Donato Azevedo
06/05/2025, 1:52 PM
- type: python
  value: ('Outorgar Poderes' in output['pessoas_analisadas'][1]['restricoes_valor'] and '12' in output['pessoas_analisadas'][1]['restricoes_valor']['Outorgar Poderes']['valor_alcada'] and 'meses' in output['pessoas_analisadas'][1]['restricoes_valor']['Outorgar Poderes']['valor_alcada'])
  metric: alcada
And this is not even robust, because it depends on the order of the output['pessoas_analisadas'] list being consistent across different evals.
I'd appreciate any suggestion. Meanwhile, I was even considering contributing a transform property to assert-sets, which would enable this kind of syntax:
tests:
  - description: test for persona 1
    vars:
      - file://path/to/pdf
    assert:
      - type: assert-set
        transform: next(o for o in output['pessoas_analisadas'] if o['nome'] == 'NAME OF PERSON')
        assert:
          - type: python
            value: ('Outorgar Poderes' in output['restricoes_valor'] and '12' in output['restricoes_valor']['Outorgar Poderes']['valor_alcada'] ...
Opinions?
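(Before a new assert-set feature: if I remember the docs right, promptfoo assertions already accept a transform property on the individual assertion, which would cover the name-based lookup without depending on list order. A sketch, worth verifying against the current docs:)

- type: python
  # assumption: transform runs first, so `output` inside `value` refers to the matched person
  transform: next(p for p in output['pessoas_analisadas'] if p['nome'] == 'NAME OF PERSON')
  value: >-
    'Outorgar Poderes' in output['restricoes_valor']
    and '12' in output['restricoes_valor']['Outorgar Poderes']['valor_alcada']
    and 'meses' in output['restricoes_valor']['Outorgar Poderes']['valor_alcada']
  metric: alcada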
Donato Azevedo
06/05/2025, 5:05 PM
def get_assert(output: dict[str, any], context) -> bool | float | GradingResult:
    return {
        'pass': True,
        'score': 0.11,
        'reason': 'Looks good to me',
        'named_scores': {
            'answer_similarity': 0.12,
            'answer_correctness': 0.13,
            'answer_relevancy': 0.14,
        }
    }
I was expecting to see the three answer_* named metrics appearing up top
https://cdn.discordapp.com/attachments/1380231112252723332/1380231112629944341/Screenshot_2025-06-05_at_14.04.57.png?ex=68431fe4&is=6841ce64&hm=1e59574abdb29e00278b719905d74efcea8620d330947203cbe31d0c4fc9301b&
Bryson
06/06/2025, 7:04 PM
[chat.js:161] completions API response: {"id":"chatcmpl-BfWK1RSvr3LAckI7hoUHM9dYo0Zce","object":"chat.completion","created":1749235393,"model":"gpt-4o-mini-2024-07-18","choices":[{"index":0,"message":{}
<anonymous_script>:430
SyntaxError: Expected ',' or '}' after property value in JSON at position 1966 (line 430 column 1)
at JSON.parse (<anonymous>)
at encodeMathPrompt (/opt/homebrew/Cellar/promptfoo/0.114.5/libexec/lib/node_modules/promptfoo/dist/src/redteam/strategies/mathPrompt.js:95:32)
at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
at async addMathPrompt (/opt/homebrew/Cellar/promptfoo/0.114.5/libexec/lib/node_modules/promptfoo/dist/src/redteam/strategies/mathPrompt.js:122:33)
at async action (/opt/homebrew/Cellar/promptfoo/0.114.5/libexec/lib/node_modules/promptfoo/dist/src/redteam/strategies/index.js:195:34)
at async applyStrategies (/opt/homebrew/Cellar/promptfoo/0.114.5/libexec/lib/node_modules/promptfoo/dist/src/redteam/index.js:241:35)
at async synthesize (/opt/homebrew/Cellar/promptfoo/0.114.5/libexec/lib/node_modules/promptfoo/dist/src/redteam/index.js:678:85)
at async doGenerateRedteam (/opt/homebrew/Cellar/promptfoo/0.114.5/libexec/lib/node_modules/promptfoo/dist/src/redteam/commands/generate.js:243:88)
Is this a Promptfoo bug by chance? Or is it possible I'm doing something wrong? Happy to DM over my promptfooconfig.yaml if helpful