beautifulpython
07/28/2025, 9:49 PM
Saraswathi Rekhala
07/29/2025, 5:44 AM
bakar
07/29/2025, 10:59 PM
Waz
07/30/2025, 10:22 AM
CYH
07/30/2025, 9:54 PM
Myron
07/30/2025, 11:25 PM
Saraswathi Rekhala
08/04/2025, 5:19 AM
Rohit Jalisatgi - Palosade
08/04/2025, 4:32 PM
CYH
08/05/2025, 10:18 PM
When running `promptfoo eval -c config.yaml --output result.xml`, I got the following error message. Is this a known issue?
/opt/homebrew/Cellar/promptfoo/0.117.4/libexec/lib/node_modules/promptfoo/node_modules/fast-xml-parser/src/xmlbuilder/json2xml.js:268
textValue = textValue.replace(entity.regex, entity.val);
^
TypeError: textValue.replace is not a function
at Builder.replaceEntitiesValue (/opt/homebrew/Cellar/promptfoo/0.117.4/libexec/lib/node_modules/promptfoo/node_modules/fast-xml-parser/src/xmlbuilder/json2xml.js:268:29)
at Builder.buildTextValNode (/opt/homebrew/Cellar/promptfoo/0.117.4/libexec/lib/node_modules/promptfoo/node_modules/fast-xml-parser/src/xmlbuilder/json2xml.js:252:22)
at Builder.j2x (/opt/homebrew/Cellar/promptfoo/0.117.4/libexec/lib/node_modules/promptfoo/node_modules/fast-xml-parser/src/xmlbuilder/json2xml.js:116:23)
at Builder.processTextOrObjNode (/opt/homebrew/Cellar/promptfoo/0.117.4/libexec/lib/node_modules/promptfoo/node_modules/fast-xml-parser/src/xmlbuilder/json2xml.js:181:23)
at Builder.j2x (/opt/homebrew/Cellar/promptfoo/0.117.4/libexec/lib/node_modules/promptfoo/node_modules/fast-xml-parser/src/xmlbuilder/json2xml.js:140:32)
at Builder.processTextOrObjNode (/opt/homebrew/Cellar/promptfoo/0.117.4/libexec/lib/node_modules/promptfoo/node_modules/fast-xml-parser/src/xmlbuilder/json2xml.js:181:23)
at Builder.j2x (/opt/homebrew/Cellar/promptfoo/0.117.4/libexec/lib/node_modules/promptfoo/node_modules/fast-xml-parser/src/xmlbuilder/json2xml.js:165:21)
at Builder.processTextOrObjNode (/opt/homebrew/Cellar/promptfoo/0.117.4/libexec/lib/node_modules/promptfoo/node_modules/fast-xml-parser/src/xmlbuilder/json2xml.js:181:23)
at Builder.j2x
... truncating the rest because message is too long
Node.js v24.4.1
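The crash happens inside fast-xml-parser while serializing a non-string value to XML, so the eval itself may be fine and only the XML writer failing. A hedged workaround sketch, assuming the results themselves are what matters: request one of promptfoo's other output formats (JSON is supported) so the XML serialization path is never taken.

```shell
# Workaround sketch: same eval, different output writer.
# Writing JSON skips the fast-xml-parser serialization path entirely.
promptfoo eval -c config.yaml --output result.json
```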
Rohit Jalisatgi - Palosade
08/06/2025, 12:13 AM
GuillermoB
08/06/2025, 9:12 AM
Jason
08/07/2025, 2:07 AM
BrianGenisio
08/08/2025, 5:18 PM
prompts:
  - role: system
    content: file://../../system-1.md
  - role: system
    content: file://../../system-2.md
  - role: user
    content: file://../../user.md
tests:
  - file://./test_*.yaml
But that's not working for me.
> Invalid configuration file /Users/me/code/evaluations/test1/promptfooconfig.yaml:
> Validation error: Expected string, received array at "prompts", or Expected string, received object at "prompts[0]", or Required at "prompts[0].id", or Required at "prompts[0].raw"; Expected string, received object at "prompts[1]", or Required at "prompts[1].id", or Required at "prompts[1].raw", or Expected object, received array at "prompts"
> Failed to validate configuration: Invalid prompt object: {"role":"system","label":"system","content":"file://../../system.md"}
What am I doing wrong? How do I define my prompt chain with two system prompts and one user prompt from files?
ahmedelbaqary.
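The validation error above says each `prompts` entry must be a plain string (or an object with `id`/`raw`), not a bare role/content message. A hedged sketch of one pattern recent promptfoo versions support for chat-style prompts: move the message array into a JSON prompt file and reference it as a single `file://` string (the file name `chat_prompt.json` here is illustrative):

```yaml
# promptfooconfig.yaml -- each prompts entry is one string;
# the role/content messages live in a separate JSON prompt file.
prompts:
  - file://./chat_prompt.json
tests:
  - file://./test_*.yaml
```

where `chat_prompt.json` contains an OpenAI-style message array (promptfoo renders `{{vars}}` inside the content strings):

```json
[
  { "role": "system", "content": "First system instruction here." },
  { "role": "system", "content": "Second system instruction here." },
  { "role": "user", "content": "{{query}}" }
]
```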
08/11/2025, 11:04 AM
Setting `reasoning_effort: minimal` gives around 10000~15000 ms on average, and the cost is always 0. Does anyone have an explanation for this? I'm using the Node.js package, and here is the provider config being sent:
"providers": [
  {
    "id": "openai:responses:gpt-5-nano",
    "config": {
      "max_completion_tokens": 4000,
      "max_output_tokens": 4000,
      "apikey": "api-key-here",
      "tools": [],
      "tool_choice": "auto",
      "reasoning": {"effort": "minimal"}
    }
  }
],
BrianGenisio
08/11/2025, 7:30 PM
`promptfoo view`
I'd like to be able to control this as something other than 15500. Is there a good way?
Waz
08/11/2025, 10:57 PM
the
08/12/2025, 11:45 AM
grj373
08/13/2025, 1:43 PM
Bryson
08/13/2025, 8:57 PM
AWilborn
08/14/2025, 5:06 PM
grj373
08/18/2025, 9:12 AM
Josema Blanco
08/18/2025, 12:21 PM
CYH
08/18/2025, 8:44 PM
DAK
08/18/2025, 9:52 PM
`--max-concurrency`: the command line reports it as running concurrently when it hasn't been:
> Duration: 1m 46s (concurrency: 4)
> Successes: 9
> Failures: 0
> Errors: 0
> Pass Rate: 100.00%
I noticed a comment in this issue https://github.com/promptfoo/promptfoo/issues/1280#issuecomment-2251765379 - "We recently refactored evaluations to do providers 1 at a time" - and I'm hoping this isn't a permanent loss of functionality. EDIT - (Just noticed the date on that is from last year. Probably not related, but I couldn't find any other relevant mention.) Is there a way I can re-enable concurrent evals? I'm running against my own local server for testing my multi-agent service, and the concurrent configuration allowed me to validate more complex agentic tasks. Maybe HTTP Provider is no longer the best way to handle that?
Waz
08/19/2025, 6:01 PM
errors: [
  {
    code: 'invalid_type',
    expected: 'string',
    received: 'object',
    path: [ 'url' ],
    message: 'Expected string, received object'
  }
]
Here's my provider:
```yaml
providers:
  - id: https
    label: Base model
    config:
      url: {{ env.PROVIDER_URL }}
      maxRetries: 3
      method: POST
      headers:
        'Content-Type': 'application/json'
        'Authorization': 'Bearer {{ env.GOOGLE_ID_TOKEN }}'
      body:
        agent:
          query: '{{query}}'
      transformResponse: |
        {
          output: json.finalMessageContent,
          tokenUsage: {
            total: json.tokenUsage?.totalTokens || 0,
            prompt: json.tokenUsage?.inputTokens || 0,
            completion: json.tokenUsage?.outputTokens || 0,
            cached: json.tokenUsage?.cacheReadTokens || 0,
            numRequests: json.tokenUsage?.llmCalls || 0
          },
          cost: json.cost
        }
```
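The Zod error above ("Expected string, received object" at `url`) is consistent with how YAML parses the unquoted `{{ env.PROVIDER_URL }}`: double braces are flow-mapping syntax, so the value never reaches the schema as a string. A small PyYAML sketch of the same effect (the fix being simply to quote the template, as the `Authorization` header above already does):

```python
import yaml  # PyYAML

# Unquoted double braces are valid YAML flow-mapping syntax, so `url` is
# not parsed as a plain string. PyYAML rejects the resulting mapping-used-
# as-key outright; JavaScript YAML parsers typically yield an object here,
# which is what the Zod validator then complains about.
try:
    yaml.safe_load("url: {{ env.PROVIDER_URL }}")
    unquoted_is_string = True
except yaml.YAMLError:
    unquoted_is_string = False
print("unquoted parses cleanly as a string:", unquoted_is_string)

# Quoting keeps the template a literal string until it is rendered later.
cfg = yaml.safe_load("url: '{{ env.PROVIDER_URL }}'")
print("quoted value:", cfg["url"])
```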
Waz
08/20/2025, 5:34 PM
`file://path` does not work in my http provider:
```yaml
body:
  query: '{{prompt}}'
  date: '2025-06-03T22:01:13.797Z'
  transactions: file://./test_data/transactions.csv
```
This works in the normal evals, but not when red teaming, it seems?
glutensnake
08/20/2025, 7:57 PM
Elias_M2M
08/21/2025, 1:46 PM
Josema Blanco
08/21/2025, 3:18 PM
Suraj
08/22/2025, 10:38 AM