feat (ai/core): Proposal for generate code API with tools #4196
base: main
Conversation
This is pretty cool. However, there is one big issue: it includes a fairly large prompt. The AI SDK tries to not include any prompts whenever possible, because they end up being provider and model dependent (there is some minimal prompting in JSON generation but that is all). Do you see ways to do this without prompting? Or could there be alternative approaches that minimize it?
packages/ai/core/tool/tool.ts
Outdated
@@ -56,21 +63,16 @@ If not provided, the tool will not be executed automatically.
   @args is the input of the tool call.
   @options.abortSignal is a signal that can be used to abort the tool call.
   */
-  execute?: (
+  execute: (
Removing the optionality here will break important functionality, namely tools without execute.
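For context, this is the pattern that would break: a minimal sketch of a tool without `execute`, assuming the current `tool()` helper from `ai` and `zod`.

```ts
import { tool } from 'ai';
import { z } from 'zod';

// A client-side tool: because `execute` is omitted, the SDK does not run
// the tool automatically and instead forwards the tool call to the app
// (e.g. for confirmation UIs or client-side execution).
const askForConfirmation = tool({
  description: 'Ask the user to confirm an action before it is performed.',
  parameters: z.object({
    message: z.string().describe('The question to show the user.'),
  }),
  // no `execute` on purpose
});
```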
I have three alternative ideas for this issue:

- Create an isolated tool under a different name (e.g. `etool`) specifically for `generateCode`.
- Add one more option to the tool choice, i.e. `toolChoice: "none" | "auto" | "required" | "code"` (this `code` choice makes `execute` strictly required); see the sketch below.
- Throw an error if the `execute` param is undefined (also sketched below).

Give me feedback and I'll rework this.
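A rough sketch of what the second and third ideas could look like; the `'code'` tool choice value and the runtime guard are hypothetical, not part of the current SDK.

```ts
// Hypothetical: extend the tool choice union with a 'code' mode (idea 2).
type CodeToolChoice = 'none' | 'auto' | 'required' | 'code';

// Hypothetical: reject tools without execute at runtime (idea 3).
function assertToolsAreExecutable(
  tools: Record<string, { execute?: (...args: unknown[]) => unknown }>,
) {
  for (const [name, t] of Object.entries(tools)) {
    if (t.execute == null) {
      throw new Error(
        `Tool "${name}" must define execute() to be used with generateCode.`,
      );
    }
  }
}
```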
packages/ai/core/tool/tool.ts
Outdated
@@ -94,28 +96,28 @@ The arguments for configuring the tool. Must match the expected arguments define
   Helper function for inferring the execute args of a tool.
   */
   // Note: special type inference is needed for the execute function args to make sure they are inferred correctly.
-  export function tool<PARAMETERS extends Parameters, RESULT>(
+  export function tool<PARAMETERS extends Parameters, RESULT extends Parameters>(
`RESULT extends Parameters` seems strange. Is this intentional?
`generateCode` requires a `returns` zod schema so the LLM can understand what the output of each tool is. That helps the LLM create variables to store a tool-calling result and pass it to the next tool. The reason I used the `Parameters` type is to strictly type a tool's `execute` according to its `returns` zod schema.

I could rework this as:
type Returns = Parameters; // same properties

export function tool<PARAMETERS extends Parameters, RESULT, RETURNS extends Returns>(
  tool: CoreTool<PARAMETERS, RETURNS> & {
    execute: (
      args: inferParameters<PARAMETERS>,
      options: ToolExecutionOptions,
    ) => PromiseLike<inferParameters<RETURNS>>;
  },
);
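For illustration, a tool written against that reworked signature might look like the sketch below; the `returns` field is part of this proposal, not the published `tool()` API.

```ts
import { tool } from 'ai'; // assumes the proposed overload with `returns`
import { z } from 'zod';

// Hypothetical usage: the execute result is typed by the `returns` schema.
const getBalance = tool({
  description: 'Get the balance of an account.',
  parameters: z.object({ accountId: z.string() }),
  returns: z.object({ balance: z.number(), currency: z.string() }),
  execute: async ({ accountId }) => {
    // Look up the balance for accountId here; hard-coded for the sketch.
    return { balance: 1250, currency: 'USD' };
  },
});
```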
Proposal for generateCode API
Imagine the LLM being able to write JavaScript code with custom logic, with the help of a limited set of tools, inside a safe `eval()`.
Usage
Let me show you an example of how to build an AI-powered banking app.
Define a set of tools
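The original tool definitions aren't reproduced in this thread, so here is a hedged sketch of what a banking tool set could look like, assuming zod schemas and the `returns` field proposed in this PR (the tool names `getAccount` and `transfer` are illustrative).

```ts
import { tool } from 'ai';
import { z } from 'zod';

// Illustrative banking tools; names and schemas are assumptions, and the
// `returns` field is the addition proposed in this PR.
const tools = {
  getAccount: tool({
    description: 'Look up an account by customer name.',
    parameters: z.object({ customerName: z.string() }),
    returns: z.object({ accountId: z.string(), balance: z.number() }),
    execute: async ({ customerName }) => ({
      accountId: `acc_${customerName.toLowerCase()}`,
      balance: 1250,
    }),
  }),
  transfer: tool({
    description: 'Transfer an amount between two accounts.',
    parameters: z.object({
      fromAccountId: z.string(),
      toAccountId: z.string(),
      amount: z.number(),
    }),
    returns: z.object({ success: z.boolean() }),
    execute: async ({ amount }) => ({ success: amount > 0 }),
  }),
};
```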
Fun part begins here
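And a sketch of how the proposed `generateCode()` call could look, reusing the tools above; `result.execute()`, `result.code`, and `result.schema` come from this PR's description, while the import path, model, and prompt are assumptions.

```ts
import { openai } from '@ai-sdk/openai';
// Hypothetical import; generateCode is the API proposed in this PR.
import { generateCode } from 'ai';

const result = await generateCode({
  model: openai('gpt-4o'),
  tools,
  prompt:
    'Check the balance of customer "Alice" and, if it is above 1000, ' +
    'transfer 100 to customer "Bob".',
});

const output = await result.execute(); // run the generated code with the tools
console.log(result.code); // the JavaScript written by the LLM
console.log(result.schema); // the JSON schema of the output (for generative UI)
console.log(output);
```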
- `await result.execute()` — the output after execution
- `result.code` — the code written by the LLM
- `result.schema` — the JSON schema written by the LLM (useful for generative UI)

Instead of multi-step toolResults, the LLM can now write logic along with the tools provided by the developer. The LLM also can't execute malicious code, because of a simple safety technique I implemented. Therefore, the LLM is restricted to invoking only the list of functions we provide to it.
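To make that restriction concrete, here is a hypothetical illustration (not actual model output) of what `result.code` could contain, calling only the tools sketched above.

```ts
// Hypothetical content of result.code: plain control flow plus calls to the
// developer-provided tools (getAccount, transfer) and nothing else.
// It runs inside the safe eval wrapper, which exposes only those tools.
const alice = await getAccount({ customerName: 'Alice' });
if (alice.balance > 1000) {
  const bob = await getAccount({ customerName: 'Bob' });
  await transfer({
    fromAccountId: alice.accountId,
    toAccountId: bob.accountId,
    amount: 100,
  });
}
```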
The `generateCode()` API is a powerful wrapper around `generateText()`.