CHANGELOG.md (4 changes: 2 additions & 2 deletions)
@@ -45,12 +45,12 @@
 * Initial release of the Agent Development Kit (ADK).
 * Multi-agent, agent-as-workflow, and custom agent support
 * Tool authentication support
-* Rich tool support, e.g. bult-in tools, google-cloud tools, third-party tools, and MCP tools
+* Rich tool support, e.g. built-in tools, google-cloud tools, third-party tools, and MCP tools
 * Rich callback support
 * Built-in code execution capability
 * Asynchronous runtime and execution
 * Session, and memory support
 * Built-in evaluation support
-* Development UI that makes local devlopment easy
+* Development UI that makes local development easy
 * Deploy to Google Cloud Run, Agent Engine
 * (Experimental) Live(Bidi) auido/video agent support and Compositional Function Calling(CFC) support
pylintrc (2 changes: 1 addition & 1 deletion)
@@ -45,7 +45,7 @@ confidence=
 # can either give multiple identifiers separated by comma (,) or put this
 # option multiple times (only on the command line, not in the configuration
 # file where it should appear only once).You can also use "--disable=all" to
-# disable everything first and then reenable specific checks. For example, if
+# disable everything first and then re-enable specific checks. For example, if
 # you want to run only the similarities checker, you can use "--disable=all
 # --enable=similarities". If you want to run only the classes checker, but have
 # no Warning level messages displayed, use"--disable=all --enable=classes
src/google/adk/cli/cli_eval.py (2 changes: 1 addition & 1 deletion)
@@ -256,7 +256,7 @@ def run_evals(
   )

   if final_eval_status == EvalStatus.PASSED:
-    result = "✅ Passsed"
+    result = "✅ Passed"
   else:
     result = "❌ Failed"
src/google/adk/evaluation/agent_evaluator.py (4 changes: 2 additions & 2 deletions)
@@ -55,7 +55,7 @@ def load_json(file_path: str) -> Union[Dict, List]:


 class AgentEvaluator:
-  """An evaluator for Agents, mainly intented for helping with test cases."""
+  """An evaluator for Agents, mainly intended for helping with test cases."""

   @staticmethod
   def find_config_for_test_file(test_file: str):
@@ -91,7 +91,7 @@ def evaluate(
         look for 'root_agent' in the loaded module.
       eval_dataset: The eval data set. This can be either a string representing
         full path to the file containing eval dataset, or a directory that is
-        recusively explored for all files that have a `.test.json` suffix.
+        recursively explored for all files that have a `.test.json` suffix.
       num_runs: Number of times all entries in the eval dataset should be
         assessed.
       agent_name: The name of the agent.
src/google/adk/evaluation/response_evaluator.py (4 changes: 2 additions & 2 deletions)
@@ -35,7 +35,7 @@ def evaluate(
     Args:
       raw_eval_dataset: The dataset that will be evaluated.
      evaluation_criteria: The evaluation criteria to be used. This method
-        support two criterias, `response_evaluation_score` and
+        support two criteria, `response_evaluation_score` and
        `response_match_score`.
      print_detailed_results: Prints detailed results on the console. This is
        usually helpful during debugging.
@@ -56,7 +56,7 @@ def evaluate(
        Value range: [0, 5], where 0 means that the agent's response is not
        coherent, while 5 means it is . High values are good.
      A note on raw_eval_dataset:
-        The dataset should be a list session, where each sesssion is represented
+        The dataset should be a list session, where each session is represented
        as a list of interaction that need evaluation. Each evaluation is
        represented as a dictionary that is expected to have values for the
        following keys:
src/google/adk/evaluation/trajectory_evaluator.py (9 changes: 4 additions & 5 deletions)
@@ -31,10 +31,9 @@ def evaluate(
   ):
     r"""Returns the mean tool use accuracy of the eval dataset.

-    Tool use accuracy is calculated by comparing the expected and actuall tool
-    use trajectories. An exact match scores a 1, 0 otherwise. The final number
-    is an
-    average of these individual scores.
+    Tool use accuracy is calculated by comparing the expected and the actual
+    tool use trajectories. An exact match scores a 1, 0 otherwise. The final
+    number is an average of these individual scores.

     Value range: [0, 1], where 0 is means none of the too use entries aligned,
     and 1 would mean all of them aligned. Higher value is good.
@@ -45,7 +44,7 @@ def evaluate(
       usually helpful during debugging.

     A note on eval_dataset:
-      The dataset should be a list session, where each sesssion is represented
+      The dataset should be a list session, where each session is represented
       as a list of interaction that need evaluation. Each evaluation is
       represented as a dictionary that is expected to have values for the
       following keys:
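The metric in the trajectory_evaluator docstring above is fully specified: score each interaction 1 for an exact tool-use trajectory match and 0 otherwise, then average over the dataset. A minimal sketch of that calculation; `mean_tool_use_accuracy` is a hypothetical standalone helper, not the ADK implementation:

```python
def mean_tool_use_accuracy(
    expected: list[list[str]], actual: list[list[str]]
) -> float:
  """Average of per-interaction exact-match scores, as the docstring
  describes: an exact trajectory match scores 1, anything else 0."""
  scores = [1.0 if e == a else 0.0 for e, a in zip(expected, actual)]
  return sum(scores) / len(scores) if scores else 0.0


# One of the two trajectories matches exactly, so the mean is 0.5.
print(mean_tool_use_accuracy(
    [["search", "summarize"], ["lookup"]],
    [["search", "summarize"], ["fetch"]],
))  # → 0.5
```

Because the match is exact, a trajectory with the right tools in the wrong order still scores 0, which is consistent with the [0, 1] value range noted in the docstring.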
src/google/adk/flows/llm_flows/agent_transfer.py (2 changes: 1 addition & 1 deletion)
@@ -94,7 +94,7 @@ def _build_target_agents_instructions(

 If another agent is better for answering the question according to its
 description, call `{_TRANSFER_TO_AGENT_FUNCTION_NAME}` function to transfer the
-question to that agent. When transfering, do not generate any text other than
+question to that agent. When transferring, do not generate any text other than
 the function call.
 """

src/google/adk/flows/llm_flows/base_llm_flow.py (2 changes: 1 addition & 1 deletion)
@@ -115,7 +115,7 @@ async def run_live(
         yield event
         # send back the function response
         if event.get_function_responses():
-          logger.debug('Sending back last function resonse event: %s', event)
+          logger.debug('Sending back last function response event: %s', event)
           invocation_context.live_request_queue.send_content(event.content)
         if (
             event.content
src/google/adk/flows/llm_flows/contents.py (2 changes: 1 addition & 1 deletion)
@@ -111,7 +111,7 @@ def _rearrange_events_for_latest_function_response(
   """Rearrange the events for the latest function_response.

   If the latest function_response is for an async function_call, all events
-  bewteen the initial function_call and the latest function_response will be
+  between the initial function_call and the latest function_response will be
   removed.

   Args:
src/google/adk/flows/llm_flows/instructions.py (4 changes: 2 additions & 2 deletions)
@@ -52,15 +52,15 @@ async def run_async(
     # Appends global instructions if set.
     if (
         isinstance(root_agent, LlmAgent) and root_agent.global_instruction
-    ):  # not emtpy str
+    ):  # not empty str
       raw_si = root_agent.canonical_global_instruction(
           ReadonlyContext(invocation_context)
       )
       si = _populate_values(raw_si, invocation_context)
       llm_request.append_instructions([si])

     # Appends agent instructions if set.
-    if agent.instruction:  # not emtpy str
+    if agent.instruction:  # not empty str
       raw_si = agent.canonical_instruction(ReadonlyContext(invocation_context))
       si = _populate_values(raw_si, invocation_context)
       llm_request.append_instructions([si])
src/google/adk/models/gemini_llm_connection.py (4 changes: 2 additions & 2 deletions)
@@ -152,7 +152,7 @@ async def receive(self) -> AsyncGenerator[LlmResponse, None]:
       ):
         # TODO: Right now, we just support output_transcription without
         # changing interface and data protocol. Later, we can consider to
-        # support output_transcription as a separete field in LlmResponse.
+        # support output_transcription as a separate field in LlmResponse.

         # Transcription is always considered as partial event
         # We rely on other control signals to determine when to yield the
@@ -179,7 +179,7 @@ async def receive(self) -> AsyncGenerator[LlmResponse, None]:
        # in case of empty content or parts, we sill surface it
        # in case it's an interrupted message, we merge the previous partial
        # text. Other we don't merge. because content can be none when model
-       # safty threshold is triggered
+       # safety threshold is triggered
        if message.server_content.interrupted and text:
          yield self.__build_full_text_response(text)
          text = ''
src/google/adk/sessions/database_session_service.py (2 changes: 1 addition & 1 deletion)
@@ -217,7 +217,7 @@ def __init__(self, db_url: str):
     """
     # 1. Create DB engine for db connection
     # 2. Create all tables based on schema
-    # 3. Initialize all properies
+    # 3. Initialize all properties

     try:
       db_engine = create_engine(db_url)
src/google/adk/sessions/state.py (2 changes: 1 addition & 1 deletion)
@@ -26,7 +26,7 @@ def __init__(self, value: dict[str, Any], delta: dict[str, Any]):
     """
     Args:
       value: The current value of the state dict.
-      delta: The delta change to the current value that hasn't been commited.
+      delta: The delta change to the current value that hasn't been committed.
     """
     self._value = value
     self._delta = delta
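The `value`/`delta` pair documented in the state.py hunk above is a staged-write pattern: committed state and pending, uncommitted changes live in separate dicts. A minimal sketch of the idea, deliberately simplified and not the actual ADK `State` class (read/write semantics here are illustrative):

```python
from typing import Any


class State:
  """Dict-like view where uncommitted changes are staged in a delta."""

  def __init__(self, value: dict[str, Any], delta: dict[str, Any]):
    self._value = value  # committed state
    self._delta = delta  # pending changes, not yet committed

  def __getitem__(self, key: str) -> Any:
    # Pending writes shadow committed values.
    if key in self._delta:
      return self._delta[key]
    return self._value[key]

  def __setitem__(self, key: str, value: Any) -> None:
    # Writes go to the delta; the committed dict is left untouched.
    self._delta[key] = value


state = State({"user": "ada"}, {})
state["user"] = "grace"  # staged in the delta only
print(state["user"])     # reads the pending value
```

Keeping the delta separate lets a session service persist only the changed keys when the delta is eventually committed, and discard them cleanly if it never is.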
src/google/adk/tools/load_artifacts_tool.py (2 changes: 1 addition & 1 deletion)
@@ -89,7 +89,7 @@ def _append_artifacts_to_llm_request(
 than the function call.
 """])

-  # Attache the content of the artifacts if the model requests them.
+  # Attach the content of the artifacts if the model requests them.
   # This only adds the content to the model request, instead of the session.
   if llm_request.contents and llm_request.contents[-1].parts:
     function_response = llm_request.contents[-1].parts[0].function_response
@@ -66,7 +66,7 @@ def generate_auth_token(

   Returns:
     An AuthCredential object containing the HTTP bearer access token. If the
-    HTTO bearer token cannot be generated, return the origianl credential
+    HTTP bearer token cannot be generated, return the original credential.

   if "access_token" not in auth_credential.oauth2.token: