
Python Dataflow for Group Manager #4926

Merged

ervteng merged 212 commits into main from develop-agentprocessor-teammanager on Mar 4, 2021

Conversation

ervteng (Contributor) commented Feb 9, 2021

Proposed change(s)

AgentProcessor, Trajectory, and Buffer changes to get teammate data from the AgentProcessor into the trainer. This does not add a new trainer; the existing PPO/SAC trainers will ignore the new data. Of note:

  • AgentBufferField can now hold either an np.ndarray or a List[np.ndarray] (the datatype is designated BufferEntry). This lets group elements be lists of references to other np.ndarrays, without duplicating data.
  • Added a number of new BufferKeys for group observations, actions, and rewards.
  • AgentAction has new utility methods to grab group actions and to convert actions to a flat representation for encoding.
  • The from_buffer functions for both AgentAction and group observations pad when the number of agents changes. Group observations pad with NaNs, which are used to generate attention-mechanism masks and are then replaced with 0s before training. AgentAction pads with 0 rather than NaN, because the conversion to an int tensor for discrete actions does not allow NaNs. (See the sketch after this list.)
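
For concreteness, here is a minimal sketch of the buffer typing and the NaN-pad-then-mask scheme described above (the names here are illustrative; the actual code lives in AgentBufferField and the attention utilities):

    from typing import List, Union
    import numpy as np

    # BufferEntry as described above: either a single array or a list of
    # references to per-agent arrays (no data duplication).
    BufferEntry = Union[np.ndarray, List[np.ndarray]]

    # Group obs are padded with NaN where an agent is missing. The NaNs are
    # used to build an attention mask, then replaced with 0s before training.
    padded = np.array([[0.1, 0.2], [np.nan, np.nan]], dtype=np.float32)
    attn_mask = ~np.isnan(padded).any(axis=-1)  # True where a real agent exists
    padded = np.nan_to_num(padded, nan=0.0)     # zero out padding for training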

Types of change(s)

  • Bug fix
  • New feature
  • Code refactor
  • Breaking change
  • Documentation update
  • Other (please describe)

Checklist

  • Added tests that prove my fix is effective or that my feature works
  • Updated the changelog (if applicable)
  • Updated the documentation (if applicable)
  • Updated the migration guide (if applicable)

Other comments

ervteng changed the title from "AgentProcessor changes for TeamManager" to "Python Dataflow for Group Manager" on Feb 24, 2021
@@ -49,6 +50,16 @@ def __init__(
"""
self.experience_buffers: Dict[str, List[AgentExperience]] = defaultdict(list)
self.last_step_result: Dict[str, Tuple[DecisionStep, int]] = {}
# current_group_obs is used to collect the last seen obs of all the agents in the same group,
Contributor:

Suggested change:
- # current_group_obs is used to collect the last seen obs of all the agents in the same group,
+ # current_group_obs is used to collect the current obs of all the agents in the same group,

# Clear the last seen group obs when agents die.
self._clear_group_obs(global_id)

# Clean the last experience dictionary for terminal steps
Contributor:

legacy code?

global_agent_id, None
)
if stored_decision_step is not None and stored_take_action_outputs is not None:
if step.group_id > 0:
Contributor:

This relies on the implicit assumption that 0 means no group. I'd add a comment or put it into a separate function.

ervteng (Author), Feb 24, 2021:

Added a comment. We could add a utility method too (step.has_agent_group?), but I didn't want to add anything to the API.

Contributor:

Yeah, I was thinking more like a utility method.

ervteng (Author):

Ah yeah, that's not a bad idea. Let's do a separate PR to add this, and add the group ID to the documentation for the Python API.
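
For illustration, the utility discussed here might look roughly like this (hypothetical; it was deferred to a separate PR):

    def has_agent_group(step) -> bool:
        # Relies on the convention that group_id == 0 means "no group".
        return step.group_id > 0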

self.current_group_obs: Dict[str, Dict[str, List[np.ndarray]]] = defaultdict(
lambda: defaultdict(list)
)
# last_group_obs is used to collect the last seen obs of all the agents in the same group,
Contributor:

Suggested change:
- # last_group_obs is used to collect the last seen obs of all the agents in the same group,
+ # group_status is used to collect the last seen obs of all the agents in the same group,


# Assemble teammate_obs. If none saved, then it will be an empty list.
group_statuses = []
for _id, _obs in self.group_status[global_group_id].items():
Contributor:

Rename the variable _obs to _mate_status?

next_obs = step.obs
next_group_obs = []
for _id, _exp in self.current_group_obs[global_group_id].items():
Contributor:

_exp -> _obs

"""
action_shape = None
for _action in agent_buffer_field:
if _action:
Contributor:

What possible values can _action take? Is this check really needed?

ervteng (Author):

_action could be an empty list if there are no group actions at that step. This check goes through and finds the first non-empty list. Added a comment.

if _action:
action_shape = _action[0].shape
break
# If there were no critic obs at all
Contributor:

This comment is confusing. There is no mention of critics in either the code or the variable names of this static function. I think this method is meant to be used in a very specific context, so it should be described as such.

ervteng (Author):

This was left over from when group_obs were called critic_obs. Updated.

agent_buffer_field: AgentBufferField, dtype: torch.dtype = torch.float32
) -> List[torch.Tensor]:
"""
Pad actions and convert to tensor. Pad the data with 0s where there is no
Contributor:

Why does this method only pad actions? Can it be made more general?

ervteng (Author):

After some thought: I can move it to be an AgentBufferField method, but that will make the buffer no longer torch-agnostic. If we're OK with that, it's a decent place for it to live.

ervteng (Author):

I think I can do it with just numpy arrays.
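
As a rough numpy-only sketch of the padding logic discussed in this thread (illustrative names; the PR ultimately moved this onto AgentBufferField):

    import numpy as np

    def padded_to_batch(entries, pad_value=0.0):
        # entries: one list per step, each holding 0..N per-agent arrays; an
        # entry can be an empty list when there are no group agents at that step.
        shape = None
        for entry in entries:
            if entry:  # find the first non-empty list to get the shape
                shape = entry[0].shape
                break
        if shape is None:
            return []  # no group data at any step
        max_agents = max(len(entry) for entry in entries)
        # Build one batch array per agent slot, padding the steps where that
        # agent is missing.
        return [
            np.stack(
                [
                    entry[i] if i < len(entry) else np.full(shape, pad_value)
                    for entry in entries
                ]
            )
            for i in range(max_agents)
        ]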

buff: AgentBuffer, cont_action_key: BufferKey, disc_action_key: BufferKey
) -> List["AgentAction"]:
continuous_tensors: List[torch.Tensor] = []
discrete_tensors: List[torch.Tensor] = [] # type: ignore
Contributor:

Why type: ignore? Is there an issue here? Can you comment on why we need to ignore the type here?

ervteng (Author):

Typo - removed

return new_list

@staticmethod
def _group_from_buffer(
Contributor:

Please add code comments. This method returns a list of AgentAction, and I think that should be reflected in the name.

ervteng (Author):

Renamed and added a comment.

buff, BufferKey.GROUP_NEXT_CONT_ACTION, BufferKey.GROUP_NEXT_DISC_ACTION
)

def to_flat(self, discrete_branches: List[int]) -> torch.Tensor:
Contributor:

This is a public method, right? Please add a comment on what it does.
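
A rough sketch of what to_flat might do, based on the PR description's "convert actions to flat for encoding" (the ModelUtils.actions_to_onehot helper is assumed from the existing codebase; exact details may differ):

    def to_flat(self, discrete_branches):
        # Concatenate continuous actions with the one-hot encodings of each
        # discrete branch, yielding a single flat tensor per agent.
        discrete_oh = torch.cat(
            ModelUtils.actions_to_onehot(self.discrete_tensor, discrete_branches),
            dim=1,
        )
        return torch.cat([self.continuous_tensor, discrete_oh], dim=1)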

@@ -11,6 +13,17 @@
from mlagents.trainers.torch.action_log_probs import LogProbsTuple


class GroupmateStatus(NamedTuple):
Contributor:

What could be alternative names for this class? I don't think "Groupmate" is needed; this doesn't have to be for a teammate, right? That's how we use it here, but it could be used as a general AgentStatus. So I would name it AgentStatus or something similar.

ervteng (Author):

This might be a good candidate to combine with the DemonstrationExperience. There seems to be a common need for an AgentExperience that doesn't contain all of the policy-related information (e.g. log_probs). I can think of three options:

  • Have a base AgentStatus with the minimal information, and inherit the larger AgentExperience from that.
  • Have different classes for each of the use-cases (which is what we have now with these two PRs)
  • Have a single large AgentExperience with Optional fields

cc: @chriselion

ervteng (Author):

Renamed it to AgentStatus for now.
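
Based on the fields visible in this diff (action and done appear below; obs and reward are inferred from the group obs/reward plumbing this PR adds), the renamed class presumably looks roughly like:

    from typing import List, NamedTuple
    import numpy as np
    from mlagents_envs.base_env import ActionTuple

    class AgentStatus(NamedTuple):
        # Minimal per-agent info shared with groupmates; no policy outputs
        # such as log_probs.
        obs: List[np.ndarray]
        reward: float
        action: ActionTuple
        done: bool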

"""
return ObservationKeyPrefix.NEXT_GROUP_OBSERVATION, index

@staticmethod
Contributor:

I think there is some duplicate code with agent_action.py

ervteng (Author):

Moved this code to AgentBufferField

# Iterate over all the terminal steps, first gather all the teammate obs
# and then create the AgentExperiences/Trajectories
for terminal_step in terminal_steps.values():
self._gather_group_obs(terminal_step, worker_id)
andrewcoh (Contributor), Feb 24, 2021:

Can we add a comment noting that _gather_group_obs stores the groupmate info from the terminal steps, and then also from the decision steps a few lines below, in the same data structure self.group_status? Reading it, I thought we were only grabbing groupmate data from the terminal steps.

Base automatically changed from master to main on February 25, 2021 at 19:16
@@ -49,6 +50,16 @@ def __init__(
"""
self.experience_buffers: Dict[str, List[AgentExperience]] = defaultdict(list)
self.last_step_result: Dict[str, Tuple[DecisionStep, int]] = {}
# current_group_obs is used to collect the current, most recently seen
Contributor:

Suggested change:
- # current_group_obs is used to collect the current, most recently seen
+ # current_group_obs is used to collect the current (i.e. the most recently seen)

@@ -49,6 +50,16 @@ def __init__(
"""
self.experience_buffers: Dict[str, List[AgentExperience]] = defaultdict(list)
self.last_step_result: Dict[str, Tuple[DecisionStep, int]] = {}
# current_group_obs is used to collect the current, most recently seen
# obs of all the agents in the same group, and assemble the group obs.
Contributor:

Suggested change:
  # obs of all the agents in the same group, and assemble the group obs.
+ # It is a dictionary of group_id to dictionaries of agent_id to observations

lambda: defaultdict(list)
)
# group_status is used to collect the current, most recently seen
# group status of all the agents in the same group, and assemble the group obs.
Contributor:

Suggested change:
  # group status of all the agents in the same group, and assemble the group obs.
+ # It is a dictionary of group_id to dictionaries of agent_id to AgentStatus

I think these could be made clearer if we had GroupId and AgentId types that are just strings. A bit like what we do here.

ervteng (Author):

I like this idea. Went ahead and added GlobalGroupId and GlobalAgentId, which are both strings, in behavior_id_utils.py, and a GroupId in base_env, which is an int. cc: @dongruoping
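
A minimal sketch of the aliases described in the reply above:

    # Per the reply above: GroupId lives in base_env (an int), while the
    # globally unique string ids live in behavior_id_utils.py.
    GroupId = int
    GlobalGroupId = str
    GlobalAgentId = str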

@@ -49,6 +50,16 @@ def __init__(
"""
self.experience_buffers: Dict[str, List[AgentExperience]] = defaultdict(list)
Contributor:

I think a bunch of these new fields (and some of the old fields as well) should be made private.

ervteng (Author):

Good call. Made the ones that aren't used outside the class private.

@@ -112,21 +133,75 @@ def add_experiences(
[_gid], take_action_outputs["action"]
)

def _add_to_group_status(
Contributor:

Looks like this method also modifies self.current_group_obs; the comment and the name of the method should reflect that.

self.group_status[global_group_id][global_agent_id] = group_status
self.current_group_obs[global_group_id][global_agent_id] = step.obs

def _clear_group_obs(self, global_id: str) -> None:
Contributor:

This does not only clear the obs but also the status; the name and comment should reflect that.

ervteng (Author):

Updated

self._delete_in_nested_dict(self.current_group_obs, global_id)
self._delete_in_nested_dict(self.group_status, global_id)

def _delete_in_nested_dict(self, nested_dict: Dict[str, Any], key: str) -> None:
Contributor:

Make this a static or utils method. _safe_delete should also be static.

ervteng (Author):

I personally don't think we should make a method static unless it's used elsewhere. Should we make a new place for utils like this?
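
For reference, a minimal sketch of what a nested-dict delete like this might do (illustrative; assumes a _safe_delete helper that removes a key if present):

    from typing import Any, Dict

    def _delete_in_nested_dict(nested_dict: Dict[str, Any], key: str) -> None:
        # Remove `key` from every inner dict, then drop inner dicts that
        # have become empty.
        for group_id in list(nested_dict.keys()):
            inner = nested_dict[group_id]
            inner.pop(key, None)  # stands in for _safe_delete
            if not inner:
                del nested_dict[group_id]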

action: ActionTuple
done: bool


class AgentExperience(NamedTuple):
Contributor:

I think making AgentExperience inherit from or have an AgentStatus is a good idea.

ervteng (Author):

After some conversation with @chriselion, we decided to keep them separate, at least between the DemonstrationProvider and here.

I'd prefer inheritance, but it seems like a rabbit hole that would warrant deeper thought, so my vote is not to do it here (this PR is already big enough): http://zecong.hu/2019/08/10/inheritance-for-namedtuples/

Ideally I think these should be dataclasses rather than NamedTuples, but we can't use them until we drop Python 3.6 support.

ervteng merged commit 2a26887 into main on Mar 4, 2021
The delete-merged-branch bot deleted the develop-agentprocessor-teammanager branch on March 4, 2021 at 19:25
philippds added a commit to philippds-forks/ml-agents that referenced this pull request Mar 5, 2021
* [MLA-1768] retrain Match3 scene (Unity-Technologies#4943)

* improved settings and move to default_settings

* update models

* Release 13 versions. (Unity-Technologies#4946)

- updated release tag validation script to automate the updating of files with release tags that need to be changed as part of the pre-commit operation.

* Update readme for release_13. (Unity-Technologies#4951)

* Update docs to pass doc validation. (Unity-Technologies#4953)

* update defines, compile out Initialize body on non-desktop (Unity-Technologies#4957)

* Masking Discrete Actions typos (Unity-Technologies#4961) (Unity-Technologies#4964)

Co-authored-by: Philipp Siedler <p.d.siedler@gmail.com>

* Adding references to the Extensions package to help promote it. (Unity-Technologies#4967) (Unity-Technologies#4968)

Co-authored-by: Marwan Mattar <marwan@unity3d.com>
Co-authored-by: Chris Elion <chris.elion@unity3d.com>

* Fix release link validations. (Unity-Technologies#4970)

* Adding the Variable length observation to the readme and to the overview of ML-Agents

* -

* forgot a dot

* InputActuatorComponent to allow the generation of an action space from an InputActionAsset (Unity-Technologies#4881) (Unity-Technologies#4974)

* handle no plugins found (Unity-Technologies#4969) (Unity-Technologies#4973)

* Tick extension version. (Unity-Technologies#4975)

* adding a comic and readding removed feaures docs

* Update 2018 project version to fix burst errors. (Unity-Technologies#4977) (Unity-Technologies#4978)

* Add an example project for the InputSystemActuator. (Unity-Technologies#4976) (Unity-Technologies#4980)

* Update barracuda, swtich Agents in Sorter use Burst. (Unity-Technologies#4979) (Unity-Technologies#4981)

* Set ignore done=False in GAIL (Unity-Technologies#4971)

* MultiAgentGroup Interface (Unity-Technologies#4923)

* add SimpleMultiAgentGroup

* add group reward field to agent and proto

* Fix InputActuatorComponent tests asmdef. (Unity-Technologies#4994) (Unity-Technologies#4995)

* Fix asmdef? (Unity-Technologies#4994) (Unity-Technologies#4996)

* Make TrainingAnalyticsSideChannel internal (Unity-Technologies#4999)

* [MLA-1783] built-in actuator type (Unity-Technologies#4950)

* Add component menues for some sensors and actuators. (Unity-Technologies#5001)

* Add component menues for some sensors and actuators. (Unity-Technologies#5001) (Unity-Technologies#5002)

* Fixing the number of layers in the config of PyramidsRND

* Merge master -> release_13_branch-to-master

* Fix RpcCommunicator merge.

* Move the Critic into the Optimizer (Unity-Technologies#4939)

Co-authored-by: Ervin Teng <ervin@unity3d.com>

* master -> main. (Unity-Technologies#5010)

* Adding links to CV/Robotics/GameSim (Unity-Technologies#5012)

* Make GridSensor a non allocating object after initialization. (Unity-Technologies#5014)

Co-authored-by: Chris Elion <chris.elion@unity3d.com>

* Modified the model_serialization to have correct inputs and outputs

* switching from CamelCase to snake_case

* Fix gpu pytests (Unity-Technologies#5019)

* Move tensors to cpu before converting it to numpy

* Adding a name field to BufferSensorComponent

* Adding a note to the CHANGELOG about var len obs

* Adding a helper method for creating observation placeholder names and removed the _h and _c placeholders

* Adding a custom editor for BufferSensorComponent

* Changing Sorter to use the new Editor and serialization

* adding inheritdoc

* Update cattrs dependencies to support python3.9 (Unity-Technologies#4821)

* Python Dataflow for Group Manager (Unity-Technologies#4926)

* Make buffer type-agnostic

* Edit types of Apped method

* Change comment

* Collaborative walljump

* Make collab env harder

* Add group ID

* Add collab obs to trajectory

* Fix bug; add critic_obs to buffer

* Set group ids for some envs

* Pretty broken

* Less broken PPO

* Update SAC, fix PPO batching

* Fix SAC interrupted condition and typing

* Fix SAC interrupted again

* Remove erroneous file

* Fix multiple obs

* Update curiosity reward provider

* Update GAIL and BC

* Multi-input network

* Some minor tweaks but still broken

* Get next critic observations into value estimate

* Temporarily disable exporting

* Use Vince's ONNX export code

* Cleanup

* Add walljump collab YAML

* Lower max height

* Update prefab

* Update prefab

* Collaborative Hallway

* Set num teammates to 2

* Add config and group ids to HallwayCollab

* Fix bug with hallway collab

* Edits to HallwayCollab

* Update onnx file meta

* Make the env easier

* Remove prints

* Make Collab env harder

* Fix group ID

* Add cc to ghost trainer

* Add comment to ghost trainer

* Revert "Add comment to ghost trainer"

This reverts commit 292b6ce.

* Actually add comment to ghosttrainer

* Scale size of CC network

* Scale value network based on num agents

* Add 3rd symbol to hallway collab

* Make comms one-hot

* Fix S tag

* Additional changes

* Some more fixes

* Self-attention Centralized Critic

* separate entity encoder and RSA

* clean up args in mha

* more cleanups

* fixed tests

* entity embeddings work with no max
Integrate into CC

* remove group id

* very rough sketch for TeamManager interface

* One layer for entity embed

* Use 4 heads

* add defaults to linear encoder, initialize ent encoders

* add team manager id to proto

* team manager for hallway

* add manager to hallway

* send and process team manager id

* remove print

* small cleanup

* default behavior for baseTeamManager

* add back statsrecorder

* update

* Team manager prototype  (Unity-Technologies#4850)

* remove group id

* very rough sketch for TeamManager interface

* add team manager id to proto

* team manager for hallway

* add manager to hallway

* send and process team manager id

* remove print

* small cleanup

Co-authored-by: Chris Elion <chris.elion@unity3d.com>

* Remove statsrecorder

* Fix AgentProcessor for TeamManager
Should work for variable decision frequencies (untested)

* team manager

* New buffer layout, TeamObsUtil, pad dead agents

* Use NaNs to get masks for attention

* Add team reward to buffer

* Try subtract marginalized value

* Add Q function with attention

* Some more progress - still broken

* use singular entity embedding (Unity-Technologies#4873)

* I think it's running

* Actions added but untested

* Fix issue with team_actions

* Add next action and next team obs

* separate forward into q_net and baseline

* might be right

* forcing this to work

* buffer error

* COMAA runs

* add lambda return and target network

* no target net

* remove normalize advantages

* add target network back

* value estimator

* update coma config

* add target net

* no target, increase lambda

* remove prints

* cloud config

* use v return

* use target net

* adding zombie to coma2 brnch

* add callbacks

* cloud run with coma2 of held out zombie test env

* target of baseline is returns_v

* remove target update

* Add team dones

* ntegrate teammate dones

* add value clipping

* try again on cloud

* clipping values and updated zombie

* update configs

* remove value head clipping

* update zombie config

* Add trust region to COMA updates

* Remove Q-net for perf

* Weight decay, regularizaton loss

* Use same network

* add base team manager

* Remove reg loss, still stable

* Black format

* add team reward field to agent and proto

* set team reward

* add maxstep to teammanager and hook to academy

* check agent by agent.enabled

* remove manager from academy when dispose

* move manager

* put team reward in decision steps

* use 0 as default manager id

* fix setTeamReward

Co-authored-by: Vincent-Pierre BERGES <vincentpierre@unity3d.com>

* change method name to GetRegisteredAgents

* address comments

* Revert C# env changes

* Remove a bunch of stuff from envs

* Remove a bunch of extra files

* Remove changes from base-teammanager

* Remove remaining files

* Remove some unneeded changes

* Make buffer typing neater

* AgentProcessor fixes

* Back out trainer changes

* use delegate to avoid agent-manager cyclic reference

* put team reward in decision steps

* fix unregister agents

* add teamreward to decision step

* typo

* unregister on disabled

* remove OnTeamEpisodeBegin

* change name TeamManager to MultiAgentGroup

* more team -> group

* fix tests

* fix tests

* Use attention tests from master

* Revert "Use attention tests from master"

This reverts commit 78e052b.

* Use attention from master

* Renaming fest

* Use NamedTuples instead of attrs classes

* Bug fixes

* remove GroupMaxStep

* add some doc

* Fix mock brain

* np float32 fixes

* more renaming

* Test for team obs in agentprocessor

* Test for group and add team reward

* doc improve

Co-authored-by: Ervin T. <ervin@unity3d.com>

* store registered agents in set

* remove unused step counts

* Global group ids

* Fix Trajectory test

* Remove duplicated files

* Add team methods to AgentAction

* Buffer fixes

(cherry picked from commit 2c03d2b)

* Add test for GroupObs

* Change AgentAction back to 0 pad and add tests

* Addressed some comments

* Address some comments

* Add more comments

* Rename internal function

* Move padding method to AgentBufferField

* Fix slicing typing and string printing in AgentBufferField

* Fix to-flat and add tests

* Rename GroupmateStatus to AgentStatus

* Update comments

* Added GroupId, GlobalGroupId, GlobalAgentId types

* Update comment

* Make some agent processor properties internal

* Rename add_group_status

* Rename store_group_status, fix test

* Rename clear_group_obs

Co-authored-by: Andrew Cohen <andrew.cohen@unity3d.com>
Co-authored-by: Ruo-Ping Dong <ruoping.dong@unity3d.com>
Co-authored-by: Chris Elion <chris.elion@unity3d.com>
Co-authored-by: andrewcoh <54679309+andrewcoh@users.noreply.github.com>
Co-authored-by: Vincent-Pierre BERGES <vincentpierre@unity3d.com>

* Removing some scenes (Unity-Technologies#4997)

* Removing some scenes, All the Static and all the non variable speed environments. Also removed Bouncer, PushBlock, WallJump and reacher. Removed a bunch of visual environements as well. Removed 3DBallHard and FoodCollector (kept Visual and Grid FoodCollector)

* readding 3DBallHard

* readding pushblock and walljump

* Removing tennis

* removing mentions of removed environments

* removing unused images

* Renaming Crawler demos

* renaming some demo files

* removing and modifying some config files

* new examples image?

* removing Bouncer from build list

* replacing the Bouncer environment with Match3 for llapi tests

* Typo in yamato test

* Fix issue with queuing input events that stomp on others. (Unity-Technologies#5034)

Co-authored-by: Chris Elion <chris.elion@unity3d.com>
Co-authored-by: Chris Goy <christopherg@unity3d.com>
Co-authored-by: Marwan Mattar <marwan@unity3d.com>
Co-authored-by: vincentpierre <vincentpierre@unity3d.com>
Co-authored-by: andrewcoh <54679309+andrewcoh@users.noreply.github.com>
Co-authored-by: Ruo-Ping Dong <ruoping.dong@unity3d.com>
Co-authored-by: Ervin Teng <ervin@unity3d.com>
Co-authored-by: Andrew Cohen <andrew.cohen@unity3d.com>
The github-actions bot locked this conversation as resolved and limited it to collaborators on Mar 4, 2022