PyFlyt/QuadX-Pole-Waypoints-v3

Demo animation: https://raw.githubusercontent.com/jjshoots/PyFlyt/master/readme_assets/quadx_pole_waypoint.gif

Task Description

The goal of this environment is to fly a quadrotor aircraft to a set of waypoints as quickly as possible while balancing a 1-meter-long pole.

Usage

import gymnasium
import PyFlyt.gym_envs

env = gymnasium.make("PyFlyt/QuadX-Pole-Waypoints-v3", render_mode="human")

term, trunc = False, False
obs, _ = env.reset()
while not (term or trunc):
    obs, rew, term, trunc, _ = env.step(env.action_space.sample())

Flattening the Environment

This environment uses the Dict and Sequence spaces from Gymnasium, whose sizes are not constant. This allows complete observability without observation padding and keeps observations human-readable. However, it also makes the environment incompatible with most popular reinforcement learning libraries, such as Stable Baselines3, unless custom wrappers are used. To use this environment with those libraries, flatten it with the FlattenWaypointEnv wrapper, where the context_length argument specifies how many upcoming targets are included in the observation.

import gymnasium
import PyFlyt.gym_envs
from PyFlyt.gym_envs import FlattenWaypointEnv

env = gymnasium.make("PyFlyt/QuadX-Pole-Waypoints-v3", render_mode="human")
env = FlattenWaypointEnv(env, context_length=2)

term, trunc = False, False
obs, _ = env.reset()
while not (term or trunc):
    obs, rew, term, trunc, _ = env.step(env.action_space.sample())
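
Once flattened, the environment exposes a fixed-size observation, so it can be plugged into a third-party training library in the usual way. The sketch below is not part of the PyFlyt documentation; it assumes Stable Baselines3 is installed and uses its PPO implementation with an illustrative timestep budget.

import gymnasium
import PyFlyt.gym_envs
from PyFlyt.gym_envs import FlattenWaypointEnv
from stable_baselines3 import PPO  # assumes stable-baselines3 is installed

# Train without rendering; the flattened observation is a fixed-size Box.
env = gymnasium.make("PyFlyt/QuadX-Pole-Waypoints-v3")
env = FlattenWaypointEnv(env, context_length=2)

model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)  # illustrative budget, tune as needed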

Environment Options

class PyFlyt.gym_envs.quadx_envs.quadx_pole_waypoints_env.QuadXPoleWaypointsEnv(sparse_reward: bool = False, num_targets: int = 4, goal_reach_distance: float = 0.2, flight_mode: int = -1, flight_dome_size: float = 10.0, max_duration_seconds: float = 20.0, angle_representation: Literal['euler', 'quaternion'] = 'quaternion', agent_hz: int = 40, render_mode: None | Literal['human', 'rgb_array'] = None, render_resolution: tuple[int, int] = (480, 480))

QuadX Pole Waypoints Environment.

Actions are direct motor PWM commands because any underlying controller introduces too much control latency. The goal is to reach a set of [x, y, z] waypoints in space without dropping the pole.

Parameters:
  • sparse_reward (bool) – whether to use sparse rewards or not.

  • num_targets (int) – number of waypoints in the environment.

  • goal_reach_distance (float) – distance from a waypoint within which it counts as reached.

  • flight_mode (int) – the flight mode of the UAV.

  • flight_dome_size (float) – size of the allowable flying area.

  • max_duration_seconds (float) – maximum simulation time of the environment.

  • angle_representation (Literal["euler", "quaternion"]) – can be “euler” or “quaternion”.

  • agent_hz (int) – loop rate of the agent-to-environment interaction.

  • render_mode (None | Literal["human", "rgb_array"]) – render mode of the environment.

  • render_resolution (tuple[int, int]) – resolution of the rendered frames.
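
All of the arguments above can be supplied as keyword arguments to gymnasium.make, which forwards them to the environment constructor. A minimal sketch follows; the specific values are illustrative examples, not recommended settings.

import gymnasium
import PyFlyt.gym_envs

# Keyword arguments are forwarded to QuadXPoleWaypointsEnv by gymnasium.make.
# The values below are examples, not defaults or recommendations.
env = gymnasium.make(
    "PyFlyt/QuadX-Pole-Waypoints-v3",
    sparse_reward=False,
    num_targets=6,
    goal_reach_distance=0.3,
    flight_dome_size=15.0,
    max_duration_seconds=30.0,
    angle_representation="euler",
    agent_hz=30,
    render_mode=None,
)

obs, _ = env.reset()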