gym_ignition.runtimes

gym_ignition.runtimes.gazebo_runtime

class gym_ignition.runtimes.gazebo_runtime.GazeboRuntime(task_cls, agent_rate, physics_rate, real_time_factor, physics_engine=0, world=None, **kwargs)

Bases: gym_ignition.base.runtime.Runtime

Implementation of Runtime for the Ignition Gazebo simulator.

Parameters
  • task_cls (type) – The class of the handled task.

  • agent_rate (float) – The rate at which the environment is called.

  • physics_rate (float) – The rate of the physics engine.

  • real_time_factor (float) – The desired RTF of the simulation.

  • physics_engine – (optional) The physics engine to use.

  • world (Optional[str]) – (optional) The path to an SDF world file. The world should not contain any physics plugin.

Note

Physics randomization is still experimental and could change in the future. The physics engine is loaded only once, when the simulator starts. To change the physics, a new simulator must be created.
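As a sketch of how these parameters fit together, the following constructs the runtime directly; MyTask is a hypothetical Task subclass and the values are illustrative, not defaults of this API.

    from gym_ignition.runtimes.gazebo_runtime import GazeboRuntime

    # MyTask is a placeholder for a user-defined Task subclass.
    env = GazeboRuntime(
        task_cls=MyTask,       # the class (not an instance) of the handled task
        agent_rate=100.0,      # the agent is stepped at 100 Hz
        physics_rate=1000.0,   # the physics engine runs at 1 kHz
        real_time_factor=1.0,  # target real-time execution
        world=None,            # use the default empty world (no physics plugin)
    )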

close()

Override close in your subclass to perform any necessary cleanup.

Environments will automatically close() themselves when garbage collected or when the program exits.

Return type

None

property gazebo: scenario.bindings.gazebo.GazeboSimulator
Return type

GazeboSimulator
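A minimal usage sketch, assuming env is a GazeboRuntime instance; gui() comes from the scenario GazeboSimulator bindings and is shown only as an illustration.

    # Access the underlying simulator, e.g. to open the Ignition Gazebo GUI.
    simulator = env.gazebo
    simulator.gui()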

metadata = {'render.modes': ['human']}

render(mode='human', **kwargs)

Renders the environment.

The set of supported modes varies per environment. (And some environments do not support rendering at all.) By convention, if mode is:

  • human: render to the current display or terminal and return nothing. Usually for human consumption.

  • rgb_array: Return a numpy.ndarray with shape (x, y, 3), representing RGB values for an x-by-y pixel image, suitable for turning into a video.

  • ansi: Return a string (str) or StringIO.StringIO containing a terminal-style text representation. The text can include newlines and ANSI escape sequences (e.g. for colors).

Note

Make sure that your class’s metadata ‘render.modes’ key includes the list of supported modes. It’s recommended to call super() in implementations to use the functionality of this method.

Parameters

mode (str) – the mode to render with

Example:

    class MyEnv(Env):
        metadata = {'render.modes': ['human', 'rgb_array']}

        def render(self, mode='human'):
            if mode == 'rgb_array':
                return np.array(...)  # return RGB frame suitable for video
            elif mode == 'human':
                ...  # pop up a window and render
            else:
                super(MyEnv, self).render(mode=mode)  # just raise an exception

Return type

None

reset()

Resets the environment to an initial state and returns an initial observation.

Note that this function should not reset the environment’s random number generator(s); random variables in the environment’s state should be sampled independently between multiple calls to reset(). In other words, each call of reset() should yield an environment suitable for a new episode, independent of previous episodes.

Returns

the initial observation.

Return type

observation (object)

seed(seed=None)

Sets the seed for this env’s random number generator(s).

Note

Some environments use multiple pseudorandom number generators. We want to capture all such seeds used in order to ensure that there aren’t accidental correlations between multiple generators.

Returns

Returns the list of seeds used in this env’s random number generators. The first value in the list should be the “main” seed, or the value which a reproducer should pass to ‘seed’. Often, the main seed equals the provided ‘seed’, but this won’t be true if seed=None, for example.

Return type

list of int
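A short usage sketch, assuming env is a GazeboRuntime instance; the seed value is arbitrary.

    # Seed the environment's random number generators before the first reset.
    seeds = env.seed(42)
    print(seeds)  # the first entry is the "main" seed (here, 42)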

step(action)

Run one timestep of the environment’s dynamics. When the end of the episode is reached, you are responsible for calling reset() to reset this environment’s state.

Accepts an action and returns a tuple (observation, reward, done, info).

Parameters

action (object) – an action provided by the agent

Returns

  • observation (object): the agent’s observation of the current environment

  • reward (float): the amount of reward returned after the previous action

  • done (bool): whether the episode has ended, in which case further step() calls will return undefined results

  • info (dict): auxiliary diagnostic information (helpful for debugging, and sometimes learning)

Return type

tuple – (observation, reward, done, info)
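Putting reset() and step() together, a typical episode loop could look like the following sketch; it assumes env was constructed as in the earlier example and that sampling random actions from the action space is acceptable for the task.

    # Run a handful of episodes with random actions.
    for episode in range(5):
        observation = env.reset()
        done = False
        while not done:
            action = env.action_space.sample()  # replace with a trained policy
            observation, reward, done, info = env.step(action)
    env.close()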

timestamp()

Return the timestamp associated with the execution of the environment.

In real-time environments, the timestamp is the time read from the host system. In simulated environments, the timestamp is the simulated time, which might not match real time when the real-time factor differs from 1.

Return type

float

Returns

The current environment timestamp.
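For example, the timestamp can be read after stepping to track simulated time (a sketch, assuming env is a GazeboRuntime instance):

    env.reset()
    env.step(env.action_space.sample())
    t_sim = env.timestamp()  # simulated seconds, not wall-clock time
    print(f"Simulated time: {t_sim:.3f} s")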

property world: scenario.bindings.gazebo.World
Return type

World
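A small sketch of inspecting the simulated world through this property; model_names() comes from the scenario World bindings and is shown only as an illustration.

    # List the models currently inserted in the Gazebo world.
    world = env.world
    print(world.model_names())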

gym_ignition.runtimes.realtime_runtime

class gym_ignition.runtimes.realtime_runtime.RealTimeRuntime(task_cls, robot_cls, agent_rate, **kwargs)

Bases: gym_ignition.base.runtime.Runtime

Implementation of Runtime for real-time execution.

Warning

This class is not yet complete.

close()

Override close in your subclass to perform any necessary cleanup.

Environments will automatically close() themselves when garbage collected or when the program exits.

Return type

None

render(mode='human', **kwargs)

Renders the environment.

The set of supported modes varies per environment. (And some environments do not support rendering at all.) By convention, if mode is:

  • human: render to the current display or terminal and return nothing. Usually for human consumption.

  • rgb_array: Return a numpy.ndarray with shape (x, y, 3), representing RGB values for an x-by-y pixel image, suitable for turning into a video.

  • ansi: Return a string (str) or StringIO.StringIO containing a terminal-style text representation. The text can include newlines and ANSI escape sequences (e.g. for colors).

Note

Make sure that your class’s metadata ‘render.modes’ key includes the list of supported modes. It’s recommended to call super() in implementations to use the functionality of this method.

Parameters

mode (str) – the mode to render with

Example:

    class MyEnv(Env):
        metadata = {'render.modes': ['human', 'rgb_array']}

        def render(self, mode='human'):
            if mode == 'rgb_array':
                return np.array(...)  # return RGB frame suitable for video
            elif mode == 'human':
                ...  # pop up a window and render
            else:
                super(MyEnv, self).render(mode=mode)  # just raise an exception

Return type

None

reset()

Resets the environment to an initial state and returns an initial observation.

Note that this function should not reset the environment’s random number generator(s); random variables in the environment’s state should be sampled independently between multiple calls to reset(). In other words, each call of reset() should yield an environment suitable for a new episode, independent of previous episodes.

Returns

the initial observation.

Return type

observation (object)

step(action)

Run one timestep of the environment’s dynamics. When the end of the episode is reached, you are responsible for calling reset() to reset this environment’s state.

Accepts an action and returns a tuple (observation, reward, done, info).

Parameters

action (object) – an action provided by the agent

Returns

  • observation (object): the agent’s observation of the current environment

  • reward (float): the amount of reward returned after the previous action

  • done (bool): whether the episode has ended, in which case further step() calls will return undefined results

  • info (dict): auxiliary diagnostic information (helpful for debugging, and sometimes learning)

Return type

tuple – (observation, reward, done, info)

timestamp()

Return the timestamp associated with the execution of the environment.

In real-time environments, the timestamp is the time read from the host system. In simulated environments, the timestamp is the simulated time, which might not match real time when the real-time factor differs from 1.

Return type

float

Returns

The current environment timestamp.