E03: Shopping Cart

SciRoc 2021 will be a physical event taking place in Bologna, Italy. Teams are invited to join us in Bologna in person to participate in the competition. Mitigation plans are in place for teams that cannot travel due to COVID-related travel restrictions.

This episode will be part of the 2021 competition. For a detailed and up to date description please refer to the official rulebook. You are invited to comment directly on this document with any questions and/or feedback. Our dedicated team of editors will review and respond to all feedback.

 

General description.

Episode 3 aims at evaluating the capability of a robot to interact with one challenging device found in environments designed for humans: the shopping cart.

Episode 3 is a joint effort between European projects SciRoc and Eurobench. Episode 3 corresponds in fact to the BEAST (Benchmark-Enabling Active Shopping Trolley) Eurobench benchmark for mobile robots.
The physical elements of the Episode are its testbed. The BEAST testbed comprises two main elements:

  • the arena, i.e. the passive structured environment where the Episode takes place;
  • the active shopping cart, i.e. a robotised shopping cart capable of behaviours which include, but are not limited to, those of a standard cart.

The aim of the BEAST benchmark is to evaluate the capability of a mobile robot to correctly manoeuvre a wheeled device (shopping cart or walker). In this context, “correctly manoeuvre” involves performance metrics concerning geometry (such as staying close to a predefined trajectory or maximising distance from obstacles), time, and force (e.g., smooth application of force to the handle over time instead of abrupt bursts). The BEAST benchmark makes use of the capability of the active element (cart or walker) of the testbed to simulate physical disturbances that occur in real-world scenarios, such as unevenly distributed wheel friction or small obstacles on the ground (such as a low step or a pebble).

Platforms Allowed.
The BEAST benchmark is suitable for any autonomous mobile agent that is physically capable of operating the active trolley or walker in the arena. As such, it can be executed by any robot equipped with one or more end-effectors suitable for interaction with the active cart’s handle, including (but not limited to) humanoid robots. Execution of the benchmark by humans, while not explicitly considered, is nonetheless entirely possible, making it feasible to collect “human baseline” datasets to be compared with those generated by robots.

Setup.
The setup is composed of an arena and an active (i.e., robotised) cart, plus a few additional pieces of equipment for networking and processing. Setup is straightforward: even if performed from scratch (with the arena broken down into its components), it should not take more than a workday for an experienced operator.

Since the shopping cart is not designed to be disassembled, the only piece of the testbed that requires physical assembly is the arena. The walls of the arena are composed of 1 m tall panels.

The arena includes two apertures. One of them goes down to the ground and is used for entry and exit; the other is a cutout in the surrounding wall. The orange pillars in the rendering are heavy PVC tubes for construction, while the white panels composing the walls of the arena are made of low-density PVC, similar to the commercial product called Forex. Structural support is provided to the arena by a rigid frame built from modular aluminium profiles with a 40 x 40 mm square cross section. Profile elements include an 8 mm wide groove, used both to insert mechanical joints (e.g., those that connect the profiles, described below) and to guide and contain the edge of the wall elements. In order to accommodate possible unevenness of the floor over the (quite extensive) perimeter of the arena, the whole structure is supported by adjustable feet. Below are images of the arena and of the active shopping cart.


Episode 3 Arena Set up and Shopping Cart


 

Procedure.
Episode 3 Trajectory

While executing the benchmark, the shopping cart (pushed by the robot) has to follow the trajectory shown in the image above. This trajectory is composed of four parts:

  1. A first section where the cart should proceed along a straight line;
  2. A second section where the cart is required to invert its direction of motion by going around a pillar (no specifications are given on trajectory shape here; however, if planning does not correctly account for the differential-drive kinematics of the trolley’s front wheels, the execution of this part of the benchmark can easily require backtracking);
  3. A third section that requires the cart to pass between the same pair of pillars where the first section started, but in the opposite direction (no constraints are given on trajectory shape except that the cart must pass between the pillars);
  4. A fourth section where the cart needs to be “parked” in the corner of the test bed.

Episode 3 Checkpoints


More specifically, the phases of the protocol of the Episode are defined with respect to the numbered Checkpoints (line segments) in the image above. Such phases are:

  1. The trolley is positioned in the start pose, where the central point between its front wheels (i.e., the actuated ones) is in the centre of Checkpoint 1. This phase is human-executed and not part of the benchmark. The robot is positioned inside the area of the test bed, between the trolley and the entrance to the test bed, in front of the centre point of the trolley’s handle, at a distance sufficient that, in order to reach the handle, the robot has to move its whole body with respect to the ground. The referees are in charge of defining this distance.
  2. The benchmark is started.
  3. The robot has to grasp the trolley’s handle. The robot can optionally start with its end effectors already positioned on the handle. In that case this step is human-executed before the start of the benchmark (step 2).
  4. The robot has to navigate, while pushing the trolley, through all checkpoints in the correct order (from 1 to 5).
  5. The benchmark reaches its completion: this happens in three cases (see section “Scoring” for details): (i) the benchmark gets manually interrupted by the referees (e.g., because the robot executed a Disqualifying Behaviour); (ii) the robot collected Achievement A6; (iii) the benchmark’s timeout has been reached.
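The three completion cases above can be condensed into a simple predicate. The following sketch uses hypothetical function and parameter names, and a placeholder timeout value (the official rulebook defines the actual limit):

```python
def benchmark_complete(referee_interrupt, achievements, elapsed_s, timeout_s=600.0):
    """Decide whether the benchmark execution is over.

    referee_interrupt -- manual stop by the referees (e.g. after a
                         Disqualifying Behaviour)
    achievements      -- set of collected achievement labels, e.g. {"A1", "A2"}
    elapsed_s         -- time since the benchmark was started, in seconds
    timeout_s         -- benchmark timeout (placeholder value; the official
                         rulebook defines the real limit)
    """
    return (referee_interrupt            # case (i): manual interruption
            or "A6" in achievements      # case (ii): final checkpoint reached
            or elapsed_s >= timeout_s)   # case (iii): timeout
```

The three conditions are independent: whichever occurs first ends the run.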

Simulated Episode Competition.
The Simulated version of this episode can be used by teams who are unable to travel to the event due to COVID-related restrictions. All other teams should participate onsite at the physical event in Bologna. The Episode, in fact, will be organised in such a way that both onsite and simulated executions of the Episode are evaluated seamlessly. The simulation environment is built using the Gazebo simulator and ROS framework.

Participants will access the simulated robot (both to read sensor observations and to command the robot actuators) by coding a ROS node wrapper that connects their approach to the Gazebo plugin of the robot. Furthermore, the simulated episode suite will be released as a Docker image that can be easily executed. The Docker image will deliver all the features described in the Setup section, and it will configure the sensor data flow on the same channels as the real-robot onsite scenario (e.g., ROS topics, data format). Such a setup allows teams to switch transparently and interchangeably between the real and simulated execution of the episode, which is necessary to preserve both the robotic task as described in the Procedure section and the benchmarking system described in the Scoring section.

The figure below shows the simulated environment running in a Docker container. The blue shaded area is the laser sensor visualisation, while the bottom-right corner shows the robot camera stream. In this scenario, the simulated PAL robot Reem-C is operated by relying upon the dedicated Gazebo plugin released by the robot manufacturer.


Episode 3 Simulated Environment


Scoring.

The scoring mechanism for this Episode complies with the framework used by the European Robotics League for Task Benchmarks. As such, it is based on the three sets of Achievements, Penalising Behaviours and Disqualifying Behaviours described in the following.

Achievements.
A1 The robot has autonomously grasped the trolley’s handle, without any human intervention.
A2 The trolley has reached Checkpoint 1. The trajectory of the trolley during this part of the benchmark should be as close to parallel to the side wall as possible. Assessment of deviations does not affect the Achievements, but will be used to break ties.
A3 The trolley has reached Checkpoint 2.
A4 The trolley has reached Checkpoint 3.
A5 The trolley has reached Checkpoint 4.
A6 The trolley has reached Checkpoint 5.

Achievements can be collected only in the specified order.
Achievements A2…A6 require that the trolley, pushed by the robot, reaches specific checkpoints. The definition of “Reaching a checkpoint” is the following:

  • The trolley reaches a checkpoint when the projection on the floor of the central point between its two front wheels (i.e., the actuated wheels) crosses the line segment corresponding to the checkpoint, without having crossed any other checkpoints before.

Let CP be the checkpoint that an Achievement is associated with. If the trolley reaches any checkpoint different from CP before it reaches CP, the benchmark gets interrupted by the referees and no further Achievements can be collected.
When the execution of the benchmark gets interrupted in this way, the duration of its execution is conventionally set to the full allowed duration up to the timeout (this is done to avoid unfair advantage to robots that provoke an interruption, since execution time is used to break ties).
For Achievements A2…A6, no constraints are imposed on trajectory shape beyond those imposed by the definition of “reaching a checkpoint”.
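Geometrically, the definition of “reaching a checkpoint” reduces to a proper 2-D segment-crossing test between the trolley reference point's displacement and the checkpoint line segment. A minimal sketch follows; the function names are hypothetical and points are (x, y) pairs in metres:

```python
def _orient(a, b, c):
    # Sign of the cross product (b - a) x (c - a):
    # > 0 left turn, < 0 right turn, 0 collinear.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def crossed_checkpoint(p_prev, p_now, cp_a, cp_b):
    """True if the reference point (the floor projection of the midpoint
    between the trolley's front wheels) moved from p_prev to p_now and, in
    doing so, properly crossed the checkpoint segment cp_a--cp_b."""
    return (_orient(cp_a, cp_b, p_prev) * _orient(cp_a, cp_b, p_now) < 0
            and _orient(p_prev, p_now, cp_a) * _orient(p_prev, p_now, cp_b) < 0)
```

Checkpoint order can then be enforced by comparing the index of the crossed segment against the next expected one; crossing any other checkpoint triggers the interruption described above.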

Penalising Behaviours.
PB1 – The robot hits any piece of the benchmark setup (e.g., a part of the door assembly) without inflicting damage. An exception is made for the elements of the shopping cart, which can be hit without penalty (provided that they do not get damaged).
PB2 – The robot falls down and requires its team to perform a manual get-up.

For the definition of the Penalising Behaviours (and, later, of the Disqualifying Behaviours) the following definitions of “damage” and “hit” are used:

Damage is any geometric change that does not revert spontaneously within 10 seconds, even if the change would be reversible through suitable intervention (e.g., a wall panel pushed out of its normal position).
A hit is a physical contact event where the robot applies forces sufficient to cause damage (even if no damage actually occurs); referees are in charge of deciding which physical contacts must be classified as hits.

An additional Penalising Behaviour is recorded against the team every time the corresponding event occurs, even if the team has already been penalised for the very same behaviour.

Disqualifying Behaviours.
DB1 – The robot hits a human (e.g., a referee).
DB2 – The robot damages any piece of the benchmark setup.
DB3 – The robot exits the test bed area[1].
DB4 – The trolley moves for a duration of 5 s or more while the robot is continuously in contact with any part of it different from the handle, OR the overall time that the trolley moves while the robot is in contact with any part of it different from the handle is over 30 s.

[1] The robot is considered as outside the testbed area when none of its elements has a projection on the floor that lies inside the 3 m by 6 m rectangular area of the floor which contains the testbed.
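DB4 combines a continuous-contact limit (5 s) with a cumulative one (30 s). Assuming a hypothetical log of non-overlapping time intervals during which the trolley moves while the robot touches a non-handle part of it, the check could be sketched as:

```python
def violates_db4(contact_intervals, single_limit_s=5.0, total_limit_s=30.0):
    """contact_intervals: non-overlapping (start_s, end_s) intervals during
    which the trolley was moving while the robot was in contact with any
    part of it other than the handle."""
    durations = [end - start for start, end in contact_intervals]
    # Continuous contact of 5 s or more, OR more than 30 s overall.
    return (any(d >= single_limit_s for d in durations)
            or sum(durations) > total_limit_s)
```

Note that either condition alone is disqualifying: many short contacts can add up past the cumulative limit even if none reaches 5 s.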

Ties.
It may happen that two robots collect the exact same number of Achievements and the exact same number of Penalising Behaviours, without incurring any Disqualifying Behaviours. In this case, to avoid ties, the following criterion is applied:

Tie-breaking criterion: the performance of the robot that finished the benchmark in the shorter time is considered better.
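Putting the Scoring section together, non-disqualified teams are effectively ranked by Achievements (more is better), then Penalising Behaviours (fewer is better), then execution time. A sketch with hypothetical record fields:

```python
def rank_teams(results):
    """results: list of dicts with (hypothetical) keys 'team', 'achievements',
    'penalties' and 'time_s'; disqualified teams are excluded beforehand.
    More achievements rank higher; fewer Penalising Behaviours break that
    tie; the shorter execution time breaks any remaining tie."""
    return sorted(results,
                  key=lambda r: (-r["achievements"], r["penalties"], r["time_s"]))
```

Because Python's sort is stable and the key is a tuple, the time criterion only comes into play when both earlier criteria are exactly equal, matching the tie-breaking rule above.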
