When the challenge is first released each year, our team follows an effective, well-thought-out build process that helps us stay organized and succeed throughout the season. This process can be broken down into four main stages: analyzing the game, strategy, build requirements, and prototyping.

Analyzing the Game:

Analyzing the game serves as the foundation for the following three stages. Without it, we would not know what our robot needs to accomplish, and we would therefore be uncertain about what kind of robot to build.

Each year, our team starts by looking at the different tasks on the field and figuring out what needs to be done. For example, this year, we separated the field into four different tasks:

  • Toggling flags
  • Flipping caps on the ground
  • Putting caps on the poles
  • Parking

We then analyze each task and think through the different ways to complete it.


Strategy:

After the whole field and its tasks are analyzed, we start thinking about our strategy for the season. A substantial part of this step is analyzing point tradeoffs: what can we sacrifice, and what does our robot need to do in order to achieve a certain number of points?

Generally, our team uses the VRC Hub mobile app to recreate different game scenarios. For example, we experiment with the number of flags a robot would need to hit if it could not flip caps, or the point difference that comes with parking on the center platform. Getting familiar with these potential scenarios allows our team to further discuss which tasks our robot should be able to accomplish.
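These tradeoffs can also be tabulated directly. The sketch below uses point values we believe match the Turning Point manual (flag toggled = 1, cap flipped = 1, cap on post = 2, alliance platform = 3, center platform = 6); they should be verified against the official game manual before being relied on.

```python
# Point-tradeoff sketch; point values are our assumption from the
# Turning Point manual and should be checked against the official rules.
POINTS = {
    "flag_toggled": 1,    # each flag toggled to our color
    "cap_flipped": 1,     # cap flipped to our color on the ground
    "cap_on_post": 2,     # cap placed on a post
    "alliance_park": 3,   # robot parked on the alliance platform
    "center_park": 6,     # robot parked on the center platform
}

def score(counts):
    """Total points for a dict of {task: count}."""
    return sum(POINTS[task] * n for task, n in counts.items())

# Example tradeoff: how many extra flags make up for parking on the
# alliance platform instead of the center platform?
deficit = POINTS["center_park"] - POINTS["alliance_park"]
flags_needed = deficit // POINTS["flag_toggled"]
```

Under these assumed values, skipping the center platform costs three points, which could be made up by toggling three extra flags.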

Our team strategy is crucial to our robot's performance and plays the role of our game plan. We also update our strategy as we make new discoveries in the following build requirements and prototyping steps.

Build Requirements:

Once we have established an effective team strategy, we move on to determine our robot’s build requirements.

Drawing on the previous two steps, we review our game strategy and our analysis of the game field. We then make a list of requirements to follow in order to ensure that the robot we build is effective, efficient, and able to complete the desired tasks without failure. An example of a build requirement is that the robot must be able to intake a ball while simultaneously shooting another. Other requirements might be that the robot needs to climb platforms or hold the shooter at a specific height. These build requirements serve as both guidelines and restrictions later on in the prototyping stage, helping prevent future errors and ultimately saving us valuable time.


Prototyping:

Finally, after working through the three prior steps, our team starts prototyping the different ideas we have in mind for specific parts of the robot. After building each prototype, we test it many times and make modifications along the way to improve it. During this stage, we constantly refer back to our build requirements and strategy, asking questions such as, “Is this prototype feasible?” and “How can we make it better?” This prototyping step is actually the beginning of a build cycle, in which we discover what doesn’t work, review our build requirements, modify or recreate a prototype, and repeat the cycle to optimize the prototype and its functionality.


The final part of the build cycle is combining all the different prototypes into one moving, functional robot. We then test how the robot works as a whole at completing different tasks on the field. As in the prototyping stage, we evaluate the robot to see how it fits with our strategy, and make changes to improve it. During this stage, we also streamline the robot, making it as compact and as light as possible. Throughout the season, we repeat this cycle many times to continually refine our robot as new problems or ideas arise.

In previous years, on tournament days, we often realized that there were several gaps in our scouting process. A flawed scouting process can lead to miscommunication between the scouts and the drive team, which in turn can lead to losses for our team. We have now created a process, dubbed ‘The 3 C’s of Scouting’, to streamline our strategy work at competitions.

Collection of Data

Watching Matches and Creating Scouting Sheets

The collection of data is what fuels the entire scouting process. Without any data to analyze, the rest of the system would not be able to function. Collection is split up into two different parts:

  • Scouting Potential Alliance Picks
  • Scouting Teams We are Playing Against

At the beginning of every tournament, we look at the schedule and divide the matches we are playing in among the scouts. Generally, each scout covers 2-3 matches, which means they are scouting 6-9 teams. We scout our two opponents as well as our alliance partner. The opponents are scouted so that we know what strategies to create or use in order to beat them; our alliance partner is scouted so that we know their abilities and how compatible they are with us.

Scouting Potential Alliance Picks

For each team, there are two designated scouts who sit in the stands all day, simply looking for potential alliance picks. If a scout sees a potential alliance pick, the team is immediately added to the master list of all teams to scout. This list consists of the teams we are playing against, potential alliance picks, and even teams we think we will face in the elimination rounds.

Scouting Sheets

The scouting sheets contain useful information on teams, as well as the strategies created by the scouts. Information on the scouting sheets includes:

  • The type of robot
  • The robot’s auto
  • Where their robot is able to flip caps and shoot flags from (how many and how quickly)

It also contains the scouts’ strategies, such as:

  • Which auto we should run to ensure we win the autonomous period
  • Where to place the caps to make them the most difficult to flip
  • Which robot we should focus on blocking
  • Whether we should be seeding

Comprehension of Data

Creating a Strategy and Analysis of Data

After all scouting sheets are filled out and the data is collected, the scouts create a strategy to use against each opposing alliance. Often this is a predetermined strategy, as mentioned above, designed to counter a certain type of robot. Occasionally, there will be a team whose robot design is out of the ordinary, or whose auto is exceptional. For those robots, strategies are created on the spot, in collaboration with the designated scouts who have been watching every single match and have most likely seen the robot play. These designated scouts should be experts on every robot they scout.

Skill Ranking of Teams

While scouts are watching potential alliance picks, each team is assigned a numerical value in several different categories, including:

  • Overall ranking
  • Drive speed (1-10)
  • Cap flipping speed (1-3)
  • Cap descoring capability* (1-3)
  • Parking capability* (1-3)

*Speed of the ability compared to our robot: 1 means slower than us, 3 means faster than us.

Using these rankings, we can easily tell which teams should be chosen in alliance selection and create an order of priority when picking alliances.
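Turning the category scores into a pick order can be as simple as a weighted sum. The sketch below is only an illustration: the team numbers, scores, and category weights are hypothetical, and the scouts would tune the weights to match our strategy.

```python
# Hypothetical scouting data: (team, overall, drive speed,
# cap flipping, cap descoring, parking), scored as described above.
teams = [
    ("1234A", 8, 9, 3, 2, 3),
    ("5678B", 7, 6, 2, 3, 2),
    ("9012C", 9, 7, 3, 1, 1),
]

# Hypothetical importance of each category (overall weighted highest).
WEIGHTS = (2.0, 1.0, 1.5, 1.0, 1.0)

def pick_score(entry):
    """Weighted sum of a team's category scores."""
    return sum(w * v for w, v in zip(WEIGHTS, entry[1:]))

# Alliance-pick priority: highest weighted score first.
pick_order = sorted(teams, key=pick_score, reverse=True)
```

The resulting list gives the order of priority for alliance selection; ties or near-ties would still be settled by the scouts' judgment.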

Notes Taken Throughout The Day

While watching potential alliance picks, not only is a scouting sheet created, but an in-depth strategy is also developed, and the team’s autonomous route is filmed for later analysis. The strategy consists of the best ways to combat their driving style and beat them in a match, assuming we have no alliance partner.

Communication of Data

Relaying Information to Drive team

After all strategies are created and the data is collected, the information is relayed to the drive team. One designated member of the drive team communicates between the drive team and the scouts. This person, called the tactician, also relays information to the driver during matches, based on strategies created by the scouts. It is important that the scouts relay all the information on the scouting sheets, as well as any strategies they have created. They should be able to warn the drive team whether it will be a close match and whether to focus on seeding, if at all.

What The Drive Team Should Be Told

  • Which autonomous path to take
  • Type of robot
  • Strategy to beat the robot
    • Where to place caps
    • Should we be flipping or shooting?
    • Should we be descoring caps?
  • Will it be a close game?
    • Should we be focusing on seeding?
  • Shooting speed
  • Cap flipping speed
  • Any information that is not already included in our preset robot types


The purpose of this document is to outline the process we follow and minimize the likelihood of software failure at competitions. This was deemed necessary after the In The Zone Ontario Provincial Championships when multiple bugs, all of which could have easily been discovered in a code review, directly affected our qualification ranking.

Part 1 – Code Libraries

The code libraries are the parts of the code that are robot- and season-independent. These are a relatively small group of utilities and algorithms with a significant usage footprint throughout the code. As a result, any issue in this code would likely affect many, if not all, functions of the robot, making this the most important section to get right.

The following types of problems should be checked for in the core libraries:

  • Array bounds issues: where appropriate, bounds should be checked. It should not be possible to inadvertently cause an array bounds exception via otherwise sensible arguments.
  • Infinite loops/recursion: all loops and recursive algorithms with complex end conditions should perform checks to ensure that the loop does not continue indefinitely.
  • Arithmetic/logical failures: appropriate checks should be performed to prevent the possibility of division by zero, and to ensure that all outputs of calculations and other application logic are rational.
  • Inefficiency: care should be taken to ensure that, without sacrificing maintainability or simplicity beyond reason, there are no avoidable inefficiencies in the implementation; this could even include modifying the interface to provide the component in question with more nuanced contextual information.
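As a concrete illustration of the bounds and arithmetic checks above, defensive guards might look like the following sketch. The function names and defaults are ours for illustration, not the actual library's:

```python
# Illustrative defensive checks of the kind the review looks for.

def safe_lookup(table, index, default=0):
    """Bounds-checked table access: out-of-range indices return a
    default instead of raising an array bounds exception."""
    if not 0 <= index < len(table):
        return default
    return table[index]

def safe_ratio(numerator, denominator, fallback=0.0):
    """Guard against division by zero in scaling calculations."""
    if denominator == 0:
        return fallback
    return numerator / denominator
```

The point of the review is that every utility with an index or a divisor has a guard like these, so that a bad argument degrades gracefully rather than crashing code all over the robot.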

Part 2 – Autonomous Motion Algorithms

The various procedures that control the drive during autonomous mode using the position tracking system comprise the autonomous motion algorithms. These algorithms are central to both game autonomous and programming skills, and are also used occasionally in driver control code; each algorithm provides the capability of a different type of motion.

The following types of problems should be checked for in the autonomous motion algorithms:

  • Improper handling of choose mode: for turning algorithms, the “choose mode” should properly determine the shortest direction to turn, taking into account equivalent angles.
  • Rapid fluctuation in motor power: frequent significant changes to motor power, especially those involving a change in direction, can cause physical damage to the robot and should be avoided through techniques such as slewing.
  • Uncertain end conditions: under normal operation, the motion should always reach its end condition in reasonable time, without oscillation about the target; this must be guaranteed both by ensuring the logical soundness of the condition, and by accounting for very close error either through a dynamic end condition or through minimum output power, etc.
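Two of these checks can be made concrete with short sketches: normalizing the angle error so "choose mode" always turns the short way, and slew-rate limiting motor power. Function names and the step size are illustrative, not the actual implementation:

```python
# Sketch of shortest-direction turning and slew-rate limiting.

def shortest_turn(current_deg, target_deg):
    """Signed angle error in (-180, 180], accounting for equivalent
    angles, so the robot always turns in the shorter direction."""
    error = (target_deg - current_deg) % 360.0
    if error > 180.0:
        error -= 360.0
    return error

def slew(current_power, requested_power, max_step=10):
    """Limit how much motor power may change per control cycle,
    preventing sudden jumps or reversals that stress the drivetrain."""
    delta = requested_power - current_power
    if delta > max_step:
        delta = max_step
    elif delta < -max_step:
        delta = -max_step
    return current_power + delta
```

For example, a heading of 350° with a target of 10° yields an error of +20° (turn right 20°) rather than −340°, and a request to jump from 0 to full power is spread over several control cycles.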

Part 3 – Main Subsystem Controllers

The most complex section of the code is the group of procedural algorithms, mostly encoded into large state machines, that are responsible for controlling the various physical subsystems of the robot, as well as the automation of stacking. This code is used by both driver control and autonomous routines, and its review requires both analysis of logic and tuning of threshold sensor values.

The following types of problems should be checked for in the main subsystem controllers:

  • Dead-ends in state machine control flow: in each of the control state machines, the high-level control logic will often be implemented by states automatically transitioning to other states upon completion of their task; it should not be possible for the state machine to get stuck in a state wherein the driver does not have control of the robot.
  • Multiple tasks controlling the same output: the motors for a particular physical subsystem that has an associated state machine should only be set from outside the state machine if it is in a dedicated “managed” state.
  • Retention of old driver instructions: in some cases, particularly with stacking, global variables are set based on the driver’s input, in order to control the exact actions that are performed by the controller; these global variables should be reset once the action is complete, to ensure that they do not inadvertently affect future actions.
  • Improper bit-masking of complex state machine arguments: the stacking state machine in particular expects an argument that consists of a combination of bit-flags and numerical values at specific offsets; care should be taken to ensure that these arguments are properly decoded when the individual components are required.
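To make the bit-masking check concrete, a combined argument might be packed and unpacked as in the sketch below. The exact layout (two flag bits plus a 4-bit numeric field at offset 8) is hypothetical; the point is that the mask is applied before shifting when decoding:

```python
# Hypothetical layout for a combined state-machine argument:
# low bits are flags, a 4-bit numeric field sits at offset 8.
FLAG_DESCORE    = 1 << 0
FLAG_USE_INTAKE = 1 << 1
HEIGHT_SHIFT = 8
HEIGHT_MASK = 0xF << HEIGHT_SHIFT

def encode(flags, height):
    """Pack flag bits and a numeric height into one argument."""
    return flags | ((height << HEIGHT_SHIFT) & HEIGHT_MASK)

def decode(arg):
    """Unpack the argument; masking before shifting ensures stray
    bits from other fields cannot corrupt the decoded value."""
    flags = arg & ~HEIGHT_MASK
    height = (arg & HEIGHT_MASK) >> HEIGHT_SHIFT
    return flags, height
```

The review then checks every site that reads such an argument to confirm the correct mask and shift are applied, rather than a raw comparison against the whole value.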

Part 4 – Autonomous Routines

The final section of the code comprises the game autonomous and programming skills routines, which are implemented as large procedural functions. This code mostly consists of calls to autonomous motion algorithms and associated timeouts, but also makes use of the main subsystem controllers.

The following types of problems should be checked for in the autonomous routines:

  • Illogical or invalid parameters passed to autonomous motion algorithms: due to the rather large parameter lists for some of these algorithms, a certain rate of human error in assigning values is expected; this should be avoided by checking each and every use of the algorithms to ensure that the values make sense, both in terms of expected data type/range and in the context of the intended physical motion of the robot.
  • Short timeouts: often, the amount of time taken by various actions will differ between practice and competition, for reasons that we cannot control; thus it is imperative that every timeout is set to at least 400 ms more than the typical time in practice.
  • Incorrect order of instructions: due to the partial parallelization of actions that results from using asynchronous functions for nearly all instructions in autonomous routines, it becomes easy to accidentally issue an instruction before the previous motion has finished; each routine should be checked to confirm that every wait or synchronization point is present and correctly placed.
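The timeout rule can be encoded directly so it cannot be forgotten during review. The sketch below, with hypothetical measured practice times, derives each timeout from the practice time plus the 400 ms margin rather than hard-coding it:

```python
# Hypothetical typical practice times for autonomous actions, in ms.
PRACTICE_TIMES_MS = {
    "drive_to_cap": 1200,
    "turn_to_flag": 600,
}

# Safety margin required by the review checklist: competition timings
# vary for reasons outside our control.
MARGIN_MS = 400

def timeout_for(action):
    """Timeout = typical practice time + required safety margin."""
    return PRACTICE_TIMES_MS[action] + MARGIN_MS
```

Deriving timeouts this way turns the "at least 400 ms more than practice" rule into a single constant that the code review can verify in one place.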