Hello, we are Team 5225A, The E-Bots’ PiLons. This document is intended to provide an objective assessment of the current V5 beta system, which we have been testing for a few months. With a few necessary improvements, we believe that the V5 system can be an extremely useful platform for students and competitors of all skill levels.
One of the largest issues with the new V5 system is that legacy sensors are limited to a single ADI with eight 3-wire ports. Although the V5 Brain has 21 external “smart ports” in addition to the eight 3-wire ports on the embedded ADI, the only devices that can currently be plugged into the smart ports are the V5 motor, the vision sensor, and the radio.
We understand that the V5 motors include built-in encoders; they are a huge improvement over the IMEs that were available with the Cortex and will be useful in many scenarios. That said, they do not completely erase the need for external encoders. One issue that an increasing number of teams are beginning to realize is that drive wheels slip when accelerating or decelerating. As a result, many teams are beginning to measure forward and sideways movement using “tracking wheels”: encoders on non-powered wheels that are pressed into the ground. This requires an absolute minimum of one or two encoders, and ideally three for full position tracking. Since each encoder requires two ADI ports, this system, which helps teams improve their autonomous accuracy, leaves them with only two available ports on the embedded ADI.
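As a rough illustration of why three tracking wheels allow full position tracking, here is a minimal Python sketch of one odometry update step. The wheel offsets, names, and geometry below are hypothetical values for illustration only, not measurements from any particular robot, and real implementations would run this at a high rate on small encoder deltas.

```python
import math

# Hypothetical geometry (illustrative only): distances, in inches, from the
# robot's tracking center to each tracking wheel.
LEFT_OFFSET = 5.0    # left parallel tracking wheel
RIGHT_OFFSET = 5.0   # right parallel tracking wheel
BACK_OFFSET = 6.0    # perpendicular ("strafe") tracking wheel

def update_pose(x, y, theta, d_left, d_right, d_back):
    """One odometry step: integrate three tracking-wheel deltas (inches)
    into an updated field-relative pose (x, y, heading in radians)."""
    # Heading change comes from the difference of the two parallel wheels.
    d_theta = (d_left - d_right) / (LEFT_OFFSET + RIGHT_OFFSET)
    # Local displacement: forward from the parallel wheels, sideways from the
    # back wheel after removing the arc it sweeps while the robot turns.
    local_forward = (d_left + d_right) / 2.0
    local_strafe = d_back - BACK_OFFSET * d_theta
    # Rotate the local displacement into field coordinates, using the average
    # heading over the step as a small-step approximation.
    avg_theta = theta + d_theta / 2.0
    x += local_forward * math.cos(avg_theta) - local_strafe * math.sin(avg_theta)
    y += local_forward * math.sin(avg_theta) + local_strafe * math.cos(avg_theta)
    return x, y, theta + d_theta

# Driving straight: both parallel wheels advance equally, heading is unchanged.
pose = update_pose(0.0, 0.0, 0.0, 10.0, 10.0, 0.0)  # → (10.0, 0.0, 0.0)
```

With only the motors' internal encoders, wheel slip corrupts all three of these inputs at once; unpowered tracking wheels keep the measurement independent of traction, which is exactly why teams are willing to spend six ADI ports on them.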
Furthermore, students who wish to use pneumatics are at an even larger loss, as the pneumatic solenoids also rely on legacy ADI ports.
It is clear that the V5’s lack of sensor support inhibits students trying to learn how to write effective programs. To improve this product’s educational and competitive value, this issue must be addressed soon.
We suggest fixing this issue by releasing an external ADI expander that plugs into a smart port, by offering smart-port versions of the most common 3-wire sensors, or both.
One of the main advantages of the V5 over the Cortex system is the increase in motor power. With each V5 motor having roughly twice the power of a 393 motor, the eight-motor limit gives teams a total equivalent power of around sixteen 393s. However, because the limit is eight physical motors, the V5 is much less flexible than the Cortex system in how that power can be distributed around the robot. As a result, the extra power of a V5 motor is often wasted. To add to the issue, each V5 motor takes up ⅛ of the motor limit and adds more weight than a 393 motor.
One possible solution would be to release a smaller, weaker, and lighter V5 motor. The VEX EDR motor limit could then become a credit system with a total of 16 credits: regular V5 motors could be worth two credits, while the smaller motors could be worth one. This change would give teams flexibility in motor configurations similar to what the Cortex system had.
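To make the arithmetic of the suggestion concrete, here is a small illustrative sketch of how such a credit cap would work. The credit values and motor names are the placeholder numbers from above, not a final proposal.

```python
# Placeholder credit values from the suggestion above (final numbers would
# depend on the actual motors and the game the GDC designs).
CREDIT_LIMIT = 16
CREDITS = {"v5_motor": 2, "small_v5_motor": 1}

def is_legal(config):
    """Check a motor configuration (motor name -> count) against the cap."""
    return sum(CREDITS[motor] * count for motor, count in config.items()) <= CREDIT_LIMIT

# Eight regular motors (today's limit) exactly spends the 16 credits...
is_legal({"v5_motor": 8})                       # → True
# ...as does a mix of six regular motors and four smaller ones.
is_legal({"v5_motor": 6, "small_v5_motor": 4})  # → True
```

The point of the credit framing is that teams regain the Cortex-era ability to trade a few high-power motors for many low-power ones without changing the total power budget.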
*Clarification: the specific numbers listed above are just a quick suggestion to get the point across; the final decision would depend on exactly how strong the weaker motors are and the games the GDC designs.*
The V5 vision sensor is a new and exciting device that has a lot of potential for teams and students, both inside and outside the classroom. One concern is how its performance varies under different lighting conditions. It is difficult for teachers and Event Partners to control the lighting and background of fields in classrooms and at tournaments, especially when natural lighting is a factor, so using the vision sensor to its full potential would require some standard calibration process. Compounding this, there is no apparent way to calibrate the vision sensor’s signatures from code without plugging in a computer. Together, these issues complicate its use for students and make it difficult to see how the vision sensor can be used consistently at tournaments.
It would also be useful to be able to get the raw camera feed from the sensor for custom video processing, though, based on our current understanding, this would be impractical over the RS-485 bus used for communication. One alternative would be to let users upload custom code to the vision sensor for on-board processing; a simpler alternative could be to upload a custom OpenCV pipeline. This functionality would allow students to create more advanced code, serving as an invaluable learning opportunity.
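To give a sense of the kind of on-board processing we mean, here is a minimal Python sketch of a color-signature pipeline: threshold the pixels against a signature, then report a bounding box. This is purely illustrative; it is not the vision sensor’s actual API, and a real uploaded pipeline would operate on the sensor’s native image format.

```python
def find_blob(image, matches_signature):
    """Return the bounding box (min_row, min_col, max_row, max_col) of all
    pixels matching the signature, or None if no pixel matches."""
    hits = [(r, c) for r, row in enumerate(image)
                   for c, px in enumerate(row) if matches_signature(px)]
    if not hits:
        return None
    rows = [r for r, _ in hits]
    cols = [c for _, c in hits]
    return min(rows), min(cols), max(rows), max(cols)

# A tiny 4x4 "image" of (R, G, B) tuples with a red 2x2 patch in it.
img = [[(0, 0, 0)] * 4 for _ in range(4)]
for r in (1, 2):
    for c in (2, 3):
        img[r][c] = (255, 40, 40)

# A "signature" for red: strong red channel, weak green and blue channels.
def is_red(px):
    return px[0] > 200 and px[1] < 100 and px[2] < 100

find_blob(img, is_red)  # → (1, 2, 2, 3)
```

The key point is that the thresholds defining a signature are exactly what lighting changes break, which is why being able to recalibrate or replace this stage from code, rather than only from a plugged-in computer, matters so much at tournaments.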
Due to the lack of sensor support, the reduced freedom in motor usage, and other small issues, we do not believe that the current implementation of the V5 system is a large improvement over the Cortex. For that reason, the PiLons have decided to stay with the Cortex system, at least for the beginning of this competition season.
That being said, we see a lot of potential in the V5. The addition of the vision sensor, the new smart ports, and the improved coding environments is a huge step in the right direction. Solutions to the aforementioned problems will help this system become the ideal platform for both educational and competitive robotics.