I had a few options for programming Ziptie: LabVIEW, C++, and Java. Java seemed perfect, given that I had recently studied it and passed the Computer Science A exam. Programming FIRST robots in Java is done with WPILib, a collection of code and drivers that acts as a framework. WPILib managed the robot’s motors and sensors and provided easy access to their data, so most of the programming effort went into the robot’s algorithm: how it handled the data from its sensors and controls to perform successfully in the competition matches.
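The shape of that framework is worth seeing. Here is a minimal sketch of a WPILib robot class, assuming the 2011-era Java API (`IterativeRobot`); the class name is mine, not our actual code:

```java
import edu.wpi.first.wpilibj.IterativeRobot;

// WPILib calls each of these methods at the right time; the
// framework handles the motor and sensor plumbing underneath.
public class ZiptieRobot extends IterativeRobot {
    public void robotInit() {
        // runs once at power-up: create motors, sensors, joysticks
    }

    public void autonomousPeriodic() {
        // runs ~50 times per second during the autonomous period
    }

    public void teleopPeriodic() {
        // runs ~50 times per second while the drivers have control
    }
}
```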
After the build team assembled the basic robot chassis and drive system at a “quick build” session, fellow programmer Jeremy and I took the default sample code from WPILib and customized it to fit our robot’s specifications. We focused on starting with a clear, simple, and stable base control system that could later be expanded. Over the six-week build process we experimented with different control schemes, eventually settling on an arcade-drive configuration with a separate joystick for the robot’s arm. Our physical driver station consisted of the drive joystick on the left, our laptop in the center, and the arm joystick on the right. The drive joystick controlled robot movement; the laptop handled communications and let us adjust variables and view sensor data from the robot; and the arm joystick controlled the arm, camera, and claw.
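A sketch of that teleoperated scheme, again assuming the 2011-era WPILib API; the channel assignments and the Jaguar speed controller are assumptions for illustration:

```java
import edu.wpi.first.wpilibj.Jaguar;
import edu.wpi.first.wpilibj.Joystick;
import edu.wpi.first.wpilibj.RobotDrive;

public class TeleopControl {
    RobotDrive robotDrive = new RobotDrive(1, 2);  // left/right drive channels (assumed)
    Joystick driveStick = new Joystick(1);         // USB port 1: driving
    Joystick armStick = new Joystick(2);           // USB port 2: arm, camera, claw
    Jaguar armMotor = new Jaguar(3);               // arm speed controller (assumed)

    // Called from teleopPeriodic(), roughly 50 times per second.
    public void periodic() {
        // Arcade drive: one stick, forward/back on Y and turning on X.
        robotDrive.arcadeDrive(driveStick);
        // The arm joystick's Y axis drives the arm motor directly.
        armMotor.set(armStick.getY());
    }
}
```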
Each match of LOGO MOTION™ had a 15-second autonomous period preceding the 2-minute teleoperated period, and it was only during autonomous that “ubertubes” could be hung. Robots could not be manually controlled during this period; instead, they could accept initial inputs set by the drivers before the match, use their sensors to follow colored tape on the ground, and use their cameras to coordinate hanging the tubes. The kit of parts also included a gyroscope, an accelerometer, and multiple encoders, valuable sensors for a “dead reckoning” approach to the autonomous period.
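To make the tape-following idea concrete, here is a sketch of a simple three-sensor follower, assuming digital line sensors like those in the kit that read true over the tape; the channel numbers and steering values are illustrative, and the turn directions depend on how the sensors are mounted and wired:

```java
import edu.wpi.first.wpilibj.DigitalInput;

// Three digital line sensors: if an outer sensor sees the tape,
// steer back toward center. steer() returns a turn value suitable
// for RobotDrive.arcadeDrive(moveValue, rotateValue).
public class LineFollower {
    DigitalInput left = new DigitalInput(1);
    DigitalInput middle = new DigitalInput(2);
    DigitalInput right = new DigitalInput(3);

    public double steer() {
        if (left.get() && !right.get()) {
            return -0.3;  // tape drifting under the left sensor: turn left
        }
        if (right.get() && !left.get()) {
            return 0.3;   // tape drifting under the right sensor: turn right
        }
        return 0.0;       // centered on the tape: go straight
    }

    // True while the tape is still in view of any sensor.
    public boolean onTape() {
        return left.get() || middle.get() || right.get();
    }
}
```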
Jeremy and I decided to tackle autonomous functionality by starting with the line sensors. We decided that camera image processing, while made fairly easy by WPILib, would not be stable enough to justify its time cost. We ended up using three line sensors to follow the colored tape, but we found that the sensors’ update interval was too slow to track the tape accurately at high speeds. With only 15 seconds to hang the tube, our best option was to use the gyroscope and line sensors in conjunction to travel in a straight line to the scoring grid at the start of the match. Our drive motor encoder gave us a sense of how close we were to the scoring grid so we could adjust our speed as necessary. Hanging the tube was done with the arm motor encoder: by counting the rotations of the arm’s sprocket shaft, we could determine the angle of the arm and detect when it was at the right height to hang the ubertube.
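A sketch of that autonomous routine: gyro-corrected straight driving that slows as the encoder distance approaches the grid, plus an arm-encoder height check. It assumes the 2011-era WPILib API, and all channels, gains, distances, and counts here are illustrative assumptions rather than our measured values:

```java
import edu.wpi.first.wpilibj.Encoder;
import edu.wpi.first.wpilibj.Gyro;
import edu.wpi.first.wpilibj.RobotDrive;

public class AutonomousDrive {
    RobotDrive drive = new RobotDrive(1, 2);
    Gyro gyro = new Gyro(1);                  // analog channel (assumed)
    Encoder driveEncoder = new Encoder(4, 5); // digital channels (assumed)
    Encoder armEncoder = new Encoder(6, 7);

    static final double TARGET_FEET = 15.0;   // distance to the scoring grid
    static final double TURN_GAIN = -0.03;    // proportional heading correction

    public void init() {
        driveEncoder.setDistancePerPulse(0.0025);  // feet per pulse (would be measured)
        driveEncoder.start();
        armEncoder.start();
        gyro.reset();  // "straight ahead" is the heading at match start
    }

    // Called repeatedly from autonomousPeriodic().
    public void driveToGrid() {
        double remaining = TARGET_FEET - driveEncoder.getDistance();
        // Full speed at first, slow over the last few feet, then stop.
        double speed = remaining > 4.0 ? 0.7 : (remaining > 0.5 ? 0.3 : 0.0);
        // Any drift away from 0 degrees becomes a small counter-turn.
        drive.arcadeDrive(speed, gyro.getAngle() * TURN_GAIN);
    }

    // The arm encoder counts pulses on the sprocket shaft; comparing the
    // count to a setpoint tells us when the arm is at ubertube height.
    public boolean armAtUbertubeHeight() {
        return armEncoder.get() >= 850;  // setpoint would be found empirically
    }
}
```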
Much of Jeremy’s and my time was spent adding fine-tuned control functionality to the robot.