Sunday, February 17, 2013

RoboMagellan Progress Spring 2013

During the Spring 2013 intersession, progress was made on the RoboMagellan platform in many areas. I will outline the improvements made in the following areas: the compass, the magellanclass, heading corrections, and the avoid function.

The compass has been the most troublesome hardware component on the platform. After the decision was made to remove the large expanded-PVC tail, which had previously housed the compass, a new position for the compass was needed. The compass was first placed at the front of the robot, on the new chassis that Vincent built. This position provided consistent results and allowed testing of other systems to proceed. The compass was then found to respond nonlinearly. This was discovered by comparing the compass reading from the robot with the reading from a smartphone compass while turning the robot through 90 degrees at a time. Some 90-degree turns exhibited compass reading changes as large as 135 degrees or as small as 45 degrees, depending on the compass position. The compass was repositioned to multiple locations and placed on top of supports up to approximately 12 inches in length in an attempt to correct the discrepancies, but these changes only made the problem more pronounced. The compass was reattached in its original location and a software solution was chosen instead.

The following function is called from within the getHeading function of the compass class. It takes two arrays as arguments and returns a heading that has been corrected based on the values in the calibration table and linear interpolation.

def correctHeading(self, rawHeading, raw, calibrated):
    # Walk the calibration table to find the interval containing rawHeading.
    # len(raw) - 1 avoids an IndexError when reading raw[index + 1].
    for index in range(0, len(raw) - 1):
        if rawHeading >= raw[index] and rawHeading <= raw[index + 1]:
            headingSpacingRaw = float(raw[index] - raw[index + 1])
            # Calibrated entries are a fixed 10 degrees apart; using the
            # constant avoids having to deal with the 350-to-0 wraparound.
            headingSpacingCalibrated = float(-10)
            # The ratio of the calibrated spacing to the raw spacing is the
            # slope for linear interpolation.
            correctionSlope = headingSpacingCalibrated / headingSpacingRaw
            # Difference between the test value and the lower raw value.
            interpolationValue = rawHeading - raw[index]
            # Look up the calibrated value and add the interpolation.
            correctedValue = (calibrated[index] + interpolationValue * correctionSlope) % 360
            return correctedValue


The code first finds the interval of the calibration table into which the raw value falls. It then calculates the spacing of that interval in each array. The ratio between the two spacings is used as the slope of the function for linear interpolation. The difference between the raw heading and the lower raw value is then multiplied by the slope and added to the lower calibrated value to determine the corrected heading. This has made the compass accurate enough for the robot to end up within a range where the camera can discern the cone.
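As an illustration, here is a minimal sketch of how the lookup tables might be used. The arrays below are hypothetical examples, not the robot's actual calibration data, and compass stands for an instance of the compass class; the calibrated entries are spaced 10 degrees apart, matching the hard-coded spacing in the function.

# Hypothetical calibration tables, for illustration only.
raw        = [0.0, 14.0, 26.0, 35.0, 42.0]   # what the compass reported
calibrated = [0.0, 10.0, 20.0, 30.0, 40.0]   # the true headings, 10 apart

# A raw reading of 20.0 falls between raw[1] = 14.0 and raw[2] = 26.0:
#   slope     = -10 / (14.0 - 26.0) = 0.8333
#   corrected = (10.0 + (20.0 - 14.0) * 0.8333) % 360 = 15.0
print compass.correctHeading(20.0, raw, calibrated)   # prints 15.0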

The magellanclass has seen evolutionary improvements to add new functionality and improve existing functions. The gains of the PID loop for steering were tuned to achieve stability while still allowing corrections quick enough to account for terrain changes and new headings. The PID values appear to depend on speed; if other improvements allow for greater speeds, the tuning may need to be revisited.
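For reference, here is a minimal sketch of the kind of heading-hold PID loop described above. The gains, the wrap-around error calculation, and the class interface are illustrative assumptions, not the tuned values; the -100 to 100 output range matches the percentage commands motors.turn() accepts elsewhere in this post.

# Minimal sketch of a steering PID; gains are placeholders, not the tuned values.
class SteeringPID(object):
    def __init__(self, kp=2.0, ki=0.05, kd=0.5):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.lastError = 0.0

    def update(self, desiredHeading, actualHeading, dt):
        # signed heading error wrapped into [-180, 180)
        error = (desiredHeading - actualHeading + 180.0) % 360.0 - 180.0
        self.integral += error * dt
        derivative = (error - self.lastError) / dt
        self.lastError = error
        output = self.kp * error + self.ki * self.integral + self.kd * derivative
        # clamp to the percentage range accepted by motors.turn()
        return max(-100.0, min(100.0, output))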

During testing it was found that if the robot started at a heading other than the desired heading, errors were incurred in the final position of the robot. This was determined to be caused by the turning radius of the robot and the distance traveled during the turn. The following function was written to eliminate these errors.

def correctInitialHeadError( self, desiredHeading, actualHeading, desiredDistance ):

        # signed heading error, already wrapped to [-180, 180] by angle_fix
        headingDifference = self.angle_fix( desiredHeading, actualHeading )
        print "headingDifference", headingDifference
        turningRadius = 4

        # lateral (x) displacement accumulated while turning onto the new heading
        if headingDifference < -90:
            xError = turningRadius * (-1 + math.sin(math.radians(headingDifference + 90)))
        elif headingDifference > 90:
            xError = turningRadius * (1 + math.sin(math.radians(headingDifference - 90)))
        else:
            xError = turningRadius * math.sin(math.radians(headingDifference))
        # forward (y) distance covered during the turn
        yError = math.fabs(turningRadius * math.sin(math.radians(headingDifference)))
        print "xError", xError, "yError", yError

        # straight-line distance remaining after the turn (Pythagorean theorem)
        actualDistance = math.hypot(xError, (desiredDistance - yError))
        print "actualDistance", actualDistance
        # wrap the corrected heading into [0, 360)
        correctedHeading = (desiredHeading - math.degrees(math.atan2(xError, actualDistance))) % 360
        # add the arc length traveled during the turn
        actualDistance = actualDistance + (math.pi * turningRadius) * (math.fabs(headingDifference) / 180)
        print "actualDistance", actualDistance

        return float(correctedHeading), float(actualDistance)

The function calculates the difference between the initial heading and the desired heading. It then uses the turning radius to calculate the x and y error due to the turn. These errors form a right triangle, and the Pythagorean theorem is used to determine a new straight-line distance. The distance traveled in the turn is then added, and the corrected heading and actual distance are returned. This function greatly improved the accuracy and precision of the robot's destination.
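A rough worked example, assuming angle_fix returns the signed difference wrapped to [-180, 180] and using the 4-foot turning radius hard-coded in the function (magellan stands for an instance of the class):

# Robot pointing 45 degrees off the desired heading, 100 ft to travel:
#   headingDifference = -45
#   xError = 4 * sin(-45) ~ -2.83 ft,  yError = 2.83 ft
#   straight leg = hypot(-2.83, 100 - 2.83) ~ 97.2 ft
#   corrected heading = 90 - atan2(-2.83, 97.2) ~ 91.7 degrees
#   plus the arc length of the turn: pi * 4 * (45/180) ~ 3.1 ft
heading, distance = magellan.correctInitialHeadError( 90.0, 135.0, 100.0 )
print heading, distance   # approximately 91.7 and 100.4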

The avoid function was almost completely rewritten during the spring semester, both to provide better object avoidance and to improve the clarity of the code. The new code has been performing well. Over the next few months we hope to implement the ability to get out of stuck situations, account for errors in heading and distance caused by object avoidance, integrate the horn, and fine-tune parameters.

def avoid(self):
    ###############################################################################
    #  Tommy and Mason rewrite 2013
    #
    #  1.  Check for an obstacle on each of the three sensors (left at 100 cm,
    #      center at 150 cm, right at 100 cm) or on the bump switch
    #  2.  Is it in the center?
    #         a.  Back up
    #         b.  Check the left; if there is no obstacle, go left by turning the wheels
    #         c.  Else, go to the right by turning the wheels
    #         d.  Run the motors backward for 20 ticks (about a foot) and
    #             until the obstacle is gone
    #  3.  Is there an obstacle on the left side?   Turn the wheels right until it goes away
    #  4.  Is there an obstacle on the right side?  Turn the wheels left until it goes away
    if( self.hasObstacle( 100 )[0] or self.hasObstacle( 150 )[1] or self.hasObstacle( 100 )[2]):  #or self.compass.getBumper( )
        ticks = self.encoder.getValidEncoder( )
        starting = self.compass.getHeading( )
        print 'Found Obstacle', self.hasObstacle( )
        if( self.hasObstacle( 150 )[1] or self.compass.getBumper( ) ):
            print 'something is in front of me, I should back up'
            self.motors.drive(-10) #brake
            time.sleep(.5)
            if not self.hasObstacle( 200 )[0]:
                print 'nothing on left, going there'
                self.motors.turn( 100 )
            else:
                print 'default, going right'
                self.motors.turn( -100 )
            initialPosition = self.encoder.getValidEncoder( )
            while ((self.encoder.getValidEncoder( )[0] > initialPosition[0] - 20) or (self.hasObstacle( 150 )[1])):
                self.motors.drive( -40 )
            self.motors.reset( )
            time.sleep( 2 ) #Temp for debug
            self.motors.turn( 0 )
            print "Backed up 20 ticks"
    while (self.hasObstacle( )[0]):  #Detect on Left
        self.motors.drive(40)
        self.motors.turn(100)
    while (self.hasObstacle( )[2]):  #Detect on Right
        self.motors.drive(40)
        self.motors.turn(-100)
    self.motors.turn( 0 )
    return self.encoder.getValidEncoder( )

Sonar Object Avoidance, Chassis, Servo Camera

Sonar Object Avoidance


The array of sonars is used for object avoidance. If an object is detected solely by the left or right sensor, the robot veers to the opposite side. If an object is detected by the middle sonar, the robot brakes (hard stop), backs up, and veers around the object, as well as calculating a new heading and distance based on the turning radius and arc length from compass readings. The following defines the sonar class, which returns an array of sonar readings; a sketch of the hasObstacle helper that consumes these readings follows the class.

import threadedserial
import time

class Sonar( threadedserial.ThreadedSerial ):
    def __init__( self, port, baud, command = 0 ):
        super( Sonar, self ).__init__( 'Sonar', port, baud, command )
     
    def getValidSonar( self ):
        '''
        Breaks the raw sonar string down into an array of integer
        distance readings, one per sensor.
        '''
        if self.reading:
            pings = self.reading.split( )
            ping = [0 for i in range( len( pings ) )]
            # store the readings in reverse order
            for i in range( len( pings ) ):
                ping[len( pings ) - 1 - i] = int( pings[i] )
            return ping
        else:
            return None

port_sonar = 2
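The hasObstacle helper called from avoid() is not listed in this post. Here is a minimal sketch of what it might look like, assuming getValidSonar() returns [left, center, right] distances in centimeters on a sonar attribute of the magellanclass, with zero meaning no echo; this is an illustration, not the team's actual implementation.

def hasObstacle( self, threshold = 100 ):
    # Sketch only: returns [left, center, right] booleans, True where a
    # sonar reading is closer than the threshold (in centimeters).
    pings = self.sonar.getValidSonar( )
    if pings is None:
        return [False, False, False]
    return [0 < p < threshold for p in pings]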

The correctInitialHeadError function shown earlier sets the turningRadius to 4 feet and uses the math module's trigonometric functions to calculate a new heading and distance after making the correction.


Chassis

The chassis was redesigned this year to lower the center of gravity and reduce tipping. In previous years, the limiting factor was the compass, as magnetic fields generated by the motor would interfere with the compass readings. This year, a new compass module was acquired and proved reliable at its current mounting position at the front of the chassis, just behind the bumper. Additionally, a rear bumper was installed to detect the robot backing into the cone during the cone tracking portion of the code. The rear bumper will also be used as a start button so that the remote desktop session can be disconnected prior to the competition run.


Servo Camera

Above: Camera mounted on servo

For additional functionality, the camera was mounted on a servo. The servo rotates approximately 180 degrees, allowing the robot to scan the surrounding area without moving. In the future, a function will track the cone and send servo values to the motor module. The test code defining panCamServo, as well as the test function for sweeping the servo, is below:

def panCamServo( self, amount = 0 ):
        '''
        Takes a percentage and converts it to the appropriate pan position.
        -100. to < 0. pans left, > 0. to +100. pans right.
        Values outside this range are clamped to the end stops.
        '''
        # use a local value so self.panServo keeps indexing pwm_attr
        if amount >= -100 and amount <= 100:
            value = int( amount / 100. * self.pwm_attr[self.panServo][1] )
        elif amount < -100:
            value = -self.pwm_attr[self.panServo][1]
        else:  # amount > 100
            value = self.pwm_attr[self.panServo][1]
        self.__send_pwm( value )

The following is the test function for moving the camera. The camera starts pointing all the way to the right of the robot and sweeps to the left in increments of 5 (over a range of +100 to -100) every 0.5 seconds:


import time
# Maestro is the team's servo/motor controller class, defined in the motor module

port_motor = 15
if __name__=="__main__":
    holding = 100
    motors = Maestro(port_motor)
    motors.doSetAttr( [[1460,640],[1650,656],[1496,504],[1500,500],[1500,700]] )
    while holding > -100:
        motors.panCamServo(holding)
        print "moving the camera", holding
        holding = holding - 5
        time.sleep(.5)

Wheel Encoders and Collision Detection

In order for Robo-Magellan to know where it's going, it needs to know where it's come from.

GPS, while useful, has its limitations in determining Magellan's location relative to a cone or waypoint: the time required to acquire a connection with positioning satellites, together with an error of up to three meters from absolute position, makes a combination of locally acquired odometry and compass heading superior for our purposes.
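For context, here is a minimal sketch of the dead-reckoning idea, combining a distance increment from the encoders with the compass heading; the function name and coordinate convention (heading measured clockwise from north) are illustrative assumptions.

import math

# Illustrative dead-reckoning update: advance an (x, y) estimate by the
# encoder-measured distance along the current compass heading.
def deadReckon( x, y, distance, headingDegrees ):
    x = x + distance * math.sin( math.radians( headingDegrees ) )
    y = y + distance * math.cos( math.radians( headingDegrees ) )
    return x, y

print deadReckon( 0.0, 0.0, 10.0, 90.0 )   # 10 ft due east -> (10.0, ~0.0)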

The previous solution for measuring change in distance consisted of an IR emitter and collector pair placed inside the robot's wheel hubs, recording alternating high and low readings as the wheels turned and the emitted IR was either absorbed (black electrical tape) or reflected (hard plastic) by the wheel. Water, dust, dirt, and mechanical vibration were culprits for error, in addition to drifting odometry caused by deformation of the air-filled tires across various terrains.

An overhaul of the drivetrain and sensors was required for a more reliable method of measuring distance traveled without constantly relying on GPS. The tires were first filled with dense foam and then glued to the wheel hubs to eliminate skewed encoder counts from tire deformation or wheels slipping on the hub. We then set ourselves to the task of building the new rotary encoders.

Rotary Encoder Components:
- Switching Signal (Neodymium Magnet)
- Motion (Rotary)
- Switching Device (Solidstate Hall Effect)
- Signal Processing (Arduino Nano)

Using pairs of hall effect switches and strong rare earth magnets, we developed a method of recording odometry that is less prone to interference and has much higher sensor resolution.
The strength of the magnets is important both for overcoming EMI from the brushed electric drive motors and for increasing the distance at which the switching devices can be activated.

The gear with the holes was chosen as the rotary carrier for the magnets

By gluing three magnets to the face of a gear inside the platform's drivetrain transmission, we achieved higher resolution than before: the digital signal can be measured approximately 17 times per unhindered wheel revolution, against the 3 times per revolution seen with the optical encoders.

Two hall effect switches, with colored heat shrink-covered wires, installed on transmission case

With the hall effect switches hot glued side by side on the outside of the transmission case, the magnetic field from the magnets on the gear was strong enough to engage both switches at the same time, which is important for determining rotary increments and direction of rotation. These signals are then stored and processed by software to determine the magnitude of the robot's linear travel (calculated from the wheel diameter and the measured rotation) and are combined with the compass heading to reach a waypoint or move the robot within optimal distance for the cone-finding routine.
Additionally, encoder odometry is used to measure how far off its intended path the robot travels when avoiding a collision with non-cone entities.
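As a back-of-the-envelope illustration of the distance calculation: the tick count below matches the roughly 17 transitions per revolution mentioned above, but the wheel diameter is an assumed value, not a measurement of the platform.

# Converting encoder ticks to linear distance; constants are assumptions
# for illustration and depend on the real gearing and tires.
import math

TICKS_PER_REV = 17          # transitions seen per wheel revolution (approx.)
WHEEL_DIAMETER_IN = 10.0    # hypothetical wheel diameter in inches

def ticksToInches( ticks ):
    revolutions = float( ticks ) / TICKS_PER_REV
    return revolutions * math.pi * WHEEL_DIAMETER_IN

print ticksToInches( 34 )   # two revolutions -> about 62.8 inches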

When Magellan believes it has found a cone, it uses vision to align itself with the cone and "bump switches" on the front of the platform (made to look like antennae) to detect when it has successfully touched the cone. Currently, the bump switches are cherry switches with flexible steel wire extensions glued to their levers, oriented so that they can detect a touch on the left or right side of the front of the platform. There are plans to integrate a third switch to detect collisions in the rear, as Magellan sometimes unintentionally collides with a cone by backing into it during the cone-finding routine.

- - - - - - - - - -

First test: Single Hall Effect switch and magnet installed on gear
(signal state tied to LED via Arduino)


/*
 Hall Effect Switch
 
 Turns on and off a light emitting diode(LED) connected to digital  
 pin 13, when Hall Effect Sensors, connected to digital pins 2 and
 3 are triggered by a magnet.
  
 This example code is in the public domain.
 
 http://www.hobbytronics.co.uk/arduino-tutorial8-hall-effect

 Modified by Thomas M. R. Thomas
*/

// constants won't change. They're used here to set pin numbers:
const int hallPin1 = 2;     // the number of the first hall effect sensor pin
const int hallPin2 = 3;     // the number of the second hall effect sensor pin
const int ledPin = 13;      // the number of the first LED pin
const int ledPin2 = 4;      // the number of the second LED pin
// variables will change:
int hallState1 = 0;          // variable for reading the first hall sensor status
int hallState2 = 0;          // variable for reading the second hall sensor status

void setup() {
  // initialize the LED pin(s) as an output:
  pinMode(ledPin, OUTPUT);
  pinMode(ledPin2, OUTPUT);  
  // initialize the hall effect sensor pins as an inputs:
  pinMode(hallPin1, INPUT);     
  pinMode(hallPin2, INPUT);
}

void loop(){
  // read the state of the hall effect sensors:
  hallState1 = digitalRead(hallPin1);
  hallState2 = digitalRead(hallPin2);
  if (hallState1 == LOW) {
    // turn LED on:
    digitalWrite(ledPin, HIGH);
  }
  else {
    // turn LED off:
    digitalWrite(ledPin, LOW);
  }
  if (hallState2 == LOW) {
    // turn LED on:
    digitalWrite(ledPin2, HIGH);
  }
  else {
    // turn LED off:
    digitalWrite(ledPin2, LOW);
  }
}

Second Test: Two Hall Effect switches, four possible signal states; increments and direction achieved
(signal states tied to LEDs via Arduino) 

#define encoder0PinA  2
#define encoder0PinB  5
#define encoder1PinA  3
#define encoder1PinB  6
 
volatile int encoder0Pos = 0;   //signed
int encoder0temppos = 0;
volatile int encoder1Pos = 0;    //signed
int encoder1temppos = 0;

char command;
int encoder1speed =0;
int encoder0speed =0;
int encoder1tempspeed =0;
int encoder0tempspeed =0;
int encoder1acc =0;
int encoder0acc =0;
int dLeft = 0;
int dRight = 0;
float davg = 0;

void setup() {
 pinMode(encoder0PinA, INPUT);
 pinMode(encoder0PinB, INPUT);
 pinMode(encoder1PinA, INPUT);
 pinMode(encoder1PinB, INPUT);
  
 attachInterrupt(0, doEncoder0, CHANGE);   // encoder pin on interrupt 0 - pin 2
 attachInterrupt(1, doEncoder1, CHANGE);   // encoder pin on interrupt 1 - pin 3
 Serial.begin (115200);
}
 
void loop(){
 OutputEncoders();
 delay(100);
}
 
int OutputEncoders(){
  // Reject readings that imply an impossibly large acceleration; these
  // are treated as sensor glitches and rolled back to the last good value.
  encoder1speed = abs(encoder1temppos - encoder1Pos);
  encoder0speed = abs(encoder0temppos - encoder0Pos);
  encoder1acc = abs(encoder1tempspeed - encoder1speed);
  encoder0acc = abs(encoder0tempspeed - encoder0speed);

  if (encoder0acc < 10) {
    encoder0temppos = encoder0Pos;
    encoder0tempspeed = encoder0speed;
  }
  else {
    encoder0Pos = encoder0temppos;
    encoder0speed = encoder0tempspeed;
  }

  if (encoder1acc < 10) {
    encoder1temppos = encoder1Pos;
    encoder1tempspeed = encoder1speed;
  }
  else {
    encoder1Pos = encoder1temppos;
    encoder1speed = encoder1tempspeed;
  }

  if ((encoder1acc < 5) && (encoder0acc < 5)) {
    davg = (encoder0Pos + encoder1Pos) / 2;
    if (Serial.peek() == '&') {
      // this just gets rid of the & char
      Serial.read();
      // command is the next char in the buffer
      command = Serial.read();
      if (command == '1') {
        Serial.print(encoder0Pos);
        Serial.print(" ");
        Serial.print(encoder1Pos);
        Serial.print(" ");
        Serial.println(davg);
      }
      else if (command == '2') {
        encoder0Pos = 0;       // signed
        encoder0temppos = 0;
        encoder1Pos = 0;       // signed
        encoder1temppos = 0;
      }
      else
        Serial.println(-10);
    }
    // get rid of anything that's not part of a command
    else if (Serial.peek() != -1) {
      Serial.read();
    }
  }
}
 
void doEncoder0(){
 if (digitalRead(encoder0PinA) == HIGH) {                                  // found a low-to-high on channel A, check channel B to see which way encoder is turning
   if (digitalRead(encoder0PinB) == LOW) {encoder0Pos = encoder0Pos - 1;}  // CCW
   else {encoder0Pos = encoder0Pos + 1;}                                   // CW
 }
 else {                                                                    // found a high-to-low on channel A, check channel B to see which way encoder is turning 
   if (digitalRead(encoder0PinB) == LOW) {encoder0Pos = encoder0Pos + 1;}  // CW
   else {encoder0Pos = encoder0Pos - 1;}                                   // CCW
 }
}

void doEncoder1(){
 if (digitalRead(encoder1PinA) == HIGH) {                                  // found a low-to-high on channel A, check channel B to see which way encoder is turning
   if (digitalRead(encoder1PinB) == LOW) {encoder1Pos = encoder1Pos + 1;}  // CCW
   else {encoder1Pos = encoder1Pos - 1;}                                   // CW
 }
 else {                                                                    // found a high-to-low on channel A, check channel B to see which way encoder is turning 
   if (digitalRead(encoder1PinB) == LOW) {encoder1Pos = encoder1Pos - 1;}  // CW
   else {encoder1Pos = encoder1Pos + 1;}                                   // CCW
 }
}
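For completeness, here is a minimal host-side sketch of the '&' command protocol implemented in OutputEncoders(), using pyserial; the port name and delays are assumptions, not the robot's actual configuration.

# Host-side sketch for the '&' command protocol above, using pyserial.
import serial
import time

nano = serial.Serial( '/dev/ttyUSB0', 115200, timeout = 1 )
time.sleep( 2 )            # allow the Nano to reset after the port opens

nano.write( '&2' )         # zero both encoder counts
time.sleep( 0.5 )
nano.write( '&1' )         # request "left right average"
print nano.readline( )     # e.g. "0 0 0.00"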


- - - - - - - - - -

During a practice run for a presentation at a Robotics Society of Southern California (RSSC) meeting, held on the CSU Long Beach campus, RoboMagellan successfully combined magnetic compass heading, odometry, collision avoidance and object swerving (via sonar sensors), vision-based cone finding, and cone collision detection (bump switches) to demonstrate its current abilities.

Minor adjustment to collision-avoidance required

With more fine-tuning, RoboMagellan would go on to have an even more impressive final run for its presentation to the members of RSSC.


**UPDATE**

A member of RSSC was kind enough to upload a video of the final run to YouTube: https://www.youtube.com/watch?v=ycHwPwjI0lM

Saturday, February 16, 2013

Magellan classes diagram and progress in camera module (Feb. 16)




Above: Magellan classes diagram

          During winter 2013, progress was made on the Magellan platform's camera module. As shown in the diagram, the camera module consists of three parts: the contracking2 class, the rr_api class, and the RoboRealm program.




          Originally, there were three main functions in the contracking2 class: Initialize_Vision_API(), which initializes the camera and the RoboRealm program; getCOG(), which returns the center of gravity of the target; and terminate_roborealm(), which closes the camera and the RoboRealm program. The original RoboRealm program was made to detect red color within a threshold and size range, and to return the center of gravity of the detected red area.
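          A minimal sketch of how those three functions fit together in a polling loop (illustrative only, not the team's actual tracking code; it assumes getCOG() returns [x, y, size] with 9999 as the "nothing found" sentinel, as in the shape-match variant shown later):

import time

# Illustrative polling loop over the original color-based pipeline.
Initialize_Vision_API( )
try:
    for i in range( 100 ):
        cog = getCOG( )          # [x, y, size] of the detected red area
        if cog[0] != 9999:       # 9999 means nothing was found
            print "cone candidate at", cog
        time.sleep( 0.1 )
finally:
    terminate_roborealm( )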


          To some degree, it was a powerful program for detecting red cones. However, the program might lead Magellan the wrong way when it detects another red object, such as a red wall, a red T-shirt someone is wearing, or a bottle with a red cover.

                          
          As a result, the RoboRealm program was extended to detect targets according to not only color but also shape. In the improved RoboRealm program, “improve2.robo”, a module called Shape Match is used to detect the shape of targets.

                     

          A folder of pictures can be loaded into this module, and the module compares those pictures against the image detected by the camera. Once the program finds a target that matches the loaded pictures, it surrounds the object with a green rectangle frame and returns a value called “confidence”.

          “Confidence” is a value that describes the similarity between the loaded pictures and the detected object. The higher the confidence value, the more accurate the match.

          In the improved contracking2 class, a new function called getConfidence() was made to get this value and help determine whether the detected object is the target cone. Correspondingly, a function called Check_Cone() was made in the test Magellan class to help find the target cones:


# rr is the RoboRealm API object (see the rr_api class)

def getCOG_shapeMatch( ):
    try:
        c_X = float(rr.GetVariable("SHAPE_X_COORD"))
        c_Y = float(rr.GetVariable("SHAPE_Y_COORD"))
        c_size = float(rr.GetVariable("SHAPE_SIZE"))
    except:
        # 9999 is the sentinel for "no shape found"
        c_X = 9999
        c_Y = 9999
        c_size = 0
    return [c_X, c_Y, c_size]

def getConfidence():
    try:
        confidence = float(rr.GetVariable("SHAPE_CONFIDENCE"))
    except:
        # 999 is the sentinel for "no confidence value available"
        confidence = 999
    return confidence

The getCOG_shapeMatch( ) and getConfidence() functions were added to the conetracking2 class to get the center of gravity, size, and confidence value from RoboRealm. Their test code identifies the target cones and returns the center of gravity when the confidence is above 80:


if __name__=="__main__":
    Initialize_Vision_API()
    c = getConfidence()
    while c< 80 or c>100:
        c = getConfidence()
        if c>80 and c< 100:
            print "target found"
            print getCOG( )


The two functions below were made in the Magellanclass module to distinguish cones from other objects by continually getting the confidence value from RoboRealm. The function Confidence_Filter() increases the accuracy of cone searching.



def Confidence_Filter():
    coneFound = False
    for i in range(10):
        c = getConfidence()
        if c > 80 and c < 100:
            coneFound = True
            break
    return coneFound


def Check_Cone():
    [cone_X, cone_Y, cone_size] = getCOG()
    cf = False
    if (cone_X != 9999) and Confidence_Filter():
        cf = True
    return cf

          Also, a RoboRealm module called “mean filter” is used to better detect objects at greater distances or in low light.


before mean filter


after mean filter

          Finally, pictures of cones were used to match the detected objects, and red objects of different shapes, such as a bottle with a red cover and an orange drill, were used to test the program. The program returned a high confidence value, more than 80, only for the cone, distinguishing it from the other shapes; it performed very well at finding the cone within 3 to 4 meters.


                                                           


(more upgraded functions and classes for camera module: 
Dropbox\Robotics Team\Engr 99 Robotics\Magellan-James\test)