
CassiusClay/Bee


The gun used in CassiusClay. Pluggable as shown by BeeRRGC. -- PEZ

TargetingChallenge results

This gun is, as of Aug 7 2004, the strongest in the TC500. Here are some saved results so I can keep track of progress (I also include some of the competition for quick reference):

Name                  Author      DT     AspT   AOW     Spar   Cig    Tron   Fhqw   HTTC   Yngw   Funk   Total  Comment
CassiusClay 1.9.6.10  PEZ         88.08  96.91  99.99   98.93  86.89  90.04  97.38  90.21  98.56  91.08  93.81  Double buffered w/ AdvancingVelocity sign segmentation
CassiusClay 1.9.5.14  PEZ         88.03  95.42  99.78   99.11  91.01  90.56  97.23  87.65  97.95  90.59  93.73  Triple buffered w/ AdvancingVelocity sign segmentation
CassiusClay 1.9.6.13  PEZ         85.91  95.76  99.86   99.14  89.66  89.21  97.85  90.02  98.29  89.39  93.51  Double buffered w/o AdvancingVelocity sign segmentation
Raiko 0.43            Jamougha    86.02  94.64  99.63   99.05  84.06  87.78  96.42  88.47  98.60  90.04  92.47
SandboxDT 2.71 hiseg  Paul Evans  80.63  94.70  99.98   99.32  84.38  90.19  95.65  89.67  98.54  88.44  92.15  With hiseg=true in DT.properties file
Shadow 3.06           ABC         80.92  91.41  99.99   98.93  84.82  91.44  96.05  89.18  97.75  89.37  91.99
FloodMini (no data)   Kawigi      79.86  93.05  100.00  97.71  76.98  89.49  96.82  88.15  98.43  87.89  90.84  [Details]


Comments / Questions / Whatever

I know what double and triple buffering are in graphics, but what do you mean by them in robocode? --David Alves

Probably not what it means in graphics. =) Bee collects data into several stat buffers with different segmentations. These buffers are then weighted together when it's time to decide where to fire. The current version of CC uses three buffers. It's my cheap way of doing DynamicSegmentation. -- PEZ


I may have spotted a small bug in Bee that could improve your score in the first one or two rounds. I came across it when implementing data saving/loading for BeeWT. It's about this function:

    int mostVisited() {
        double visitsFull[] = factorsFull[bulletPowerIndex][distanceIndex][velocityIndex][lastVelocityIndex][velocityChangedIndex][wallIndex];
        double visitsShort[] = factorsShort[bulletPowerIndex][distanceIndexShort][velocityIndex][lastVelocityIndex][velocityChangedIndexShort][wallIndexShort];
        double visitsRH[] = factorsRH[bulletPowerIndex][distanceIndexShort][velocityIndex][lastVelocityIndex][velocityChangedIndexShort][wallIndexShort][rhIndex];
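        // Note: index 0 of each buffer holds its usage count, used below to normalize the visit counts.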
        int mostVisitedIndex = MIDDLE_FACTOR;
        double most = visitsFull[MIDDLE_FACTOR] / visitsFull[0] + visitsShort[MIDDLE_FACTOR] / visitsShort[0];
        for (int i = 1; i < FACTORS; i++) {
            double visits = visitsFull[i] / visitsFull[0] + visitsShort[i] / visitsShort[0];
            if (visits > most) {
                mostVisitedIndex = i;
                most = visits;
            }
        }
        return mostVisitedIndex;
    }

The bug is that if the node in one of the two arrays is empty, you will always fire head-on, even if the other is not empty. That is because 'visits' will be NaN, caused by a division by zero. Especially in the case where you don't have data in the Full array node but you do in the Short array node (which is there for fast learning, right?), you will still not return the Short array's best index; you will return MIDDLE_FACTOR instead. --Vic
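
A minimal standalone demonstration (not Bee code) of the NaN semantics behind this bug: in Java, 0.0 / 0.0 yields NaN, and every ordered comparison involving NaN is false, so once a NaN sneaks in, the "visits > most" test can never succeed and mostVisited() falls back to MIDDLE_FACTOR.

    public class NaNDemo {
        public static void main(String[] args) {
            double[] emptyNode = new double[5]; // an unvisited segment: all zeros
            double visits = emptyNode[1] / emptyNode[0]; // 0.0 / 0.0
            System.out.println(visits);               // prints NaN
            System.out.println(visits > 0.0);         // prints false
            System.out.println(Double.isNaN(visits)); // prints true
        }
    }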

Thanks! I hate that I can't tell Java to bail when there is a NaN involved in math operations... But that bug is going away at least. Good thing that Bee caught your interest Vic. =) -- PEZ

Have you already fixed it? If not, I have a patch if you are interested. --Vic

No, haven't looked at the fix yet. Please post the patch. -- PEZ

    int mostVisited() {
        double visitsFull[] = factorsFull[bulletPowerIndex][distanceIndex][velocityIndex][lastVelocityIndex][velocityChangedIndex][wallIndex];
        double visitsShort[] = factorsShort[bulletPowerIndex][distanceIndexShort][velocityIndex][lastVelocityIndex][velocityChangedIndexShort][wallIndexShort];
        double visitsRH[] = factorsRH[bulletPowerIndex][distanceIndexShort][velocityIndex][lastVelocityIndex][velocityChangedIndexShort][wallIndexShort][rhIndex];
        int mostVisitedIndex = MIDDLE_FACTOR;
        if(visitsShort[0]==0)
        {
            double most = visitsFull[MIDDLE_FACTOR] / visitsFull[0];
            for (int i = 1; i < FACTORS; i++) {
                double visits = visitsFull[i] / visitsFull[0];
                if (visits > most) {
                    mostVisitedIndex = i;
                    most = visits;
                }
            }
        }
        else 
        {
            if(visitsFull[0]==0)
            {
                double most = visitsShort[MIDDLE_FACTOR] / visitsShort[0];
                for (int i = 1; i < FACTORS; i++) {
                    double visits = visitsShort[i] / visitsShort[0];
                    if (visits > most) {
                        mostVisitedIndex = i;
                        most = visits;
                    }
                }
            }
            else
            {
                double most = visitsFull[MIDDLE_FACTOR] / visitsFull[0] + visitsShort[MIDDLE_FACTOR] / visitsShort[0];
                for (int i = 1; i < FACTORS; i++) {
                    double visits = visitsFull[i] / visitsFull[0] + visitsShort[i] / visitsShort[0];
                    if (visits > most) {
                        mostVisitedIndex = i;
                        most = visits;
                    }
                }
            }
        }
        return mostVisitedIndex;
    }

It's not pretty, but it works.

Another thing, why do you give the Short and Full arrays equal weights? I had expected the Full array to carry more weight because the Short one is there only for fast learning purposes. Is this done on purpose?

--Vic

Thanks for the patch. I can work from there to get it more maintainable.

Yes, it's on purpose that I weight them equally. My theory behind this setup is that the buffer most certain about its guess should rule. Let's say I have two nodes A and B for a given situation. A has a clear spike while B is more flat. Then A's stats should tip the guess to its side, regardless of whether A's stats are the result of less segmentation than B's. In practice, though, it is probably more common that the more segmented buffer will show more spikes (that's why we segment in the first place), and so the more segmented buffer will most often rule. Only theory, but it seems to work, doesn't it? =)

-- PEZ
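
A quick worked example of the "most certain buffer rules" idea (made-up numbers, not measured data): suppose both nodes have been used 10 times. A spiky node A with 8 of its visits in one bin contributes 8/10 = 0.8 to that bin's sum, while a flat node B with 2 visits in each of 5 bins contributes only 2/10 = 0.2 per bin. A's spike dominates the combined score no matter which of the two buffers is the more segmented one.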

Very nice! I can see how that works. --Vic

You must have an old version of Bee to work from, Vic. My current version (since many, many CCs back) suffers from the same NaN problem though. But since it uses three buffers, your patch would make it really hard to maintain, I think. So I've done it like this instead:

    int mostVisited() {
        double visitsFaster[] = factorsFaster[bulletPowerIndex][distanceIndexFaster][velocityIndex][lastVelocityIndex][velocityChangedIndexFaster][wallIndexFaster];
        double visitsMedium[] = factorsMedium[bulletPowerIndex][distanceIndex][velocityIndex][lastVelocityIndex][velocityChangedIndex][wallIndex];
        double visitsSlower[] = factorsSlower[bulletPowerIndex][distanceIndex][velocityIndex][lastVelocityIndex][velocityChangedIndex][wallIndex][approachSignIndex];
        int mostVisitedIndex = MIDDLE_FACTOR;
        double most = 0;
        for (int i = 1; i < FACTORS; i++) {
            double visits = visitsFaster[i] / (visitsFaster[0] + 1) + visitsMedium[i] / (visitsMedium[0] + 1) + (Bee.roundNum > 30 ? visitsSlower[i] / (visitsSlower[0] + 1) : 0.0);
            if (visits > most) {
                mostVisitedIndex = i;
                most = visits;
            }
        }
        return mostVisitedIndex;
    }
That should work, I hope?

-- PEZ

Jim pointed out that the above fix introduces a new problem: early on, when a buffer's usage count is still small, the +1 in the denominator noticeably underweights it, biasing the blend toward the more mature buffers. He suggested this one instead:

        double visits = visitsFaster[i] / Math.max(1, visitsFaster[0]) + visitsMedium[i] / Math.max(1, visitsMedium[0]) + (Bee.roundNum > 30 ? visitsSlower[i] / Math.max(1, visitsSlower[0]) : 0.0);
Which is what I now use. Thanks Jim! Love ya!

-- PEZ
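
To make the early-round bias concrete (made-up numbers): a young buffer used twice, with both visits in one bin, scores 2 / (2 + 1) = 0.67 for that bin under the first fix, while Jim's Math.max(1, n) version gives the full 2 / 2 = 1.0. A buffer with 200 uses barely notices the +1, so the first fix systematically underweights the fast-learning buffers exactly when they matter most.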

Yes, I used BeeRRGC 1.0 because, IIRC, it scored the most points in the RRGC. --Vic

Maybe it did. But it was only a one or two point margin, and that's not significant given the margin of error, I think. Besides, the current version should benefit even more from a SuperNode focus. Though maybe its data files would outgrow even that amazing compression scheme. -- PEZ

At least we can see now if this scheme actually works or not. If it does, and rating improves significantly, I'm sure you will want to build it into CC. Right? I'll assist you with that if you want. By the way, why do you think the current Bee benefits more? --Vic

Plugging in the BeeWT gun as it is should be very straightforward, I guess. I will definitely try it once it has data on the right selection of RR bots. -- PEZ

Forgot to answer this one: I think the current Bee would benefit more from saved data since it has yet another level of segmentation that probably takes a while to kick in. Since CC can't afford to save gun data on even one enemy, the possible added qualities of this extra buffer don't show in the RRGunChallenge. So you might have used the wrong criteria to choose the Bee version. =) -- PEZ

Bee scores far above all other GF guns in the TargetingChallenge and TargetingChallenge/FastLearning. Yet it is very similar in design and execution to many other GF guns. What do you think gives Bee the edge? --David Alves

In TC35 I would say Shadow's gun is as strong as Bee. The difference in score could be within the margin of error. Even so, your question is valid.

One thing I know makes the gun strong is the combination of slow and faster stat buffers. The Pugilist gun is identical in all other respects, but P's gun is strong too, so we can dig a bit further. I think the wall segmentation is good. (I extrapolate the enemy's lateral movement into the future and see how soon it would hit the wall if it changed path.) If my memory is correct I have 4 wall segment indexes.

A thing making my guns unique is that I segment on velocity and lastVelocity instead of velocity and acceleration. This makes my gun able to sort out accel/decel itself, and also to "see" the special case where the enemy's velocity goes from something high to something low very fast (which means they have collided with something). Also, my velocity segmentation, using indexes built on abs(velocity / 3), puts MAX_VELOCITY in a bin of its own. Further, I've tried hard to make my time_since_velocity_change segmentation produce an even distribution of indexes over the range of bullet_travel_times I think is important.

Other than that... the gun is free of fuzzy stuff like smoothing and bot_width consideration and in_field_translations and what have you. I use raw visit counts and trust my heavy segmentation to sort out the stray cases. Hmm... and I segment on bullet_power and distance to make sure I don't contaminate my stats with my energy management.

I think I might be the one robocoder who has worked the hardest on making the gun perform, so very few aspects of my gun have been left without challenging tests. I have tweaked every segmentation down to the decimal.

That said, be careful with letting TC performance guide you too much. The gun in the lead of the TC score tables is from CC 1.9.6.10. Equipping the current CC with that gun makes it lose 15 points in the RR@H. And the current Bee scores .5 points lower in TC500. You have been warned. =)

-- PEZ
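
For illustration, here is a hypothetical reconstruction of the velocity segmentation described above ("indexes built on abs(velocity / 3)"). The rounding that isolates MAX_VELOCITY is my assumption, not confirmed Bee source:

    class VelocitySegmentSketch {
        // Hypothetical sketch, not Bee's actual code. With Robocode's
        // MAX_VELOCITY of 8, rounding abs(velocity) / 3 maps speeds 0-1 to
        // bin 0, 2-4 to bin 1, 5-7 to bin 2 and full speed 8 to bin 3,
        // giving MAX_VELOCITY a bin of its own.
        static int velocityIndex(double velocity) {
            return (int) Math.round(Math.abs(velocity) / 3.0);
        }
    }

Using the same index function on lastVelocity as a separate dimension is what lets the stats sort out acceleration versus deceleration, and makes a high-to-low velocity jump (a collision) land in its own segment.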

15-season FastLearning results are only accurate to ±1 point in my experience. I use 50-season results for tweaking my gun. I haven't done a similar test for the normal TargetingChallenge, but I would expect that to get stable results you should average at least 3 seasons. So scoring .5 lower is probably insignificant.

--David Alves

I know, but that's not what I'm talking about. In this case the guns have a definite difference in TC performance, but the winner there loses by a pretty large margin in the RR@H. To really decide about gun tweaks it is best to test them in the RRGunChallenge. -- PEZ

Hey PEZ, I've got a quick question about Bee. In an effort to evaluate how good my gun is outside of the segmentations, I tried just implementing the segmentations from the Bee gun. As I expected, the results were lower than yours but higher than mine over 500 rounds vs the TC2K6 non-surfers. Right now, without my AntiSurfing gun, I get 91.66, Bee gets over 94, and my gun with Bee segments gets 92.6 (over 2 seasons). So I took another look at your Bee code to see how you did things...

    int mostVisited(GunWave w) {
        int defaultIndex = GunWave.MIDDLE_BIN + GunWave.MIDDLE_BIN / 4;
        double uses = 0;
        double[][] buffers = buffers(w);
        for (int b = 0; b < buffers.length; b++) {
            uses += buffers[b][0];
        }
        if (uses < 1) {
            return defaultIndex;
        }
        List visitRanks = new ArrayList();
        for (int i = 1; i < GunWave.BINS; i++) {
            double visits = 0;
            for (int b = 0; b < buffers.length; b++) {
                visits += uses * buffers[b][i] / Math.max(1, buffers[b][0]);
            }
            visitRanks.add(new VisitsIndex(visits, i));
        }
        Collections.sort(visitRanks);
        return ((VisitsIndex) visitRanks.get(0)).index;
    }

I understand this code, I just have one simple question - what purpose does it serve to multiply every addition to "visits" by "uses"? If "uses" is constant throughout this whole loop, and all you care about is which bin is the highest relative to the others, couldn't you just omit that multiplication? I'm just trying to figure out if I am misunderstanding this here, which is of course very possible =) -- Voidious

If you think that's some heavy smoothing you should know I have 107 bins. You do realize that you are still smoothing WAY more than using pow(x, 2) (old Bee) with 47 bins, right? As in dividing edge bins on a GF0 hit by ~7 instead of by ~600? If you did want the same smoothing, you'd be best off multiplying by (107/47) and using the same exponent, I reckon. -- Voidious
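
For reference, a generic sketch of pow(x, 2) bin smoothing of the kind mentioned here (hypothetical, not Bee's exact code): every bin gets some credit for a hit, scaled down by the squared bin distance, so with 47 bins an edge bin on a GF0 hit is divided by about 23^2 = 529, the order of magnitude cited above.

    class SmoothingSketch {
        // Hypothetical pow(x, 2) smoothing sketch, not Bee's exact code.
        // Bin 0 is skipped because it holds the buffer's usage count.
        static void registerHit(double[] bins, int hitIndex) {
            for (int i = 1; i < bins.length; i++) {
                // The +1 avoids division by zero at the hit bin itself.
                bins[i] += 1.0 / (Math.pow(i - hitIndex, 2) + 1);
            }
        }
    }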

Smoothing. I have found that it doesn't matter too much how much I smooth as long as I do smooth.

The reason I multiply by uses up there is that I want the result to be > 1. This isn't a foolproof way to ensure that, but I compensate for it again in the compareTo() of the VisitsIndex class.

-- PEZ
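
One hedged reading of that compensation (not the actual Bee source): if compareTo() truncates the difference of two visit scores to an int, any difference smaller than 1 collapses to a tie, which would explain wanting the sums scaled above 1 by multiplying with "uses". A sketch:

    // Hypothetical sketch of VisitsIndex, not the actual Bee source.
    class VisitsIndex implements Comparable {
        double visits;
        int index;

        VisitsIndex(double visits, int index) {
            this.visits = visits;
            this.index = index;
        }

        public int compareTo(Object o) {
            // Descending sort, so index 0 holds the most visited bin after
            // Collections.sort(). The (int) truncation ties any scores that
            // differ by less than 1, hence the scaling in mostVisited().
            return (int) (((VisitsIndex) o).visits - visits);
        }
    }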

I noticed something here that seems very odd, but maybe you can explain it:

    private static double[][][][][][] faster = new double[...][GunWave.BINS];
    // ....
    for (int i = 1; i < GunWave.BINS; i++) {

Ok, that loop there is actually iterating (BINS - 1) times; it's as if you have no GF1 or no GF-1, isn't it? Since you use the 0 index for something else, shouldn't you have (GunWave.BINS + 1) as the size of the last array dimension in order to have BINS bins? Or do you just assume nobody will ever hit GF-1? (It looks like that's the side that's cut off in visitingIndex().) Sorry, this is a very trivial thing, I'm just curious =) -- Voidious

Since Bee's calculation of GF-1 is simplified and disregards the time it takes to decelerate and accelerate again in the other direction, GF-1 is in fact unreachable. So it is pretty safe to use that slot for keeping the usage count it now does. -- PEZ
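
Spelled out as a sketch, the buffer layout implied by this exchange (my reading, not quoted Bee code):

    // Sketch of the implied layout, not quoted Bee code: slot 0 is the
    // segment's usage count, slots 1 .. BINS - 1 are guess factor visit
    // counts, with the unreachable GF-1 edge sacrificed for the count.
    class BinLayoutSketch {
        static final int BINS = 47; // old Bee's bin count, per the smoothing discussion above
        double[] buffer = new double[BINS];

        void registerVisit(int visitIndex) {
            buffer[0]++;          // one more use of this segment
            buffer[visitIndex]++; // observed guess factor, 1 .. BINS - 1
        }
    }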

I noticed you are using predicted next-positions to calculate the startBearing on your gun waves. I was doing this, too, but I realized how silly it is when I could just wait until the next tick, get the actual positions from that next scan, and create the same wave the next tick. Anyway, just thought I'd mention it, every little bit counts when you're so close to a 90 TC2K6 score. =) -- Voidious

First, let me note that I am quite sick, and might be very out of it! There's one more thing that seems slightly off to me in Bee (lambda). If (setFireBullet != null), you set the weight of the wave you're creating to 5; however, that bullet was fired from this tick's source, using this tick's predicted bearing, and it was aimed last tick. Shouldn't you set weight = 5 on the previous wave? (Please know that I mean no disrespect - the only reason I'm noticing these things is that I'm dealing with the same issues in Dookious, and I happen to know Bee well enough to realize that it might be off.) -- Voidious
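
A hypothetical sketch of the fix being suggested here (not actual Bee code; the GunWave stub and its weight field are stand-ins for whatever Bee uses): keep a reference to last tick's wave and promote that one when a bullet actually fires.

    // Hypothetical sketch, not actual Bee code. The bullet fired this tick
    // was aimed last tick, so the extra learning weight belongs on the
    // wave created last tick, not the one created this tick.
    class WaveWeightSketch {
        static class GunWave { double weight = 1; } // minimal stand-in

        GunWave previousWave;

        void onNewWave(GunWave newWave, boolean bulletFiredThisTick) {
            if (bulletFiredThisTick && previousWave != null) {
                previousWave.weight = 5; // the real bullet rides last tick's wave
            }
            previousWave = newWave;
        }
    }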

Actually, I have not much of a clue. This whole next/previous tick thingy confuses me like hell. It's only after heavy intakes of Bourbon (Maker's Mark if I can get hold of it) that I try to understand it. Feel free to experiment with Bee and send me the patches. =) -- PEZ

Hmm, maybe I will toy with it... Then again, I'm a lot less inclined to work on CC when I'm only a few points away from him in rating. =) -- Voidious

Well this is unexpected...

1st: davidalves.Phoenix 0.30.1	9291	3500	700	4384	707	0	0	70	30	0
2nd: wiki.PhoenixBee	        5182	1500	300	3091	291	0	0	31	70	0
--David Alves

And just a bit frightening, unless your gun is highly optimized against your own movement ... it would be cool to see a new version of PhoenixRRGC. Congrats on reclaiming your spot in the top 10, by the way. Also, Dookious's gun is pluggable just like CC's, so feel free to try that, too, if you want. =) -- Voidious

Just to make sure I did this right, here's what I did to integrate Bee:

//new field variables:
static RobotPredictor beePredictor = new RobotPredictor();
static Bee bee;

//first line of run()
public void run() {
    if(bee == null) bee = new Bee(this, beePredictor);
    ...

//last line of onScannedRobot(ScannedRobotEvent e)
bee.onScannedRobot(e);

//lastly, onBulletHit
bee.onBulletHit(e);

That's all I need to add, right? --David Alves

I'm not sure if what you have there is wrong, but CassiusClay itself does some things a little differently...

Bee stinger;
RobotPredictor robotPredictor = new RobotPredictor();

public void run() {
    stinger = new Bee(this, robotPredictor);
    ...
}

public void onScannedRobot(ScannedRobotEvent e) {
     stinger.onScannedRobot(e);
}

public void onBulletHit(BulletHitEvent e) {
    stinger.onBulletHit(e);
}

public void onWin(WinEvent e) {
    stinger.roundOver();
}

public void onDeath(DeathEvent e) {
    stinger.roundOver();
}

public void setTurnRightRadians(double turn) {
    super.setTurnRightRadians(turn);
    robotPredictor.setTurnRightRadians(turn);
}

public void setAhead(double d) {
    super.setAhead(d);
    robotPredictor.setAhead(d);
}

public void setMaxVelocity(double v) {
    super.setMaxVelocity(v);
    robotPredictor.setMaxVelocity(v);
}

The RobotPredictor stuff definitely needs to be there for the predictor to work correctly, but I don't know how much leaving it out would hurt the gun. As far as initializing each round and calling roundOver(), I can't remember if those would affect the functionality of the gun or not. Hope that helps.

-- Voidious

Just out of curiosity, which version of CC (read Bee) should be used for an AnyRobotBee version: just the latest, or is there a 'standard' version to use? -- GrubbmGait

I always used the latest version. -- Voidious

