The league is here: https://tobe.homelinux.net/robocode/ -- Dummy
Went down while I was away. Just my typical luck :-) Should be OK now -- tobe
When should we expect the next one? -- Kuuran
Well, today, I gather, the last traditional MiniBot challenge is being run (20030802), and in two weeks tobe is planning a final show-down between the top ten bots of each category. Now, one part of me is thinking, "I'm glad I got FloodMicro updated so he could likely be a part of this 'historic' final battle," but the other part of me is reminding me that I wouldn't be making top-10 megabots if I didn't have the 'focus' inherent in MiniBot programming for this contest. I'm considering taking over this tournament, cleaning out bots whose authors no longer care much about them, and possibly using a more ER-style scoring system (but still in a tournament mode). Or maybe I'd just keep the same rules and use tobe's software to run the tournaments. What do people think? It's sad to see tobe's challenge end, in my opinion. -- Kawigi
I don't think the ER scoring system is suitable for a tournament. A tournament needs clear and easy rules, and the ER scoring system is obscure (even if it probably gives more accurate results). So why not use the existing software and just clear out the "old" bots? -- Albert
By 'ER scoring system' I was also thinking of using % of total score rather than the difference in score. But the difference in score isn't bad (if you used raw score, NanoStalker would probably win every one :-p) -- Kawigi
% of score or difference in score is good (the Robocode Outpost method of total score may not reflect how good a bot is - only how good it is at racking up a high score, perhaps by staying close to all its opponents). -- Paul Evans
Could you elaborate on this some more? What do you mean by difference in score? -- PEZ
I believe the current MiniBot Challenge works by taking the scores of both contenders and assigning something like (winnerScore - loserScore)/2 + 1 to the winner and -(winnerScore - loserScore)/2 to the loser. It then takes the ordered ranking of the bots after that season and pairs each bot with the closest robot in rank that it hasn't fought yet (or something like that). That prevents overly aggressive robots, which score 4000 points against every opponent but lose every battle, from getting the really high scores in this competition (whereas more aggressive robots benefit in the RO league - look at DevilFISH, or even FloodMini). Of course, the winning robot will usually get really easy draws, at least in the first round and possibly in the first several rounds. If it gets the toughest opponents right off, even if it still wins by a small margin, it might not be able to catch up. I notice this especially when FloodMini goes undefeated for an entire running of the MiniBot challenge, even beating the eventual winner, and still gets something like 4th or 5th :-p.
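In code, that pairing-score rule would look something like this - a minimal sketch of the rule as described above; the exact formula in tobe's software may differ:

    // Sketch of the difference-based match scoring described above.
    // Both deltas depend only on the score difference, so piling up a
    // huge raw score gains nothing unless the bot also wins by a lot.
    public class MiniBotScoring {
        // Returns {winnerDelta, loserDelta} for one match.
        static double[] matchDeltas(double winnerScore, double loserScore) {
            double halfDiff = (winnerScore - loserScore) / 2;
            return new double[] { halfDiff + 1, -halfDiff };
        }

        public static void main(String[] args) {
            double[] d = matchDeltas(3200, 2800); // winner +201, loser -200
            System.out.println("winner: +" + d[0] + ", loser: " + d[1]);
        }
    }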
I've considered instead basing the score on % of total score and on Paul Evans's rating system used on the ER, where doing slightly better against a really good bot is worth just as much as kicking the living trash out of a really poor bot. Every bot's score would be re-evaluated relative to the new scores of each bot it faced in the past. The problem is: in what order? And how do I get a basis to start with? If I just start everyone at 0 (or 1500, whatever you like) and don't re-adjust scores when it turns out that a former opponent was actually really strong, I don't really 'fix' the luck-of-the-draw system, but I do reduce the chance for a comeback when a bad bot does extremely well against another bad bot - another thing that can happen in this format, whether or not it's a negative thing. And the other question is whether I want to 'fix' the luck-of-the-draw system at all, or just let it slide. If I could find a good way to do it, though, I'd like slightly more consistent results to come out of it. -- Kawigi
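For concreteness, here is a rough sketch of that kind of iterative re-rating. Everything in it is illustrative - the logistic expectation, the 400-point scale, and the K-factor are borrowed from chess Elo, not taken from Paul's actual ER math - but it shows how old results get re-scored as opponents' ratings move:

    import java.util.*;

    // One recorded match: bot A's share of the total score against bot B.
    class Match {
        int botA, botB;    // indices into the ratings array
        double fracA;      // A's fraction of the combined score, in [0, 1]
        Match(int a, int b, double f) { botA = a; botB = b; fracA = f; }
    }

    public class IterativeRating {
        // Hypothetical logistic expectation of a bot's score share, given
        // the rating gap (400-point scale borrowed from chess Elo).
        static double expected(double ra, double rb) {
            return 1.0 / (1.0 + Math.pow(10, (rb - ra) / 400.0));
        }

        static double[] rate(List<Match> matches, int numBots, int passes) {
            double[] r = new double[numBots];
            Arrays.fill(r, 1500);              // everyone starts equal
            double k = 32;                     // adjustment size per result
            // Multiple passes re-visit old matches, so a result against an
            // opponent who later proves strong gets re-valued accordingly.
            for (int p = 0; p < passes; p++) {
                for (Match m : matches) {
                    double e = expected(r[m.botA], r[m.botB]);
                    r[m.botA] += k * (m.fracA - e);
                    r[m.botB] -= k * (m.fracA - e);
                }
            }
            return r;
        }
    }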