Street Fighter AI competition

I just recently found a very interesting site. Some people implemented Super Mario in Java, and then held a competition for who could write the best Mario AI:

A great example of one of the top AIs is here:

This AI follows the mouse cursor on screen while keeping Mario alive.

This website and video got me thinking: wouldn’t it be great if we could have such a competition for Street Fighter? It would be really interesting to see who can write the best Ken AI, or the best Zangief AI. Can players beat the best Ryu AI? What happens when the best Ryu AI plays the best Chun Li AI? I think this would be fun and interesting, so I thought I’d bring it up here since we have some talented devs (cannons, damdai, others) who might have ideas on how to get started.

Off the top of my head, the best place to get started would be the ST ROM. We’d want to expose an interface with game state. At a bare minimum: health of each player, meter of each player, and time. Ideally we’d also get each player’s attack hitboxes, each player’s vulnerable hitboxes, and states like is_airborne, is_blockstun, etc…

How could we go about exposing these interfaces? Perhaps player health, player meter, and game time are always stored at the same locations in memory and can be scanned for?
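To make the memory-scanning idea concrete, here is a minimal sketch of the usual cheat-search approach: record every address holding a known value (say, current health), change that value in-game, and narrow the candidates. Everything below is a toy stand-in, with a small byte array in place of the emulated RAM; the function names and offsets are made up.

```python
# Toy illustration of the cheat-search idea: all names here are hypothetical.
# A real tool would read the emulated RAM through MAME's debugger,
# but the narrowing logic is the same.

def find_candidates(ram, value):
    """Return every byte offset whose value matches (e.g. current health)."""
    return [addr for addr, byte in enumerate(ram) if byte == value]

def narrow(candidates, ram, value):
    """Keep only addresses that still match after the value changed in-game."""
    return [addr for addr in candidates if ram[addr] == value]

# Fake 64-byte "RAM" with player 1 health (144) hiding at offset 0x20.
ram = bytearray(64)
ram[0x20] = 144
ram[0x30] = 144          # a decoy that happens to match at first

hits = find_candidates(ram, 144)   # two candidates so far
ram[0x20] = 120                    # take a hit in-game...
ram[0x30] = 7                      # ...decoy changes to something else
hits = narrow(hits, ram, 120)      # only the real address survives
print(hits)                        # [32]
```

Repeating the narrow step over a few more health changes would quickly isolate the real address.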

Any ideas on how to go about this? Thoughts? Comments? Suggestions?

^^ I would love to see the AI Mario in the next-to-impossible Mario levels designed by those crazy enthusiasts.

Step 0 is putting something into place that will let people actually write the AIs. You’d be better off trying for ST in MAME, or some kind of Dreamcast emulator.

Beyond that it depends on the restrictions you put on the AI. At some reaction speed, timing, and spacing ability the game gets degenerate thanks to option select type stuff and inescapable tick loops. Without artificial restrictions, AI would slaughter human players.

Right, my post was about starting with this “step 0”. I’m specifically asking about how to read game state from the ROM (perhaps this wasn’t clear in my original post) and expose an API.

I’m actually very interested in the degenerate game. I’d love to have a battle of AIs to see who wins. AI Hawk? AI Ken? I think it would be very interesting.

There was a really bad fighting game waaaay back on the Sega CD where you could do something like this, called Black Hole Assault, where you basically fought with robots/mechs. You could actually generate an AI for your character by programming what the character would do at certain distances, how frequently it would do things, and other such stuff. I only played it for a total of one night, but it was entertaining enough to pit different A.I.s against each other and adjust them as you went along.

The problem with such an AI contest for SF is if people get too meticulous. I mean, if I took the time, I literally could program a Ken that walks forward with EVERY MOVE IN THE GAME programmed into the AI, so that when the enemy does a certain move, the AI checks whether it is in range to hit that move and DPs; if not, it keeps walking forward until within throw range and throws. Hahaha.
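For what it’s worth, that walk-forward/DP/throw flowchart is only a few lines of code. The ranges and state names below are invented purely for illustration:

```python
# A toy version of the "walk forward, DP, or throw" flowchart described
# above. Ranges and return values are made up, not real ST data.

DP_RANGE = 60        # pixels; hypothetical
THROW_RANGE = 24     # pixels; hypothetical

def ken_flowchart(distance, opponent_attacking):
    """One decision per frame: punish, throw, or close distance."""
    if opponent_attacking and distance <= DP_RANGE:
        return "dragon_punch"
    if distance <= THROW_RANGE:
        return "throw"
    return "walk_forward"

print(ken_flowchart(100, False))   # walk_forward
print(ken_flowchart(50, True))     # dragon_punch
print(ken_flowchart(20, False))    # throw
```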

You COULD beat that with programs that Kara Canceled moves, but who would think of that?

So if we made it so that we limited exactly what each person COULD tweak like that old Sega CD game, it might be kinda interesting.

  • James

Virtua Fighter 4 Vanilla had a full A.I. training mode. You could teach the A.I. everything, from combos, to flowcharts, to defensive techniques. They took that mode out for VF4:EVO and VF5 but the version from original VF4 was pretty well done. No other AAA title really has anything comparable to that.

Virtua Fighter’s Quest Mode, where it takes inputs from top players and then builds an A.I. player from those inputs, is the best way to simulate a realistic CPU opponent. Tekken’s Ghost Mode is not far behind because it also takes raw player inputs and then compiles them into an A.I. that you can even download and share with others. Just go online with T6BR and you can download top players’ ghosts and try to fight them offline (though it’s random which ghost you’ll face offline, so you can’t directly challenge a high-level ghost). The difference is that with VF, Sega took those raw inputs and refined them; T6’s ghost data is raw input, so you might be getting a ghost with lag tactics or tons of execution mistakes.

So when you face VF4:EVO (PS2) or VF5: Online (X360) A.I., you’re not only facing A.I. that’s based off of real human inputs…AM2 also ironed out any errors in the inputs and then programmed the A.I. to do other things on top of that. And the A.I. in VF’s Quest Mode does not read inputs…it plays blindly just like a human opponent would. There are players at VFDC, some of the better ones even, who have played thousands of Quest Mode matches in VF because there is so much to learn.

Imagine trying to learn high or even mid level SF by playing the arcade A.I…

So an A.I. in VF will know almost every character specific combo. They’ll know which throw escapes to use against certain characters. And they’ll mix up between attack/throw or high/mid/low at the same rates as the person that the A.I. was based off of. Incredible.

Imagine if HDR, or a console port of ST, had a mode where you could face off against an A.I. nearly on Kusumondo’s or YuuVega’s level because the A.I. was using real inputs from those players…it would be pretty sick.

Lastly, if anyone is capable of making realistic SF A.I. (other than Sega of course), it is the guys who do TAS videos. Saturn’s [media=youtube]d1XHhg5RhFE[/media] of SF2:NC is particularly amusing. These guys are mostly just fooling around with inputs for fun. If Capcom actually had initiative and paid people like this to model A.I. around human inputs, then we could have robust single-player modes beyond just ‘arcade’ or ‘survival’ or ‘options menu’.

Unfortunately the new rage in single player modes seems to be the same standard crap or even worse like T6BR’s Scenario Campaign.

As Rufus and James point out, it might get degenerate since the computer is able to do things at the frame level with no reaction time required. An O.Ken that always does jab DP at the perfect time to stuff any attack would be silly.

Another interesting concept is, like the Mario mouse-cursor AI, to have a Street Fighter mouse-cursor AI: you fight an opponent only by controlling where you put the cursor, and the AI decides how to get to that position in the most effective way (depending on what the other player does).

You could have 2 people fight it out just by controlling the cursor!
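At its simplest, the cursor-following part is just a per-frame decision rule; the real work would be the "most effective way" layer on top. A hypothetical sketch (names and deadzone value made up):

```python
# Minimal cursor-chasing rule: walk toward the cursor unless already
# close enough. A real version would layer attacks/jumps on top.

def chase_cursor(player_x, cursor_x, deadzone=8):
    """Decide movement for this frame from the cursor position."""
    if cursor_x > player_x + deadzone:
        return "walk_right"
    if cursor_x < player_x - deadzone:
        return "walk_left"
    return "stand"

print(chase_cursor(100, 200))   # walk_right
print(chase_cursor(100, 20))    # walk_left
print(chase_cursor(100, 104))   # stand
```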

Not a problem.

Set the parameters such that the computer cannot detect an opponent’s move until a certain number of frames into it.
Replicate human reaction time as closely as you need.

Likewise, replicate the human inability to charge off the first frame of a jump (jumps don’t charge for the first x frames), to execute throws on the first throwable frame (throws execute randomly within a window of frames, say 1-5), etc.
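One way to implement that kind of limitation is to feed the AI a delayed view of the game state, so it literally cannot see a move until a set number of frames in. A small sketch, with the delay value picked arbitrarily:

```python
from collections import deque

# Sketch of one "human limitation": the AI only ever sees game state
# that is REACTION_FRAMES old, so it cannot react to frame 1 of a move.

REACTION_FRAMES = 12   # ~200 ms at 60 fps; tune to taste

class DelayedVision:
    def __init__(self, delay=REACTION_FRAMES):
        self.buffer = deque(maxlen=delay + 1)

    def observe(self, state):
        """Feed in the true state each frame; get back the delayed view."""
        self.buffer.append(state)
        # Until the buffer fills, the AI sees the oldest frame available.
        return self.buffer[0]

eyes = DelayedVision(delay=2)
print(eyes.observe("frame0"))  # frame0
print(eyes.observe("frame1"))  # frame0
print(eyes.observe("frame2"))  # frame0
print(eyes.observe("frame3"))  # frame1  (two frames behind)
```

The random throw-window idea could be layered on the output side the same way, by jittering when a queued input is actually released.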

A whole aspect of this project (in order to make it actually pertinent to human play) would be to discern and apply “human limitations” to the AI’s set-up.

If done well, this could become THE best trainer for playing the game, showing you the best methods to play the game.
You could even set it up so that players with better/worse dexterity/twitch/etc. could scroll-bar the ability level of the AI’s different “skills” in order to figure out for themselves what the best strat to use is (given their own limitations).

This would be pretty interesting, but unfortunately we don’t actually have the source code to Street Fighter 2, so we can’t actually do it.

Even with artificial limitations on reaction time imposed, writing the AI still boils down to a brute-force optimization problem; the limitations just become another factor in the equation. You aren’t going to get the higher-level decisions a human would make in a real game: the best AI will still come down to using frame data to min/max the best move to make at any given time.
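To illustrate that kind of frame-data min/maxing, here is a toy move picker that chooses the most damaging punish that fits inside the opponent’s recovery window. All the startup and damage numbers below are invented:

```python
# Toy frame-data picker: choose the highest-damage move whose startup
# is short enough to connect before the opponent recovers.
# Startup/damage values are placeholders, not real ST frame data.

MOVES = {
    "jab":          {"startup": 3, "damage": 4},
    "fierce":       {"startup": 6, "damage": 12},
    "dragon_punch": {"startup": 5, "damage": 16},
}

def best_punish(opponent_recovery_frames):
    """Return the best move for the window, or None if nothing fits."""
    usable = [(name, data) for name, data in MOVES.items()
              if data["startup"] <= opponent_recovery_frames]
    if not usable:
        return None
    return max(usable, key=lambda md: md[1]["damage"])[0]

print(best_punish(8))   # dragon_punch
print(best_punish(4))   # jab
print(best_punish(2))   # None
```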

But with that said, I still think it would be very interesting and would love to take part in it. I’ve never worked directly with emulators before but it seems the first step would be to dig into mame’s (or any emu’s) source and figure out how to do a couple things:
-pause/save/load current game states
-redirect/feed input directly into the game
-read memory locations from the emulated game, and find where the important data lives in SF’s memory

The first two are already done in MAME, so it’s just a matter of finding the source code and coming up with a way to do it programmatically yourself. Mapping the memory locations would be a pain, but someone else has probably already done it. From there it’s a matter of creating a programming framework/interface for AIs to work within, and then finally starting work on some AI.
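As a strawman for what the AI-facing side of that framework could look like once those three pieces exist, here is one possible shape. The class and field names are guesses for illustration, not anything MAME provides:

```python
# Hypothetical framework sketch: each entry implements decide(), and a
# harness (which would wrap the emulator) calls it once per frame.
# GameState mirrors the minimum info discussed earlier in the thread.

from dataclasses import dataclass

@dataclass
class GameState:
    p1_health: int
    p2_health: int
    p1_meter: int
    p2_meter: int
    time_left: int
    distance: int      # horizontal gap between players, in pixels

class AIPlayer:
    def decide(self, state: GameState) -> set:
        """Return the set of buttons/directions held this frame."""
        raise NotImplementedError

class TurtleBot(AIPlayer):
    """A trivial entry: back off when close, otherwise stand still."""
    def decide(self, state):
        return {"back"} if state.distance < 80 else set()

bot = TurtleBot()
state = GameState(144, 144, 0, 0, 99, 40)
print(bot.decide(state))   # {'back'}
```

The harness would fill in GameState from the mapped memory locations and feed the returned button set back through the emulator’s input path.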

Sounds fun!

That’s what disassemblers and RAM comparisons are for. If you could figure out the RAM locations for storing what frame a given character is on and what position they are at, that should be enough for people to make AIs.

EDIT: OK, I guess health numbers, which character each player is playing, how far the screen is scrolled, and super meter would be needed, as well.
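The RAM-comparison idea is just a diff between snapshots: dump memory before and after an event (taking a hit, gaining meter, the timer ticking) and see which addresses changed. A toy sketch with fake data:

```python
# Complementary approach to searching for a known value: snapshot RAM
# around an event and diff the snapshots. Data below is fake.

def changed_addresses(before, after):
    """Return the set of offsets whose bytes differ between snapshots."""
    return {a for a, (x, y) in enumerate(zip(before, after)) if x != y}

snap1 = bytes([0, 50, 0, 99, 7])
snap2 = bytes([0, 48, 0, 99, 9])   # after taking a hit, say
print(sorted(changed_addresses(snap1, snap2)))   # [1, 4]
```

Intersecting the results across several different events narrows things down the same way the value search does.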

Somewhere out there a guy worked out how to do hitboxes for 3rd Strike. If you’re serious, then he’s probably a good person to contact.

Yep, Ratman and Cauldrath are doing a good job of explaining what I mean. We don’t need the source code, Thelo – all that is technically needed is a way to view memory in MAME and read the important game state from it. And as Rufus points out, I think even hitboxes can be found (if it was done for 3rd Strike, it can be done for ST).

This is not my forte (I’ve never done anything like this). I’m hoping someone on this board might have experience doing this. If we can figure out the basics then writing a framework to plug into MAME would be doable. I’d love to be able to write an AI and have players test against it. Anyone with the knowledge able to help with this first step (disassembly)?

If the AI can track the success of outcomes across multiple flowchart move trees (including many inferior paths), then it can be set up to play with the “best” paths and inferior paths (forced into the mix) to determine what nets the best overall win rate.

Then another AI can be set-up to stand as a counter-point with the same set-up.

The AI ends up compiling a portfolio of flowchart movesets that have their ultimate rating based on success.
Success could be calculated on any number of values: position relative to opponent, life bar, dizzy, survival time, etc.

You could also set up different scenarios for the AI to start from:

  1. 1 with full life, 1 with 10% life left
  2. both with Super Bar
  3. 10 seconds on clock
  4. 1 can’t perform combos/negative edge/etc.
  5. etc.
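That success-tracking setup can be sketched as a simple win-rate table over flowchart paths, with inferior paths forced into the mix some fraction of the time. The path names and simulated win rates below are obviously fake:

```python
import random

# Sketch of the portfolio idea: keep a win record per flowchart path,
# usually play the best-rated one, but force in other paths to keep
# exploring. The simulated outcomes below stand in for real matches.

class PathPortfolio:
    def __init__(self, paths, explore=0.2):
        self.record = {p: {"wins": 0, "games": 0} for p in paths}
        self.explore = explore

    def rate(self, path):
        r = self.record[path]
        return r["wins"] / r["games"] if r["games"] else 0.5

    def pick(self):
        if random.random() < self.explore:
            return random.choice(list(self.record))   # forced mix-in
        return max(self.record, key=self.rate)

    def report(self, path, won):
        self.record[path]["games"] += 1
        self.record[path]["wins"] += int(won)

random.seed(0)
book = PathPortfolio(["rushdown", "turtle", "tick_throw"])
fake_winrate = {"rushdown": 0.4, "turtle": 0.5, "tick_throw": 0.7}
for _ in range(2000):
    path = book.pick()
    book.report(path, random.random() < fake_winrate[path])
print(max(book.record, key=book.rate))
```

The rating function here is just win rate, but it could weight the other success measures mentioned (remaining life, dizzy, survival time) the same way.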

That sounds like a neural network. Basically a neural network gives a certain weight to a number of options, given certain inputs, and the weights are altered based on how suitable the output was.
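A stripped-down version of that weight-update idea (much simpler than a real neural network) is just a per-option weight that drifts toward the reward the option earns:

```python
# Minimal weight update: the chosen option's weight moves toward the
# reward it received. Options and rewards here are invented.

LEARNING_RATE = 0.1

def update(weights, option, reward):
    """Nudge one option's weight toward its observed reward."""
    weights[option] += LEARNING_RATE * (reward - weights[option])
    return weights

w = {"attack": 0.5, "throw": 0.5, "block": 0.5}
for _ in range(20):
    update(w, "throw", 1.0)    # throwing kept working...
    update(w, "attack", 0.0)   # ...attacking kept losing
print(max(w, key=w.get))       # throw
```

A real neural net would condition those weights on the inputs (spacing, opponent state, etc.) rather than keeping one global value per option.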

That sounds a little too advanced for this project, at least for the moment. I think a simple rule-based approach, similar to the existing AI except tweaked for our purposes, would be a good start. That said, the pressing issue is getting the framework created – having some interface to manipulate the game state would be the necessary groundwork before we get to the actual fun stuff of writing AIs.

Sounds like we need a CPU with a neuro-net processor - a learning computer.

Cyberdyne, baby!

How about a neural fuzzy network that uses Markov chained genetic algorithms to heuristically min-max traverse the Bayes A* domain space?