Merrak's Isometric Adventures -- Artificial Intelligence!

merrak

  • *
  • Posts: 2006
Merrak's Artificial Intelligence Adventures. I suppose the thread title is misnamed, but that's okay   :D I now have simple AI working.


So what's "simple"? The Onyx Golem pretty much behaves like one would expect. When it sees the player character, it walks toward her. When it's close, it attacks. Pathfinding is implemented, but not utilized because the golem is only awake when it's in direct line of sight of the player.

Nothing too special, but it does demonstrate the framework that powers this simple AI is functional. So now I have the green light to start digging deeper.

The best part about the design is that, although I chose to use typed Haxe code, everything designed so far could be done using code blocks and custom events. In fact, custom events and behaviors may be a more natural way to implement the design... so I'll outline it if anyone wants to play around with the idea. Here's how it works.

Central to the AI design is the function "think". The "think" function is a member of the class "AIHandler", which serves as the NPC's "brain". A call to brain.think( ), as illustrated in my AI opening post at the top of Page 25, performs one thinking iteration. This is one iteration:
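In rough, runnable form, one iteration looks like this. This is an illustrative Python sketch (the real implementation is typed Haxe, and the class details here are simplified stand-ins):

```python
# One "think" iteration: queue the incoming thought, re-score and sort
# the queue, then execute only the single highest-priority thought.
class AITask:
    def __init__(self, name, score_fn):
        self.name = name
        self.score = score_fn  # returns the thought's current priority

class AIHandler:
    def __init__(self):
        self.queue = []

    def think(self, thought=None):
        if thought is not None:
            self.queue.append(thought)      # inserted, not executed directly
        self.queue.sort(key=lambda t: t.score(), reverse=True)
        if self.queue:
            self.do_thought(self.queue.pop(0))  # best-scoring thought runs

    def do_thought(self, task):
        # The base class doesn't know how to act; subclasses like AIPawn
        # override this with an actual library of responses.
        raise NotImplementedError("no brain attached")
```

The key point is that think() never executes the incoming thought directly: it only queues it, and whichever thought scores highest wins.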


So you can see three things happen. First, a "thought" is an instance of the class AITask, which contains some data relevant to what the thought is about. If I want the NPC to think about a thought, I pass it as an argument to the think( ) function. But that thought is not automatically parsed. Instead, it is inserted into a queue.

The next thing that happens is that the thought queue is sorted by priority. This is effectively my "scorer", borrowing language from the Utility AI model (see my previous post). Every instance of AITask includes a scoring function that will assign a priority to that thought. For example, one thought might be to "retreat and heal". That thought would have a higher priority if the NPC were low on HPs. But then again... maybe the player is even lower on HPs. Then "continue to fight" might be assigned an even higher priority. So I'm spared having to build and manage a complex behavior tree--although this does come with a cost. I expect I'll need to spend a lot of time fine tuning the scoring functions... so keep that in mind if you're interested in scoring as an alternative to behavior trees.
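As a toy illustration of the scoring idea (my own made-up numbers and function names, not the game's actual formulas), the "retreat vs. fight" example might score out like this:

```python
# Toy scorers for "retreat and heal" vs. "continue to fight".
# A score isn't a constant: it's recomputed from the current game state.
def retreat_score(npc_hp, player_hp):
    # Retreating grows more urgent the lower the NPC's health gets.
    return (100 - npc_hp) * 2

def fight_score(npc_hp, player_hp):
    # Finishing off an even weaker player can outrank retreating.
    return (100 - player_hp) * 3

def best_thought(npc_hp, player_hp):
    scores = {"retreat": retreat_score(npc_hp, player_hp),
              "fight": fight_score(npc_hp, player_hp)}
    return max(scores, key=scores.get)
```

With both characters wounded, whichever side is closer to death tips the decision, which is exactly the kind of weighting that will need fine-tuning.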

Once the queue is sorted, the final thing to happen is that one thought is executed. You can see that, below, the function do_thought doesn't do anything except throw an error. That's because the class AIHandler isn't actually the NPC's true "brain". Rather, for the Onyx Golem, the class AIPawn is the actual "brain": a class that overrides some of the functions of AIHandler.

For AIPawn, this is the thought function.


I have the option to force it to think about something immediately, although this overrides the priority scoring system, which defeats the purpose of the model... so it's an option I don't plan to use unless absolutely necessary (performance issues, for instance). If there are no thoughts, then the standard behavior is called... which is basically "wait and, every so often, look around".
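A minimal sketch of that override logic, in illustrative Python (the forced "immediate" path and the "scan" default are simplified stand-ins for the real AIPawn code):

```python
# Sketch of the AIPawn thought function: an optional "immediate" thought
# skips the priority queue entirely, and an empty queue falls through to
# the standard behavior.
class AIPawn:
    def __init__(self):
        self.queue = []   # (priority, name) pairs
        self.log = []

    def do_thought(self, immediate=None):
        if immediate is not None:
            self.log.append(immediate)   # bypasses scoring -- use sparingly
            return
        if not self.queue:
            self.log.append("scan")      # default: wait and look around
            return
        self.queue.sort(reverse=True)
        self.log.append(self.queue.pop(0)[1])
```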

When the golem thinks to look around, a "scan" task is passed into the brain. If it sees the player, a "foundplayer" task is passed into the brain... and so on.

So my workflow will basically go like this: if I want the NPC to react to something, I make a call to the function "think" with a new AITask instance. I load relevant data into the AITask and then add the appropriate functions to any brain classes (e.g. AIPawn) so that brain knows what to do with that thought. Not every brain has to know what to do with every thought--uninterpreted thoughts are simply ignored.

merrak

  • *
  • Posts: 2006
More AI Fun! When it works, AI programming has been a lot of fun. When it doesn't, though... ugh. To bring a bit of sanity into development, I have each NPC "tell me" (via log output) what they're doing, what they're thinking, and where they're going. When something goes wrong, I comb through the logs and try to pick apart the train of thought.

On the plus side, my NPCs are talking to me, which is pretty cool  :D I came up with the idea of having some of these log outputs translate into sounds, to give the illusion the player is listening in on their conversations. I'm going to borrow an idea I read some time ago on the forums here (I can't remember what post, though). Each sound will play on two channels: one 'normal' and one with some echo/reverb effect. By mixing the volumes of the two channels, I can give the effect of distance in a large, open chamber. This way the player can hear the NPCs relaying messages to each other.
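The mixing math can be sketched as a simple linear cross-fade between the two channels (illustrative Python; the actual curve and distances are still to be tuned):

```python
# Two-channel distance trick: one dry channel and one echo/reverb channel,
# cross-faded by distance so far-off speech sounds like a big open chamber.
def channel_volumes(distance, max_distance=20.0):
    # Clamp and normalize: 0.0 right next to the listener, 1.0 at the
    # far edge of audibility.
    d = min(max(distance, 0.0), max_distance) / max_distance
    near = 1.0 - d   # dry channel fades out with distance
    far = d          # reverb channel fades in
    return near, far
```

Right at the listener the dry channel plays at full volume; at the far limit only the reverb channel is heard.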

So what's new? I improved the AIPawn (Onyx Golem) by giving it some limitations. The golem can only see Marika if it's facing her. In their resting state, the golems will turn around every so often, but the crafty player can sneak Marika around their lines of sight. If the golems spot Marika, they'll give chase. It's possible for them to lose sight, in which case they'll scan frequently for a while before giving up.

So now the player can take advantage of long, thin corridors, objects in the room, and other obstacles. I haven't decided yet if the AIPawn golem should be able to see Marika if she's in a shadow, but I'm leaning toward no. Advanced golems might have the ability to "see her" using sound, but Marika starts out very weak at Level 1, only able to take about 3 or 4 hits, so she needs some advantages to survive.


AIPawn golems can be stunned pretty easily, so there is a clear strategy if the player finds Marika in a room with multiple golems: try to hit each one once then finish them off. To counter that strategy, smarter golems should be able to defend their stunned friends... or retreat, if they sense they're losing.

In my test run (shown above), Marika was losing the direct fight by a wide margin. But by running in a fast circle around the AIPawn golem, she made it lose sight of her, and she was able to defeat it by stabbing it in the back while it was scanning.

I like the effect. I have simulated "processor time" so that weaker golems have weaker "CPUs" and don't process thoughts as quickly. The "CPU" speed of the golem has turned out to be one of its most important stats, maybe more so than armor or hitpoints. AIPawn golems do not react to stimuli very fast, and Marika can easily out-maneuver them if she doesn't let them surround her.
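The "CPU speed" idea can be sketched as a tick counter that gates how often think() gets to run (illustrative Python; the period values are made up):

```python
# Simulated "processor time": a weaker golem only runs a thinking
# iteration once every `cpu_period` engine ticks.
class ThrottledBrain:
    def __init__(self, cpu_period):
        self.cpu_period = cpu_period  # ticks between thoughts; higher = slower brain
        self.ticks = 0
        self.thoughts = 0

    def update(self):
        self.ticks += 1
        if self.ticks % self.cpu_period == 0:
            self.thoughts += 1  # a real think() call would go here
```

Over the same stretch of game time, a period-5 golem thinks a fifth as often as a period-1 golem, which is what makes it feel sluggish to react.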

mdotedot

  • *
  • Posts: 1424
Quote
I expect I'll need to spend a lot of time fine tuning the scoring functions... so keep that in mind if you're interested in scoring as an alternative to behavior trees.

You linked to a URL with the utility score definition. I've read up on it, and it has been on my mind frequently since then. Very interesting thoughts!

You mentioned the sound-thing before and I think it is an awesome idea to 'listen in' on the golems. I'm very interested to hear the effect you aim for with the dual channels.

How do you plan on altering the sound by Marika?! I thought that was one of your other ideas?! What kind of GUI?



Hanging out in the Chat:  http://www.stencyl.com/chat/

Proud member of the League of Idiotic Stencylers! Doing things in Stencyl that probably shouldn't be done.

NickamonPoppytail

  • *
  • Posts: 933
On the plus side, my NPCs are talking to me, which is pretty cool  :D I came up with the idea of having some of these log outputs translate into sounds, to give the illusion the player is listening in on their conversations. I'm going to borrow an idea I read some time ago on the forums here (I can't remember what post, though). Each sound will play on two channels: one 'normal' and one with some echo/reverb effect. By mixing the volumes of the two channels, I can give the effect of distance in a large, open chamber. This way the player can hear the NPCs relaying messages to each other.

You mentioned the sound-thing before and I think it is an awesome idea to 'listen in' on the golems. I'm very interested to hear the effect you aim for with the dual channels.

I agree; this is a very interesting idea.

These all sound like great ideas, Merrak! Hopefully you won't have too many problems creating AI.
Currently developing Poppytail 5, Pixeltail and The Poppytales.

Email: nick.rosemarygames.poppytail@gmail.com

;)

merrak

  • *
  • Posts: 2006
You linked to a URL with the utility score definition. I've read up on it, and it has been on my mind frequently since then. Very interesting thoughts!

You mentioned the sound-thing before and I think it is an awesome idea to 'listen in' on the golems. I'm very interested to hear the effect you aim for with the dual channels.

How do you plan on altering the sound by Marika?! I thought that was one of your other ideas?! What kind of GUI?

This is another useful article I just read. It goes a bit more in-depth on scoring functions, something I just started really getting into. I think I've been approaching scoring wrong. Graham suggests approaching scoring as a calculation of expected value, whereas I have a simpler ranking system (normal, important, urgent, immediate priorities).

Ultimately I'm going to need a hybrid of an expected value calculation and what I currently have. Once the NPC identifies an objective, the probability of success certainly needs to be taken into account--but I also have to have the NPCs follow orders from their superiors. This has proven to be more challenging than I originally thought.

To test my priority code, I set up a simple cheat that sends an "urgent" message to all NPCs to walk to position 1, 11. Once they reach that position, they resume their normal behavior.

Because an "urgent" instruction supersedes any "normal" level ones, the NPC started moving correctly. Hitting the cheat button caused any hunting (normal behavior) NPCs to give up and start walking to position 1, 11. But as soon as the NPC began its walk, it immediately gave up and resumed tracking the player. To fix this, I needed to add a system to keep track of the current task. I also needed a way to check that a task has been completed.
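The fix can be sketched like this: hold on to a current task plus a completion check, so a re-issued normal-priority thought can't preempt an urgent one mid-walk (illustrative Python with my own simplified structure, not the actual Haxe classes):

```python
# Track the current task and only let a new thought preempt it when the
# new thought scores strictly higher. A completion check clears the task.
class TaskKeeper:
    def __init__(self):
        self.current = None          # (priority, name, is_done) or None

    def think(self, priority, name, is_done):
        if self.current is None or priority > self.current[0]:
            self.current = (priority, name, is_done)

    def update(self):
        if self.current and self.current[2]():
            self.current = None      # task finished; resume normal behavior
```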

Happy to say that it works, and it's neat to watch. Once they reach their target, they resume hunting. But now there's a new problem: if too many NPCs are sent to position 1, 11, they get into a traffic jam. The NPCs who have reached the position block anyone else from entering, but because the other NPCs are trying to get into that position, the ones who have arrived can't move either. So they're permanently jammed, because none of them can think to give up... nor can they talk to each other to sort it out.

I don't think I'll need to solve that problem, but it does show the larger issue at hand: thinking of all the potential problems. I foresee a lot of debugging in my future :D

How do you plan on altering the sound by Marika?! I thought that was one of your other ideas?! What kind of GUI?

I don't remember... but I also don't think it's necessary to use dual channel sound for her since any sound she makes will be at the position of the player's "ears", hence 100% volume on the normal channel and 0% on the 'far' channel.

These all sound like great ideas, Merrak! Hopefully you won't have too many problems creating AI.

It's definitely getting more complex as I identify and add needed features. I don't think there's any way around that, but it's been an interesting problem to work on.

merrak

  • *
  • Posts: 2006
Cycle Challenge. Here's the latest problem I'm working on: get the Onyx Golem (AIPawn) to navigate through the corridor and up the stairs. The golem can see Marika, but there's not a direct path to her.


A common stumbling block new programmers run into is underestimating how many little problems need to be solved to complete a bigger problem. If that happens to describe you then don't feel bad--it can even throw off long-time programmers. Case in point: the stairs. If you've been following this thread for a while, then you've likely noticed the trend. When something goes wrong or is unexpectedly difficult, it's always the stairs.

For anyone reading this who hasn't read all the past posts, here's a brief run-down of how 3D in Idosra works. The map consists of tiles in a 3D grid. Each tile is a rectangular solid. Internally, the scene is represented in a 2D plane (the Stencyl scene). Each level's tiles lie in a different region of the plane, much like how separate floors are illustrated in the floor plan of a multi-story building. So when an actor climbs a set of stairs, at some point its position in the 2D plane needs to jump from one location to another.
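The level-to-plane mapping can be sketched like this (illustrative Python; the offset and tile size are made-up figures, not Idosra's actual values):

```python
# Map 3D tile coordinates onto one big 2D plane: each z-level occupies
# its own horizontal band of the plane, so climbing stairs means the
# actor's 2D position jumps from one band to another.
LEVEL_OFFSET_Y = 1000   # plane rows reserved per level (made-up figure)

def plane_position(x, y, z, tile_size=32):
    return (x * tile_size, y * tile_size + z * LEVEL_OFFSET_Y)
```

Two tiles stacked directly on top of each other end up a full band apart in the plane, which is exactly the jump a stair-climbing actor has to make.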

So there's that. There's also collision detection--setting the correct z-coordinate as the actor walks up. Walking down is easy: just set the new (x,y) position and let gravity correct the z-position. Walking up is harder, because gravity will pull the actor down through the stairs unless the z-position is set correctly.

The best part, though, came after I got the stairs working. While testing the golem, I somehow got Marika to fall off the landing and land on top of its head. The golem's brain went completely nuts and crashed the game. It took me a while to get a sense of what was going wrong: somehow it had gotten stuck in an infinite loop of thoughts.

That's a hard problem to debug. What's looping? What is triggering the loop? And why isn't there a catch to break it? I thought of a neat trick I'm pretty proud of, though: using Floyd's Cycle Detection algorithm to identify the repetition in the series of thoughts.

The way "thinking" works is that there is a master "Think" function that takes an "AITask" as an argument. An AITask is essentially 'something to do', and has a few data fields, including a unique name. I added a log of recent thoughts and implemented Floyd's Cycle Detection to look through it. Any time a lot of thoughts are executed in a short time span, I scan through the thought log and look for cycles. If a cycle is detected within a short time frame and the number of repetitions exceeds some threshold, then it's likely an infinite loop. I then escape out of it and dump the thoughts to the logfile for inspection.
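For reference, here's the classic Floyd tortoise-and-hare algorithm the trick is based on, applied to a "next thought" transition function (illustrative Python; the in-game version scans a logged list of AITask names rather than iterating a function directly):

```python
# Floyd's cycle detection: find where a cycle starts (mu) and how long
# it is (lam) in the sequence x0, f(x0), f(f(x0)), ...
def floyd(f, x0):
    # Phase 1: tortoise moves 1 step per turn, hare moves 2; they must
    # meet somewhere inside the cycle.
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(f(hare))
    # Phase 2: restart the tortoise; moving both 1 step per turn, they
    # meet at the start of the cycle, giving mu.
    mu = 0
    tortoise = x0
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(hare)
        mu += 1
    # Phase 3: walk the hare around the cycle once to measure lam.
    lam = 1
    hare = f(tortoise)
    while tortoise != hare:
        hare = f(hare)
        lam += 1
    return mu, lam
```

The test below models a looping train of thought ("see" leads to "walk" leads to "near" leads back to "see") as a transition table.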

Success!

Not that having no problems at all wouldn't be better, but this will be a useful debugging tool. So what went wrong: 3D distance is used to check whether the player is close enough to attack. If the player is not close enough, the golem starts walking. To check when to stop walking, 2D distance is used. Imagine the thought process...

"I'm not close enough (3D) to the player to attack, but I see the player, so I'll start walking. But oh, wait, I am close enough (2D), so now I'll ask myself, am I close enough to the player to attack?"

So I still don't have a working golem, but at least the problem has been identified and solved, and I can move on to the next one... pathfinding.
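The bug in miniature (illustrative Python): with Marika up the stairs, the 3D check says "too far to attack" while the 2D check says "arrived", and the two thoughts feed each other forever.

```python
import math

def dist2d(a, b):
    # Distance in the ground plane only -- height is ignored.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def dist3d(a, b):
    # True straight-line distance including height.
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

# Marika one tile over in the plane, but five tiles up the stairs.
golem, marika = (0, 0, 0), (0, 1, 5)
ATTACK_RANGE = 2
can_attack = dist3d(golem, marika) <= ATTACK_RANGE  # too far in 3D
arrived = dist2d(golem, marika) <= ATTACK_RANGE     # "close enough" in 2D
```

The contradiction between the two booleans is the loop: "walk closer" and "I'm already there" keep triggering each other.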

mdotedot

  • *
  • Posts: 1424

Yesterday I watched a GDC talk on Mark of the Ninja. 

They talked about lots of interesting game design but also about A.I. :

Stupid AI (cut out work .. be clear about what the AI is going to do )

https://www.youtube.com/watch?v=A7ejh3YUbac  @ 10:00


merrak

  • *
  • Posts: 2006

Summary (because this went on a bit longer than I intended  :o). I write about the current AI model I'm working on. If you're interested in the result but not the technical details, I have a little video at the bottom and give a play-by-play of what the NPC is thinking.

Cycle Challenge Update. Done! 8) As I expected, there were a lot of unforeseen problems I'd have to solve to get this working. My AI model isn't as simple as it once was, but I think that's to be expected.

One of the biggest problems was how to handle long-term tasks. The first version of my AI model assumed every task could be immediately executed: "Look around", "See player", "Speak", etc. were all instantaneous actions. "Hunt player", however, is not. Hunting requires periodic updates, such as updating the path followed if the target moves.

The other problem is just getting used to this kind of programming. Even though it's bound to be more complex, writing a series of if/then statements still feels more natural. But an if/then statement forces all other thoughts to be ignored, which isn't what the model specifies: any time the "brain" thinks, it should always prioritize the highest-scoring thought.

Model Details. Here's how it works (so far). There are three critical functions, the first of which is "think", which is called on every NPC update (on every engine tick, much like in an "always" event). The think() function processes "thoughts" in the form of a class AITask. AITask contains some data relevant to the thought and its own think() function (more on that later).

The second central function is nextTaskFromQueue. Imagine incoming thoughts being stored in a long list for processing. The job of the nextTaskFromQueue() function is to score each task and identify the task to be executed. This is where task scoring takes place. The task's score isn't a constant. Rather, the score is the output of a function that determines how important the task is at any particular moment in time.

Here's the pseudocode for these functions. They're pretty short, since most of the work is done in the third function. Note that 'current task' refers to any long-term task (such as hunting) that might need to be updated.

Code: [Select]
think( )
1. if brain is frozen, then stop
2. loop through all the tasks in the queue and check if they're done. Remove the ones that are.
3. if the current task is done, clear it
4. put any incoming thoughts in the queue
5. sort the queue by priority
6. execute the current thought
7. contemplate the current task

Long-term tasks are assigned a function that returns true if they're complete and false if they're not. This "is done function" is also responsible for updates--say, if the NPC is hunting and needs to update its path. If the NPC has reached its target, the "is done function" returns the expected 'true'. If the NPC has not yet reached its target, the "is done function" checks whether the path needs to be updated, and returns 'false' if, after updating, the NPC should continue hunting.
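An "is done function" for the goto/hunt task might look like this (illustrative Python; the repath step is a stand-in for the real pathfinder):

```python
# Build an "is done function" for a goto task: report completion when the
# target tile is reached, otherwise refresh the path and keep going.
def make_goto_check(target):
    def is_done(npc):
        if npc["pos"] == target:
            return True                # arrived: task complete
        if npc.get("path_stale"):
            npc["path"] = [target]     # stand-in for a real repath
            npc["path_stale"] = False
        return False                   # keep hunting
    return is_done
```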

Step 6 is the third critical function and arguably the most important step. Each type of "brain" has a library of tasks it knows how to respond to. In the case of AIPawn, it has a small, but growing, vocabulary: "arrived", "scan", "walkscan", "strikePlayer", "huntPlayer", "gotoPlayer", "hunt-evaluate", "goto". So these are things the NPC can be told to do and respond to.

The newest addition to my model is Step 7--contemplating a task. This step solves the problem of handling interruptions to a long-term task, such as hunting. Much like how the "brain" has its own AITask queue, each AITask can also have its own queue of subtasks. What contemplation does will make more sense with the pseudocode for the "thought" function (Step 6).

Code: [Select]
do_thought()
1. set task = nextTaskFromQueue()
2. if task is still null, execute the "standard thought" (usually something like look around)
3. if task is not the current task:
    3A. if the task has an "is done function", then it is a long-term task. Set current task = task
    3B. check the thought record for infinite looping (error AI00)
    3C. execute the task using the AIPawn (or whatever other appropriate) library

Code: [Select]
nextTaskFromQueue()
1. set task = current task
2. if the queue is empty, return task
3. for every task T in the queue
    3A. if T is not ready (has a timer delay), then update the delay timer and skip it.
    3B. if T is ready and task is null or the priority of T exceeds the priority of task then remove T from the queue and return T.
        Hence, if there is a current task, but T is higher priority, then the current task is interrupted.

In short, the function do_thought plucks a thought from the queue and executes it. Step 7 does the same thing, although picking from the current task's queue instead of the "master queue".
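The queue-pull logic maps to code almost line for line (illustrative Python; tasks are simplified to dicts here rather than AITask instances):

```python
# Sketch of nextTaskFromQueue: skip tasks whose delay timer hasn't expired
# (ticking the timer as we go), and only hand back a queued task that
# outranks the current one -- a higher-priority task interrupts it.
def next_task(queue, current):
    task = current
    if not queue:
        return task
    for t in list(queue):
        if t.get("delay", 0) > 0:
            t["delay"] -= 1    # not ready: update its timer and skip it
            continue
        if task is None or t["priority"] > task["priority"]:
            queue.remove(t)
            return t           # this task interrupts the current one
    return task
```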

What all this does is allow me to program the NPC to routinely update its current task with only two function calls. For example, here's how I get the NPC to start hunting.

Code: [Select]
var tv:VActor = recall( "targetVA" );
if ( tv != null )
{
    think( new AITask( this, "goto", 0, huntScore, huntCheck, null, [ tv.gx, tv.gy, tv.gz ] ) );
    contemplate( currentTask, new AITask( this, "hunt-evaluate", 100, whileHuntingScore, huntCheck, null, currentTask.parameters ) );
}

In line 1, I ask if the NPC remembers its target. The function "remember" stores a variable in memory, and "recall" retrieves it... so if the NPC had previously spotted the player, it would remember who it was. This memory feature works like game attributes: each variable is given a string name.

In line 4, a new AITask "thought" is created, with the instruction "goto" and the three coordinates of the player. There are also two other functions: huntScore, which returns the priority score of the hunting task, and huntCheck, which updates the path and terminates the task when the NPC has reached its target.

In line 5, I add another thought which is tacked onto "goto"'s queue. It has a time delay of 100 ticks. When that timer expires, the task "hunt-evaluate" is executed. In this task, the NPC considers if it should keep hunting or, in the future, talk to other NPCs and consider a new plan. The current task is then cleared and the hunting process restarted, thus forcing a new path or terminating the hunting process altogether if the player has been lost.

So how does all this come together? Here's my test run.

<a href="https://www.youtube.com/v/EPvsz0CE-bo" target="_blank" class="new_win">https://www.youtube.com/v/EPvsz0CE-bo</a>

I added the Onyx Golem's "speech" for debugging purposes, to show what it is thinking. It won't be part of the final game and, in fact, is getting too spammy to even be useful to me. Everything the golem says is logged, though, so I can try to pinpoint where things break down.

At the start, the golem is just looking around. It happens to see Marika at about the same time she jumps off the stairs.  The golem strikes, but there isn't any kind of "battle" event in its AI library. So after it strikes, it can no longer see Marika and returns to its usual "wait and scan around" mode.

At the ~0:16 mark Marika tries to sneak up on it, but it just happens to turn around and see her. It strikes again, and since it can still see her, begins to give chase.

At around the 0:34 mark Marika jumps off the ledge and out of the golem's view. It is programmed to look around more frantically if it loses sight of her while hunting, so it turns to a few random angles then spots her again. It then jumps off the edge. At around 0:36 it bumps into the wall and gets confused. This is where you can see the "hunt-evaluate" contemplation event take place. The golem re-evaluates its path and finds a way around the void.

If you look at the orange text at 0:37, you can see the AIPawn library parse the various events. The line "I'm going to find you... Marika!" is the golem's response to having set a target in its memory. The next line ("I'm setting a path to...") is the beginning of the "goto" task. The last line, "I'm going to find you... Marika!", is the signal that the hunt-evaluate task has executed and reset hunting to a valid path. It's at this point the golem finds a way around the void and resumes its hunt.

Yesterday I watched a GDC talk on Mark of the Ninja. 
They talked about lots of interesting game design but also about A.I. :
Stupid AI (cut out work .. be clear about what the AI is going to do )
https://www.youtube.com/watch?v=A7ejh3YUbac  @ 10:00

I just noticed your reply when I hit "preview" writing this post... and the little red warning came up :D Thanks for the link, though. It looks interesting so far, but I'll have to watch the rest tomorrow after getting some sleep.

« Last Edit: July 11, 2018, 11:37:41 pm by merrak »

Bombini

  • *
  • Posts: 951
This is very interesting!
Thanks for sharing this.

I was recently reading a book, Jagged Alliance 2 by Darius Kazemi, which also talks a lot about the game's AI and fits a bit with your post.

Here is an example which might be interesting:

"Determine which attack against which target has the greatest attack value. [Ignore a soldier if:] this merc is inactive, at base, on assignment, or dead; this man is neutral / on the same side, he's not an opponent; this opponent is not currently in sight (ignore known but unseen!).

Calculate minimum action points required to shoot at this opponent. If we don't have enough APs left to shoot even a snapshot at this guy, [ignore them]. Calculate chance to get through the opponent's cover (if any). If we can't possibly get through all the cover, [ignore them]. Calculate next attack's minimum shooting cost. (Excludes readying and turning.) Calculate the maximum possible aiming time. Consider the various aiming times: If aiming for [a given] amount of time produces [the best] hit rate, [use that aiming time]. If we can't get any kind of hit rate at all [with our aim, ignore this opponent]. Calculate chance to REALLY hit: Shoot accurately AND get past cover. If we can't REALLY hit at all, [ignore this opponent]. Really limit knife throwing so it doesn't look wrong. Don't bother [with knives unless it's a really great choice].

Calculate this opponent's threat value. (Factor in my cover from him.) Estimate the damage this shot would do to this opponent. Calculate the combined "attack value" for this opponent: The highest possible value before division should be about 1.8 billion, normal value before division should be about 5 million. If we can hurt the guy, OR probably not, but at least it's our best chance to actually hit him and maybe scare him, knock him down, etc., [then we have a viable target]. If there already was another viable target, how does our chance to hit him compare to the previous best one? If this chance to really hit is more than 50% worse, and the other guy is conscious at all, then stick with the older guy as the better target. If the chance to really hit is between 50% worse to 50% better, then the one with the higher ATTACK VALUE is the better target since he's more dangerous.

In our scenario with eight enemies and six player mercenaries, we have fourteen characters total on the map, meaning every turn the decision tree listed above is run 112 times. The JA2 tactical AI contains dozens of functions of this level of complexity."
As AI programmer Alex Meduna notes,“ Our game has a reputation for being a bit of a beast!”



merrak

  • *
  • Posts: 2006
Yesterday I watched a GDC talk on Mark of the Ninja. 
They talked about lots of interesting game design but also about A.I. :
Stupid AI (cut out work .. be clear about what the AI is going to do )
https://www.youtube.com/watch?v=A7ejh3YUbac  @ 10:00

A lot of good points in the talk. I'd say I'm at what they called the "testing hypotheses" stage, seeing what works and what doesn't as far as gameplay mechanics. As for AI, this necessitates a general toolset, which is where I'm at. I asked myself, "what are the essential components I need for NPC AI?" and identified basic features: line of sight, pathfinding, and decision making. I then asked myself what are the core commands I would give to an NPC. The command I'm working on is "goto (gx,gy,gz)".

They said their game was about planning and executing, so I think the "stupid AI/predictable" makes sense. I'm envisioning a game about exploration. In particular, I want to present the player with a complex machine and task them with learning how it works.

This is very interesting!
Thanks for sharing this.

I was reading a book lately "Jagged Alliance 2 by Darius Kazemi" which also talks a lot about the AI of the game which fits a bit to your post.

Here is an example which might be interesting:...

It sounds like they might be doing something similar, computing a "score" in the form of an expected value... or maybe using a complex behavior tree instead. I'm not sure from the excerpt. It'd be interesting to know how some other strategy games go about AI.