Merrak's Isometric Adventures -- Artificial Intelligence!

merrak

  • *
  • Posts: 2235
Merrak's Artificial Intelligence Adventures. I suppose the thread title is misnamed, but that's okay   :D I now have simple AI working.


So what's "simple"? The Onyx Golem pretty much behaves like one would expect. When it sees the player character, it walks toward her. When it's close, it attacks. Pathfinding is implemented, but not utilized because the golem is only awake when it's in direct line of sight of the player.

Nothing too special, but it does demonstrate that the framework powering this simple AI is functional. So now I have the green light to start digging deeper.

The best part about the design is that, although I chose to use typed Haxe code, everything designed so far could be done using code blocks and custom events. In fact, custom events and behaviors may be a more natural way to implement the design... so I'll outline it if anyone wants to play around with the idea. Here's how it works.

Central to the AI design is the function "think". The "think" function is a member of the class "AIHandler", which is what I refer to as the NPC's "brain". A call to the function brain.think( ), as illustrated in my AI opening post at the top of Page 25, performs one thinking iteration. This is one iteration:


So you can see three things happen. First, a "thought" is an instance of the class AITask, which contains some data relevant to what the thought is about. If I want the NPC to think about a thought, I pass it as an argument to the think( ) function. But that thought is not automatically parsed. Instead, it is inserted into a queue.

The next thing that happens is that the thought queue is sorted by priority. This is effectively my "scorer", borrowing language from the Utility AI model (see my previous post). Every instance of AITask includes a scoring function that will assign a priority to that thought. For example, one thought might be to "retreat and heal". That thought would have a higher priority if the NPC were low on HPs. But then again... maybe the player is even lower on HPs. Then "continue to fight" might be assigned an even higher priority. So I'm spared having to build and manage a complex behavior tree--although this does come with a cost. I expect I'll need to spend a lot of time fine tuning the scoring functions... so keep that in mind if you're interested in scoring as an alternative to behavior trees.
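
For instance, here's a rough sketch of what a "retreat and heal" scorer could look like in typed code (the Creature fields are stand-ins for the example, not my actual classes):

Code: [Select]
// Stand-in health record for the example
typedef Creature = { hp:Float, maxHP:Float };

class Scorers
{
    // Priority of the "retreat and heal" thought. The lower the NPC's
    // health relative to the player's, the more urgent retreating is.
    public static function retreatScore( npc:Creature, player:Creature ):Float
    {
        var mine = npc.hp / npc.maxHP;
        var theirs = player.hp / player.maxHP;
        if ( theirs < mine ) return 0;  // the player is worse off: keep fighting
        return ( 1 - mine ) * 100;      // more urgent the closer I am to death
    }
}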

Once the queue is sorted, the final thing to happen is that one thought takes place. You can see in the sketch below that the function do_thought doesn't do anything except throw an error. That's because the class AIHandler isn't actually the NPC's true "brain". Rather, for the Onyx Golem, the class AIPawn is the actual "brain". AIPawn is a class that overrides some of the functions of AIHandler.
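
If you'd like to play around with the idea, here's a stripped-down sketch of the base "brain" in Haxe. My real AITask carries a lot more data; this version keeps just a name and a scoring function:

Code: [Select]
// Stripped-down thought: just a name and a scoring function.
class AITask
{
    public var name:String;
    public var score:Void->Float;

    public function new( name:String, score:Void->Float )
    {
        this.name = name;
        this.score = score;
    }
}

// Base "brain": queue the thought, sort by score, act on the best one.
class AIHandler
{
    var queue:Array<AITask>;

    public function new() { queue = []; }

    public function think( ?thought:AITask ):Void
    {
        if ( thought != null ) queue.push( thought );        // 1. queue it, don't parse it yet
        queue.sort( function( a, b ) {
            return Std.int( b.score() - a.score() );         // 2. highest priority first
        } );
        if ( queue.length > 0 ) do_thought( queue.shift() ); // 3. one thought takes place
    }

    // Overridden by actual brains (e.g. AIPawn); the base class only complains.
    function do_thought( task:AITask ):Void
    {
        throw "AIHandler.do_thought: no brain attached to interpret " + task.name;
    }
}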

For AIPawn, this is the thought function.


I have the option to force it to think about something immediately, although this overrides the priority scoring system, which defeats the purpose of the model... so it's an option I don't plan to use unless absolutely necessary (performance issues, for instance). If there are no thoughts, then the standard behavior is called... which is basically "wait and, every so often, look around".

When the golem thinks to look around, a "scan" task is passed into the brain. If it sees the player, a "foundplayer" task is passed into the brain... and so on.

So my workflow will basically go like this: if I want the NPC to react to something, I make a call to the function "think" with a new AITask instance. I load relevant data into the AITask and then add the appropriate functions to any brain classes (e.g. AIPawn) so that brain knows what to do with that thought. Not every brain has to know what to do with every thought; uninterpreted thoughts are simply ignored.

merrak

  • *
  • Posts: 2235
More AI Fun! When it works, AI programming has been a lot of fun. When it doesn't, though... ugh. To bring a bit of sanity into development, I have each NPC "tell me" (via log output) what they're doing, what they're thinking, and where they're going. When something goes wrong, I comb through the logs and try to pick apart the train of thought.

On the plus side, my NPCs are talking to me, which is pretty cool  :D I came up with the idea of having some of these log outputs translate into sounds, to give the illusion the player is listening in on their conversations. I'm going to borrow an idea I read some time ago on the forums here (I can't remember what post, though). Each sound will play on two channels: one 'normal' and one with some echo/reverb effect. By mixing the volumes of the two channels, I can give the effect of distance in a large, open chamber. This way the player can hear the NPCs relaying messages to each other.
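
For the curious, the volume math I have in mind is something like the sketch below. The range constant and curve shapes are just illustrative tuning values:

Code: [Select]
class DistanceMix
{
    static inline var MAX_RANGE:Float = 640;  // hearing range in pixels (illustrative)

    // Returns [near, far] channel volumes in 0..1 for a given distance.
    public static function volumesAt( distance:Float ):Array<Float>
    {
        var t = Math.min( distance / MAX_RANGE, 1.0 );  // 0 at the ear, 1 at max range
        var near = 1.0 - t;                  // dry channel fades with distance
        var far = 4.0 * t * ( 1.0 - t );     // echo channel swells mid-range, then fades
        return [ near, far ];
    }
}

At distance zero this gives 100% on the normal channel and 0% on the echo channel, with the echo peaking at mid-range.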

So what's new? I improved the AIPawn (Onyx Golem) by giving it some limitations. The golem can only see Marika if it's facing her. In their resting state, the golems will turn around every so often, but the crafty player can sneak Marika around their lines of sight. If the golems spot Marika, they'll give chase. It's possible for them to lose sight of her, in which case they'll scan frequently for a while before giving up.

So now the player can take advantage of long, thin corridors, objects in the room, and other obstacles. I haven't decided yet if the AIPawn golem should be able to see Marika if she's in a shadow, but I'm leaning toward no. Advanced golems might have the ability to "see her" using sound, but Marika starts out very weak at Level 1, only able to take about 3 or 4 hits, so she needs some advantages to survive.


AIPawn golems can be stunned pretty easily, so there is a clear strategy if the player finds Marika in a room with multiple golems: try to hit each one once then finish them off. To counter that strategy, smarter golems should be able to defend their stunned friends... or retreat, if they sense they're losing.

In my test run (shown above), Marika was losing the direct fight by a wide margin. But by running in a fast circle around the AIPawn golem, she made it lose sight of her, and she was able to defeat it by stabbing it in the back while it was scanning.

I like the effect. I have simulated "processor time", so that weaker golems have weaker "CPUs" and don't process thoughts as quickly. The "CPU" speed of the golem has turned out to be one of its most important stats, maybe more so than armor or hitpoints. AIPawn golems do not react to stimuli very quickly, and Marika can easily out-maneuver them if she doesn't let them surround her.

mdotedot

  • *
  • Posts: 1500
Quote
I expect I'll need to spend a lot of time fine tuning the scoring functions... so keep that in mind if you're interested in scoring as an alternative to behavior trees.

You linked to a URL with the utility-score definition. I've read up on it, and it's been on my mind frequently since. Very interesting thoughts!

You mentioned the sound-thing before and I think it is an awesome idea to 'listen in' on the golems. I'm very interested to hear the effect you aim for with the dual channels.

How do you plan on altering the sound by Marika?! I thought that was one of your other ideas?! What kind of GUI?



Hanging out in the Chat:  http://www.stencyl.com/chat/

Proud member of the League of Idiotic Stencylers! Doing things in Stencyl that probably shouldn't be done.

NickamonPoppytail

  • *
  • Posts: 1039
On the plus side, my NPCs are talking to me, which is pretty cool  :D I came up with the idea of having some of these log outputs translate into sounds, to give the illusion the player is listening in on their conversations. I'm going to borrow an idea I read some time ago on the forums here (I can't remember what post, though). Each sound will play on two channels: one 'normal' and one with some echo/reverb effect. By mixing the volumes of the two channels, I can give the effect of distance in a large, open chamber. This way the player can hear the NPCs relaying messages to each other.

You mentioned the sound-thing before and I think it is an awesome idea to 'listen in' on the golems. I'm very interested to hear the effect you aim for with the dual channels.

I agree; this is a very interesting idea.

These all sound like great ideas, Merrak! Hopefully you won't have too many problems creating AI.


RIP Pirate Wray
April 2002-September 2018

merrak

  • *
  • Posts: 2235
You linked to a URL with the utility-score definition. I've read up on it, and it's been on my mind frequently since. Very interesting thoughts!

You mentioned the sound-thing before and I think it is an awesome idea to 'listen in' on the golems. I'm very interested to hear the effect you aim for with the dual channels.

How do you plan on altering the sound by Marika?! I thought that was one of your other ideas?! What kind of GUI?

This is another useful article I just read. It goes a bit more in-depth on scoring functions, something I've just started really getting into. I think I've been approaching scoring wrong. Graham suggests approaching scoring as a calculation of expected value, whereas I have a simpler ranking system (normal, important, urgent, immediate priorities).

Ultimately I'm going to need a hybrid of an expected value calculation and what I currently have. Once the NPC identifies an objective, the probability of success certainly needs to be taken into account--but I also have to have the NPCs follow orders from their superiors. This has proven to be more challenging than I originally thought.

To test my priority code, I set up a simple cheat that sends an "urgent" message to all NPCs to walk to position 1, 11. Once they reach that position, they resume their normal behavior.

Because an "urgent" instruction supersedes any "normal"-level ones, the NPCs started moving correctly: hitting the cheat button caused any hunting (normal behavior) NPCs to give up and start walking to position 1, 11. But as soon as an NPC began its walk, it immediately gave up and resumed tracking the player. To fix this, I needed to add a system to keep track of the current task, as well as a way to check that a task has been completed.

Happy to say that it works, and it's neat to watch. Once they reach their target, they resume hunting. But now there's a new problem--if too many of the NPCs are sent to position 1, 11, they get into a traffic jam. The NPCs who have reached the position block anyone else from entering, but because the other NPCs are trying to get into that position, the ones who have reached it can't move out. So now they're permanently jammed, because none of them can think to give up... nor can they talk to each other to sort it out.

I don't think I'll need to solve that problem, but it does show the larger issue at hand: thinking of all the potential problems. I foresee a lot of debugging in my future :D

How do you plan on altering the sound by Marika?! I thought that was one of your other ideas?! What kind of GUI?

I don't remember... but I also don't think it's necessary to use dual channel sound for her since any sound she makes will be at the position of the player's "ears", hence 100% volume on the normal channel and 0% on the 'far' channel.

These all sound like great ideas, Merrak! Hopefully you won't have too many problems creating AI.

It's definitely getting more complex as I identify and add needed features. I don't think there's any way around that, but it's been an interesting problem to work on.

merrak

  • *
  • Posts: 2235
Cycle Challenge. Here's the latest problem I'm working on: get the Onyx Golem (AIPawn) to navigate through the corridor and up the stairs. The golem can see Marika, but there's no direct path to her.


A common stumbling block new programmers run into is underestimating how many little problems need to be solved to complete a bigger problem. If that happens to describe you then don't feel bad--it can even throw off long-time programmers. Case in point: the stairs. If you've been following this thread for a while, then you've likely noticed the trend. When something goes wrong or is unexpectedly difficult, it's always the stairs.

For anyone reading this who hasn't read all the past posts, here's a brief run-down of how 3D in Idosra works. The map consists of tiles in a 3D grid. Each tile is a rectangular solid. Internally, the scene is represented in a 2D plane (the Stencyl scene). Each level's tiles lie in a different region of the plane, much like how separate floors are illustrated in the floor plan of a multi-story building. So when an actor climbs a set of stairs, at some point its position in the 2D plane needs to jump from one location to another.
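
In code terms, the mapping is little more than an offset per level. A sketch, with made-up tile and band sizes (not the real values):

Code: [Select]
class GridToPlane
{
    static inline var TILE:Int = 32;      // tile footprint in pixels (illustrative)
    static inline var BAND_H:Int = 1024;  // scene-plane strip reserved per level (illustrative)

    // Scene-plane position of tile (gx, gy) on level gz: each level
    // lives in its own horizontal band, like floors on a floor plan.
    public static function planeX( gx:Int ):Float
    {
        return gx * TILE;
    }

    public static function planeY( gy:Int, gz:Int ):Float
    {
        return gz * BAND_H + gy * TILE;  // climbing stairs jumps between bands
    }
}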

So there's that. There's also collision detection--setting the correct z-coordinate as the actor walks up. Walking down is easy--just set the new (x,y) position and let gravity correct the z-position. Up is harder because gravity will pull the actor down through the stairs unless the z-position is set correctly.

The best part, though, came after I got the stairs working. While testing the golem, I somehow got Marika to fall off the landing and land on top of its head. The golem's brain went completely nuts and crashed the game. It took me a while to get a sense of what was going wrong: somehow it got stuck in an infinite loop of thoughts.

That's a hard problem to debug. What's looping? What is triggering the loop? And why isn't there a catch to break it? I thought of a neat trick I'm pretty proud of, though: using Floyd's Cycle Detection algorithm to identify the repetition in the series of thoughts.

The way "thinking" works is that there is a master "think" function that takes an "AITask" as an argument. AITask is essentially 'something to do', and has a few data fields, including a unique name. I added a log of recent thoughts and implemented Floyd's Cycle Detection to look through it. Any time a lot of thoughts are executed in a short time span, I scan through the thought log and look for cycles. If a cycle is detected within a short time frame and the number of repetitions exceeds some threshold, then it's likely an infinite loop. I then escape out of it and dump the thoughts to the logfile for inspection.
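
Stripped down to an array of thought names (oldest first), the detector looks something like this sketch. It's a heuristic rather than a proof--thoughts can legitimately repeat--hence the threshold:

Code: [Select]
class ThoughtLog
{
    // Floyd's tortoise-and-hare over the log: if the recent thoughts
    // settled into a repeating pattern, some log[i] equals log[2i].
    // Returns the cycle length, or 0 if nothing was caught.
    public static function detectCycle( log:Array<String> ):Int
    {
        var tortoise = 1;
        var hare = 2;
        while ( hare < log.length && log[ tortoise ] != log[ hare ] )
        {
            tortoise++;
            hare += 2;
        }
        if ( hare >= log.length ) return 0;  // ran off the log: no cycle found

        // Measure the period: steps until the matched thought recurs.
        var lambda = 1;
        var i = tortoise + 1;
        while ( i < log.length && log[ i ] != log[ tortoise ] )
        {
            lambda++;
            i++;
        }
        return ( i < log.length ) ? lambda : 0;
    }
}

As an illustration, a log ending in "strike?", "walk", "strike?", "walk", ... comes back with a period of 2; once the same cycle repeats past the threshold in a short window, I bail out and dump the log.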

Success!

Not that having no problems at all wouldn't be better, but this will be a useful debugging tool. So what went wrong? 3D distance is used to check whether the player is close enough to attack. If the player is not close enough, the golem starts walking. To check when to stop walking, 2D distance is used. Imagine the thought process...

"I'm not close enough (3D) to the player to attack, but I see the player, so I'll start walking. But oh, wait, I am close enough (2D), so now I'll ask myself, am I close enough to the player to attack?"

So I still don't have a working golem, but at least the problem has been identified and solved, and I can go on to the next one... pathfinding.

mdotedot

  • *
  • Posts: 1500

Yesterday I watched a GDC talk on Mark of the Ninja. 

They talked about lots of interesting game design but also about A.I. :

Stupid AI (cut out work .. be clear about what the AI is going to do )

https://www.youtube.com/watch?v=A7ejh3YUbac  @ 10:00

Hanging out in the Chat:  http://www.stencyl.com/chat/

Proud member of the League of Idiotic Stencylers! Doing things in Stencyl that probably shouldn't be done.

merrak

  • *
  • Posts: 2235

Summary (because this went on a bit longer than I intended  :o). I write about the current AI model I'm working on. If you're interested in the result but not the technical details, I have a little video at the bottom and give a play-by-play of what the NPC is thinking.

Cycle Challenge Update. Done! 8) As I expected, there were a lot of unforeseen problems I'd have to solve to get this working. My AI model isn't as simple as it once was, but I think that's to be expected.

One of the biggest problems was how to handle long-term tasks. The first version of my AI model assumed every task could be immediately executed: "Look around", "See player", "Speak", etc. were all instantaneous actions. "Hunt player", however, is not. Hunting requires periodic updates, such as updating the path followed if the target moves.

The other problem is just getting used to this kind of programming. Even though it's bound to become more complex, writing a series of if/then statements still feels more natural. But an if/then chain forces all other thoughts to be ignored, which isn't what the model specifies: any time the "brain" thinks, it should always prioritize the highest-scoring thought.

Model Details. Here's how it works (so far). There are three critical functions, the first of which is "think", which is called on every NPC update (on every engine tick, much like in an "always" event). The think() function processes "thoughts" in the form of a class AITask. AITask contains some data relevant to the thought and its own think() function (more on that later).

The second central function is nextTaskFromQueue. Imagine incoming thoughts being stored in a long list for processing. The job of the nextTaskFromQueue() function is to score each task and identify the task to be executed. This is where task scoring takes place. The task's score isn't a constant. Rather, the score is the output of a function that determines how important the task is at any particular moment in time.

Here's the pseudocode for these functions. They're pretty short, since most of the work is done in the third function. Note that 'current task' refers to any long-term task (such as hunting) that might need to be updated.

Code: [Select]
think( )
1. if brain is frozen, then stop
2. loop through all the tasks in the queue and check if they're done. Remove the ones that are.
3. if the current task is done, clear it
4. put any incoming thoughts in the queue
5. sort the queue by priority
6. execute the current thought
7. contemplate the current task

Long-term tasks are assigned a function that returns true if they're complete and false if they're not. This "is done function" is also responsible for updates--say, if the NPC is hunting and needs to update its path. If the NPC has reached its target, the "is done function" returns the expected 'true'. If it hasn't, the function checks whether the path needs to be updated, and returns 'false' if, after updating, the NPC should continue hunting.
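
In sketch form, an "is done function" for hunting comes out to something like this (GridPos and the repath callback are stand-ins for my actual types):

Code: [Select]
// Stand-in position record for the sketch
typedef GridPos = { gx:Int, gy:Int, gz:Int };

class HuntChecks
{
    // "Is done" check for hunting: true once the hunter stands on the
    // target tile; otherwise refresh the path toward the target's
    // latest position and report false so the hunt continues.
    public static function isDone( npc:GridPos, target:GridPos,
                                   repath:GridPos->Void ):Bool
    {
        if ( npc.gx == target.gx && npc.gy == target.gy && npc.gz == target.gz )
            return true;
        repath( target );
        return false;
    }
}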

Step 6 is the third critical function and arguably the most important step. Each type of "brain" has a library of tasks it knows how to respond to. In the case of AIPawn, it has a small, but growing, vocabulary: "arrived", "scan", "walkscan", "strikePlayer", "huntPlayer", "gotoPlayer", "hunt-evaluate", "goto". So these are things the NPC can be told to do and respond to.

The newest addition to my model is Step 7--contemplating a task. This step solves the problem of handling interruptions to a long-term task, such as hunting. Much like how the "brain" has its own AITask queue, each AITask can also have its own queue of subtasks. What contemplation does will make more sense with the pseudocode for the "thought" function (Step 6).

Code: [Select]
do_thought()
1. set task = nextTaskFromQueue()
2. if task is still null, execute the "standard thought" (usually something like look around)
3. if task is not the current task:
    3A. if the task has an "is done function", then it is a long-term task. Set current task = task
    3B. check the thought record for infinite looping (error AI00)
    3C. execute the task using the AIPawn (or whatever other appropriate) library

Code: [Select]
nextTaskFromQueue()
1. set task = current task
2. if the queue is empty, return task
3. for every task T in the queue
    3A. if T is not ready (has a timer delay), then update the delay timer and skip it.
    3B. if T is ready and task is null or the priority of T exceeds the priority of task then remove T from the queue and return T.
        Hence, if there is a current task, but T is higher priority, then the current task is interrupted.

In short, the function do_thought plucks a thought from the queue and executes it. Step 7 does the same thing, although picking from the current task's queue instead of the "master queue".
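
Rendered in Haxe, nextTaskFromQueue comes out short. This sketch extends the simplified AIHandler from my earlier post with a currentTask field, and assumes AITask also carries a delay counter:

Code: [Select]
// Inside the sketched AIHandler; assumes AITask also has: public var delay:Int;
function nextTaskFromQueue():AITask
{
    var task = currentTask;                          // 1. default to the current task
    if ( queue.length == 0 ) return task;            // 2. nothing queued

    for ( t in queue.copy() )                        // copy: we remove while looping
    {
        if ( t.delay > 0 ) { t.delay--; continue; }  // 3A. not ready: tick its timer
        if ( task == null || t.score() > task.score() )
        {
            queue.remove( t );                       // 3B. t outranks the current task,
            return t;                                //     so the current task is interrupted
        }
    }
    return task;
}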

What all this does is allow me to program the NPC to routinely update its current task with only two function calls. For example, here's how I get the NPC to start hunting.

Code: [Select]
var tv:VActor = recall( "targetVA" );
if ( tv != null )
{
    think( new AITask( this, "goto", 0, huntScore, huntCheck, null, [ tv.gx, tv.gy, tv.gz ] ) );
    contemplate( currentTask, new AITask( this, "hunt-evaluate", 100, whileHuntingScore, huntCheck, null, currentTask.parameters ) );
}

In line 1, I ask if the NPC remembers its target. The function "remember" stores a variable in memory, and "recall" retrieves it... so if the NPC had previously spotted the player, it would remember who it was. This memory feature works like game attributes: each variable is given a string name.

In line 4, a new AITask "thought" is created, with the instruction "goto" and then three coordinates of the player. There are also two other functions: huntScore, which returns the priority score of the hunting task, and huntCheck that updates the path and terminates the task when the NPC has reached its target.

In line 5, I add another thought which is tacked onto "goto"'s queue. It has a time delay of 100 ticks. When that timer expires, the task "hunt-evaluate" is executed. In this task, the NPC considers if it should keep hunting or, in the future, talk to other NPCs and consider a new plan. The current task is then cleared and the hunting process restarted, thus forcing a new path or terminating the hunting process altogether if the player has been lost.
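
Incidentally, the memory feature from line 1 is tiny--essentially a string-keyed map, just like game attributes. A sketch:

Code: [Select]
import haxe.ds.StringMap;

class Memory
{
    var store:StringMap<Dynamic>;

    public function new() { store = new StringMap(); }

    // remember() files a value under a name; recall() fetches it,
    // or null if the NPC never learned it.
    public function remember( key:String, value:Dynamic ):Void
    {
        store.set( key, value );
    }

    public function recall( key:String ):Dynamic
    {
        return store.get( key );
    }
}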

So how does all this come together? Here's my test run.

https://www.youtube.com/v/EPvsz0CE-bo

I added the Onyx Golem's "speech" for debugging purposes, to show what it is thinking. It won't be part of the final game and, in fact, is getting too spammy to even be useful to me. Everything the golem says is logged, though, so I can try to pinpoint where things break down.

At the start, the golem is just looking around. It happens to see Marika at about the same time she jumps off the stairs.  The golem strikes, but there isn't any kind of "battle" event in its AI library. So after it strikes, it can no longer see Marika and returns to its usual "wait and scan around" mode.

At the ~0:16 mark Marika tries to sneak up on it, but it just happens to turn around and see her. It strikes again, and since it can still see her, begins to give chase.

At around the 0:34 mark Marika jumps off the ledge and out of the golem's view. It is programmed to look around more frantically if it loses sight of her while hunting, so it turns to a few random angles then spots her again. It then jumps off the edge. At around 0:36 it bumps into the wall and gets confused. This is where you can see the "hunt-evaluate" contemplation event take place. The golem re-evaluates its path and finds a way around the void.

If you look at the orange text at 0:37, you can see the AIPawn library parse the various events. The line "I'm going to find you... Marika!" is the golem's response to having set a target in its memory. The next line ("I'm setting a path to...") is the beginning of the "goto" task. The last line, "I'm going to find you... Marika!", is the signal that the hunt-evaluate task has executed and reset hunting to a valid path. It's at this point the golem finds a way around the void and resumes its hunt.

Yesterday I watched a GDC talk on Mark of the Ninja. 
They talked about lots of interesting game design but also about A.I. :
Stupid AI (cut out work .. be clear about what the AI is going to do )
https://www.youtube.com/watch?v=A7ejh3YUbac  @ 10:00

I just noticed your reply when I hit "preview" writing this post... and the little red warning came up :D Thanks for the link, though. It looks interesting so far, but I'll have to watch the rest tomorrow after getting some sleep.

« Last Edit: July 11, 2018, 11:37:41 pm by merrak »

Bombini

  • *
  • Posts: 1129
This is very interesting!
Thanks for sharing this.

I was reading a book lately, "Jagged Alliance 2" by Darius Kazemi, which also talks a lot about the AI of the game and fits a bit with your post.

Here is an example which might be interesting:

"Determine which attack against which target has the greatest attack value. [Ignore a soldier if:] this merc is inactive, at base, on assignment, or dead this man is neutral / on the same side, he’s not an opponent this opponent is not currently in sight (ignore known but unseen!)

Calculate minimum action points required to shoot at this opponent. If we don’t have enough APs left to shoot even a snapshot at this guy, [ignore them]. Calculate chance to get through the opponent’s cover (if any). If we can’t possibly get through all the cover, [ignore them].

Calculate next attack’s minimum shooting cost. (Excludes readying and turning.) Calculate the maximum possible aiming time. Consider the various aiming times: If aiming for [a given] amount of time produces [the best] hit rate, [use that aiming time]. If we can’t get any kind of hit rate at all [with our aim, ignore this opponent].

Calculate chance to REALLY hit: Shoot accurately AND get past cover. If we can’t REALLY hit at all, [ignore this opponent]. Really limit knife throwing so it doesn’t look wrong. Don’t bother [with knives unless it’s a really great choice].

Calculate this opponent’s threat value. (Factor in my cover from him.) Estimate the damage this shot would do to this opponent. Calculate the combined “attack value” for this opponent: The highest possible value before division should be about 1.8 billion, normal value before division should be about 5 million.

If we can hurt the guy, OR probably not, but at least it’s our best chance to actually hit him and maybe scare him, knock him down, etc., [then we have a viable target]. If there already was another viable target, how does our chance to hit him compare to the previous best one? If this chance to really hit is more than 50% worse, and the other guy is conscious at all, then stick with the older guy as the better target. If the chance to really hit is between 50% worse to 50% better, then the one with the higher ATTACK VALUE is the better target since he’s more dangerous.

In our scenario with eight enemies and six player mercenaries, we have fourteen characters total on the map, meaning every turn the decision tree listed above is run 112 times. The JA2 tactical AI contains dozens of functions of this level of complexity. As AI programmer Alex Meduna notes, “Our game has a reputation for being a bit of a beast!”



merrak

  • *
  • Posts: 2235
Yesterday I watched a GDC talk on Mark of the Ninja. 
They talked about lots of interesting game design but also about A.I. :
Stupid AI (cut out work .. be clear about what the AI is going to do )
https://www.youtube.com/watch?v=A7ejh3YUbac  @ 10:00

A lot of good points in the talk. I'd say I'm at what they called the "testing hypotheses" stage, seeing what works and what doesn't as far as gameplay mechanics go. As for AI, this necessitates a general toolset, which is where I'm at. I asked myself, "What are the essential components I need for NPC AI?" and identified basic features: line of sight, pathfinding, and decision making. I then asked myself what core commands I would give to an NPC. The command I'm working on is "goto (gx,gy,gz)".

They said their game was about planning and executing, so I think the "stupid, predictable AI" approach makes sense for them. I'm envisioning a game about exploration. In particular, I want to present the player with a complex machine and task them with learning how it works.

This is very interesting!
Thanks for sharing this.

I was reading a book lately, "Jagged Alliance 2" by Darius Kazemi, which also talks a lot about the AI of the game and fits a bit with your post.

Here is an example which might be interesting:...

It sounds like they might be doing something similar, computing a "score" in the form of an expected value... or maybe using a complex behavior tree instead. I'm not sure from the excerpt. It'd be interesting to know how some other strategy games go about AI.

merrak

  • *
  • Posts: 2235
Atomic AI. My AI model is coming along nicely. I realized "Utility AI" is a broad description and so my own model needs a more specific name. I decided to call it "Atomic AI"--which sounds cool, but also is a good description of the approach I'm taking.

All of the examples of Utility AI I've read about feature the same core component: scoring actions. But here's a good question: what is an "action"? For example, you could consider "attack player" as an action. Or, you could also consider "walk to player" and "swing weapon" as two separate actions. Both accomplish the same end result.

What I've done is think about each goal an NPC can pursue and break it down into simple actions--much like how molecules are broken down into simpler atoms (hence the name).

I'll get the disadvantage to this approach out of the way first: complexity. Getting sequential actions to trigger as intended has been a real challenge. Most of the challenge has been fixing bugs in the model's core procedures, so working with it will get easier as those bugs are found and corrected. But I'm beginning to think that, even with all those bugs removed, adding new features will always be more challenging than had I gone a different route. After all, getting lots of little mechanical pieces in a machine to work together correctly is challenging. I don't see why software should be different.

So why put up with it? I'm seeing two distinct advantages which make the challenge worthwhile. The first is realistic response. If my NPC is taking on a complex task and something more important comes along, it can easily be interrupted... as shown in this little demo.

https://www.youtube.com/v/mo-OrBvIKpI

Here's a quick breakdown of what's going on (maybe I should get a microphone and narrate these).

The map is a 2x2 grid of rooms, with Marika in the northwest corner and a golem in the southeast corner. At 0:14 I press a special key that tells the golem to begin walking to tile position (4,4,1), which is about where Marika is standing. However, at the moment the golem does not know Marika is there. Rather, it's just told "hunt (pathfinding operation) to that spot". Hunting is a complex operation, but more on that later.

It takes a while for the NPC to get to its goal, and you can see it processing the task for the next few seconds. At 0:24 it arrives and declares "sectorPath length is 0", meaning all of the hunting steps have been processed.

Now, the hunt command I gave the golem was issued with an "urgent" priority, and so it won't bother doing anything lower priority (including looking for Marika) until it's done. Once it reaches its goal, the default behaviors take over and the golem begins its usual task of pursuing our heroine.

Marika then starts walking to the position where the golem started, and the golem chases her. At 0:33 Marika is now at the golem's starting position and I issue the same "hunt to (4,4,1)" command. Since this command has higher priority, the golem immediately gives up chasing Marika and begins walking back to the northwest room. Marika follows just so we can watch the golem.

At 0:42 the golem arrives and resumes its standard behavior, then a fistfight occurs. The end!

The important thing here is that any time something more important comes up, that thought is processed immediately. Despite the complexity of adding new behavior, this part is easy. Each action that makes up a larger task has its own score.

For a bit of realism, I could boost the priority of finishing the task at hand. The "golems" are robots, but a human might make the (potentially unwise) decision to finish what they're currently focused on at all costs.

The second advantage to the "Atomic AI" model is more substantial: CPU usage. Pathfinding is a good case.

One of the more common complaints about the AI Tools extension is that A* is slow. Unfortunately, there's not much I can do about that without making assumptions about what type of game the developer is making. Finding a path through a large map is expensive because there's a lot of searching to do. You can fix this by breaking the larger problem up into smaller pieces--like what I did for Idosra.

The golem sees the map in two different ways. First, it sees what is called the "sector-sector graph", showing how the sectors (rooms) connect to each other. It's very simple in this map.

o---o
|   |
o---o

It also sees the "in-sector graph", which shows how the individual tiles connect to each other.


The golem can move in eight directions, just like the player. Fun fact: to help keep Marika aligned with the NPCs, and also solve several other platforming and collision related problems, I implemented the Zelda 1-style grid that Justin mentioned in an earlier post in this thread. Because of the size of the tiles and perspective, it's hardly noticeable.

When the "hunt to (4,4,1)" command is issued, several things take place... as I'll illustrate below.


The first thing that happens is a path is found from Sector 3 (golem's room) to Sector 0 (Marika's room) within the sector-sector graph. The golem's "brain" then remembers the path:

{3, 1, 0}

Note that the path {3, 2, 0} could also have been chosen. The cost of each connection in the sector-sector graph is dynamic, so I can increase the cost of paths that are in use and encourage multiple NPCs to use different routes (and hence surround the player).
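
The cost bump itself can be as simple as counting how many NPCs have claimed a connection. A sketch (the penalty constant is just a tuning value):

Code: [Select]
class SectorEdge
{
    public var baseCost:Float;
    public var claimed:Int;  // NPCs currently routed through this connection

    public function new( baseCost:Float )
    {
        this.baseCost = baseCost;
        claimed = 0;
    }

    // Cost seen by the pathfinder: busy edges look more expensive,
    // so later NPCs get nudged onto different routes.
    public function cost():Float
    {
        return baseCost + claimed * 5.0;
    }

    public function claim():Void { claimed++; }    // NPC starts using the edge
    public function release():Void { claimed--; }  // NPC is done with it
}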

Once the sector-sector path has been established, the golem then finds a path on the in-sector graph from its current position to the door leading out of its current sector. You may have noticed that there are breaks in the in-sector graph where the doors are. That's because my illustration isn't of one graph. Each sector has its own, separate, in-sector graph. When the golem reaches a doorway, it knows what direction to walk in to pass through the doorway into the next sector.

Once the golem enters the next sector (Sector 1), a new path is constructed from the south door to the west door.

When the golem makes it to the west door, it enters Sector 0 and a new path from the door to its final target, (4,4,1), is found.

The important thing about this approach is that it allows me to break down the larger problem of finding a path all the way through the map into smaller pieces. I don't want to solve the larger pathfinding problem because not only is it more costly, it's also unnecessary. If the target were Marika and not the static position (4,4,1), then consider the chances that Marika will be in the same position by the time the golem makes it all the way across the map.

If you're implementing pathfinding to a moving target, consider that your objective isn't necessarily to tell your NPC how to get all the way to the target. It only needs to be told the first few steps--after which, you'd need to run a new pathfinding computation.
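
Putting the two graphs together, one "leg" of the hunt looks something like the sketch below. findSectorPath, findTilePath, and doorBetween are placeholders for whatever pathfinder you're using (A* from the AI Tools extension, for instance), and GridPos is the same { gx, gy, gz } stand-in as before:

Code: [Select]
class HuntPlanner
{
    // Plan only as far as the next doorway. Re-run this every time a
    // sector is entered (or the target moves), instead of pathing
    // across the whole map at once.
    public static function planNextLeg(
        currentSector:Int, goalSector:Int,
        start:GridPos, goal:GridPos,
        findSectorPath:Int->Int->Array<Int>,
        findTilePath:Int->GridPos->GridPos->Array<GridPos>,
        doorBetween:Int->Int->GridPos ):Array<GridPos>
    {
        var sectorPath = findSectorPath( currentSector, goalSector );  // e.g. {3, 1, 0}
        if ( sectorPath.length <= 1 )
            return findTilePath( currentSector, start, goal );  // same room: path straight there

        var door = doorBetween( currentSector, sectorPath[ 1 ] );
        return findTilePath( currentSector, start, door );      // only as far as the door
    }
}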

With all this in mind, some of the things the golem "says" in the video clip should make more sense. At 0:14 the golem declares it found the sector-sector path and "a path to the goal" which would be a way out of Sector 3. This is represented as one action in the AI model: "start hunting".

The next thought is "I'm at target coordinates". The "target coordinates" don't refer to the final goal of position (4,4,1). Rather, the "target coordinates" refer to the next node in the path through Sector 3 from the starting position to the door. Every time the golem completes one step along its path, it thinks about the next. This is also its opportunity to receive higher priority instructions, such as changing the path, running away if critically injured, or whatever else I want to be more important.

Next... where to go from here. I've settled on making a playable demo a short-term goal. It'll probably be a bit underwhelming at such an early stage, but I like the idea of having a "checkpoint" anyway. The last playable demo I have is from 2015  :o  So... I think it's time to update it!

Bombini

  • *
  • Posts: 1129
Very interesting and fascinating!
I would love to play a demo, and I think it's also an important step or checkpoint, as you say, for yourself.
You need input from others and milestones to keep the motivation up.

Cheers!

merrak

  • *
  • Posts: 2235
Very interesting and fascinating!
I would love to play a demo, and I think it's also an important step or checkpoint, as you say, for yourself.
You need input from others and milestones to keep the motivation up.

Cheers!

Thanks! I'm shooting for the end of summer. There's still one major missing feature: sound. Well, that and the AI is still broken in a few ways, so that needs fixing. Also, inventory... and you have to quit the game if you die :P So lots to do!

Speaking of sound, I completely revamped the original "Jukebox" behavior, which dates all the way back to the Stencyl Jam version of the game. "Jukebox" is a simple behavior that keeps a list of sounds that are currently playing and prevents too many instances of the same sound from playing at once.

The upgraded version, "VJukebox", has some nifty features. I implemented a "VSound" class that consists of a collection of Sound objects (the standard Stencyl sound) and a "Multiplay" function that plays all the tracks in the VSound at different volumes.

The intended effect is to build on an idea I read about in a couple of different places: mixing sounds to create a distance effect. twotimingpete wrote about it in the Ghostsong Thread, and there was also a 3D sound emitter that Rimrook wrote about.

At the moment VJukebox is only configured to play sounds as a function of distance, but I left the implementation general enough that I can expand on it as needed.
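
The core of VSound is small. Here's the shape of it, sketched generically--the playTrack callback stands in for the engine's actual play-sound-at-volume call, so the sketch doesn't assume any particular API:

Code: [Select]
class VSound<TSound>
{
    public var tracks:Array<TSound>;  // e.g. [ dry track, reverb track ]

    public function new( tracks:Array<TSound> )
    {
        this.tracks = tracks;
    }

    // Play every track at its own volume -- e.g. volumes computed
    // from the distance curves. playTrack is a placeholder for the
    // real engine call.
    public function multiplay( volumes:Array<Float>, playTrack:TSound->Float->Void ):Void
    {
        for ( i in 0...tracks.length )
            playTrack( tracks[ i ], volumes[ i ] );
    }
}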

This is the editor, showing how Track 0 (regular sound) fades quickly as distance from the source increases. Track 1 (distant sound) increases, then decreases.


The editor came together quickly, so I'm glad the effort I put into the UI library code has paid off.

Another nice benefit of VJukebox is that I have two dedicated music tracks, so I can fade tracks in and out of each other. Rather than have a soundtrack for each level, I'd like the music to adjust to what is going on. One example I can think of: if you've played the original Alone in the Dark, it had a standard background track and then a couple of special ones that would trigger when something special (like a fight) occurred.

Since I have code which allows NPCs to observe the environment, one thing I can do is make a "conductor NPC" that has a special AI library dedicated to music. In Idosra, NPCs don't have to be associated with actors--so an NPC can be abstracted to an entity the player never sees. I just set up an NPC to sit in the background, monitor what's happening, and think about picking the appropriate music for the situation.

mdotedot

  • *
  • Posts: 1500
Quote
If you're implementing pathfinding to a moving target, consider that your objective isn't necessarily to tell your NPC how to get all the way to the target. It only needs to be told the first few steps--after which, you'd need to run a new pathfinding computation.
I have to remember this! That is quite clever!

The RimRook demo is impressive and it would be great if you could incorporate that into your game.

In the past you talked about sound, and I had made a different assumption back then about the idea you're currently working on.
We discussed player interaction with the golems before and I still don't have a good understanding of how you want to do the 'alteration of communication' between the golems.
What you could do is make the golems 'blind' and have them react to the movement of Marika. The faster she runs, the more noise she makes, and that could alarm the golems. You could turn it into a stealth operation.
The player can sneak up on one of the golems and when she is really close you could enable a point/click action on the golem to turn it off or to scramble its communication with others.
Just an idea. I don't know what direction you want to explore with this sound. But either way it is a cool feature!


 
Hanging out in the Chat:  http://www.stencyl.com/chat/

Proud member of the League of Idiotic Stencylers! Doing things in Stencyl that probably shouldn't be done.

merrak

  • *
  • Posts: 2235
I have to remember this! That is quite clever!

The RimRook demo is impressive and it would be great if you could incorporate that into your game.

In the past you talked about sound, and I had made a different assumption back then about the idea you're currently working on.
We discussed player interaction with the golems before and I still don't have a good understanding of how you want to do the 'alteration of communication' between the golems.
What you could do is make the golems 'blind' and have them react to the movement of Marika. The faster she runs, the more noise she makes, and that could alarm the golems. You could turn it into a stealth operation.
The player can sneak up on one of the golems and when she is really close you could enable a point/click action on the golem to turn it off or to scramble its communication with others.
Just an idea. I don't know what direction you want to explore with this sound. But either way it is a cool feature!

I just now started real work on implementing my ideas for NPC <-> NPC communication. So far I've only hit one snag--a bug with my Bresenham routine. I'm sure by the time I'm done some things will be different from the original outline, as I run into problems and modify the ideas to solve them. But here's an example I'm working on.

I have a working AIPawn and I made a second NPC "brain": AICamera. AICamera is very simple. If it sees Marika (or any other target I want it to detect), it raises an alarm. Otherwise, it scans.

The fun part is that the "camera" points in one direction and has a limited field of view. The "watcher golem" in my test room has a +-45 degree vertical field of view and a +-60 degree horizontal field of view. This means it can't see anything in the safe area I highlighted below.
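
The check itself is just two angle comparisons. A sketch, in radians, where dx, dy, dz point from the watcher to the target and facing is the watcher's horizontal angle (line of sight would still be checked separately):

Code: [Select]
class FovCheck
{
    static var HALF_H:Float = 60 * Math.PI / 180;  // +-60 degrees horizontal
    static var HALF_V:Float = 45 * Math.PI / 180;  // +-45 degrees vertical

    public static function inCone( facing:Float, dx:Float, dy:Float, dz:Float ):Bool
    {
        var horiz = Math.atan2( dy, dx ) - facing;
        while ( horiz > Math.PI ) horiz -= 2 * Math.PI;   // wrap into -PI..PI
        while ( horiz < -Math.PI ) horiz += 2 * Math.PI;

        var vert = Math.atan2( dz, Math.sqrt( dx * dx + dy * dy ) );

        return Math.abs( horiz ) <= HALF_H && Math.abs( vert ) <= HALF_V;
    }
}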


If the "watcher golem" sees Marika, it shouts a message: "FOUNDPLAYER, [gx, gy, gz] (Marika coordinates)".

A good analogy to what this means would be to think of the Stencyl blocks "For all actors in scene, Trigger event ### in all behaviors for actor". If a behavior happened to have the event, something would happen. Otherwise, the behavior would do nothing.

Likewise, the 'brain' AIPawn is programmed to listen to the message FOUNDPLAYER, and knows how to respond to it. Another AICamera, though, would ignore it because it doesn't know that command.

Another component to this system is distance. These messages are treated like sounds, where they are quieter and more distorted the further the source is from the listener.  When I have an NPC shout a message, it's only received by other NPCs that are close enough to hear it.
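
In sketch form, shouting is a loop over nearby brains. The hear() field here stands in for however the receiving brain queues the message as a thought:

Code: [Select]
// Stand-in for anything with a position and an ear
typedef Listener = { gx:Float, gy:Float, hear:String->Void };

class Messages
{
    // Deliver a shouted message to every brain within earshot. Brains
    // that don't know the command just drop it.
    public static function shout( fromX:Float, fromY:Float, range:Float,
                                  message:String, listeners:Array<Listener> ):Void
    {
        for ( npc in listeners )
        {
            var dx = npc.gx - fromX;
            var dy = npc.gy - fromY;
            if ( Math.sqrt( dx * dx + dy * dy ) <= range )
                npc.hear( message );
        }
    }
}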

Going back to the cameras. I'm thinking about another NPC type that primarily relays messages. This means you might see NPCs run away as soon as you encounter them. What they're doing is running to a waypoint to deliver a message: "PLAYER IN SECTOR ####". I can also have fixed NPCs that simply re-shout any message they hear, thus extending the range of messages.

So here's a fun Metroidvania level design opportunity... how can Marika destroy the camera in that room? She needs some kind of projectile weapon. Of course, she can sneak around in the safe area... unless she wants to go through that door in the south wall.

I hadn't thought about sneaking up on the golems, but that's a really interesting idea... reprogram them :D Right now you can stun them, which basically just temporarily disables the thought( ) routine for a few ticks. It'd be neat if you could strike parts off of them and have them recognize the damage and adjust their actions accordingly... but I'm not planning to go that far. I have to draw the line somewhere or I'll never finish the game.