Arctic Edge devlog #6: enemy AI (part IV)

This is the sixth post in a series about the development of Arctic Edge, a stealth-action game I’m working on. If you missed the previous post click here, or here for the whole series.

This post is – probably – the last one dedicated to enemy AI. This time I want to focus on one specific character behaviour: the search. When the player alerts the enemies but they lose track of where he is, they have to search the area and try to find him. This might sound simple, but it is not. Where should the enemy search? What route will he follow? How do we detect obstacles?

Defining the search area

This is clearly the easiest part. As soon as the search behaviour is triggered in the FSM, we find which tile the enemy is located in. With the enemy’s xy coordinates it’s a simple calculation: with 64×64 tiles, tile_x = floor(enemy_x / 64) * 64 gives us the tile’s origin, and the same goes for the y component.

Once we have the tile we pick a radius, e.g. 3 tiles around it, and compute the positions of those tiles. Knowing the tile size, it’s pretty easy.
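The two steps above can be sketched in a few lines. The game is written in GML for GameMaker Studio 2, so this is just a minimal Python sketch of the idea; the names (TILE_SIZE, RADIUS, tile_origin, search_area) are my own, not the game’s.

```python
TILE_SIZE = 64  # assumed tile size, matching the 64×64 example above
RADIUS = 3      # tiles around the enemy's tile

def tile_origin(x, y, tile_size=TILE_SIZE):
    """Snap a world position to the top-left corner of its tile."""
    return (int(x // tile_size) * tile_size, int(y // tile_size) * tile_size)

def search_area(enemy_x, enemy_y, radius=RADIUS, tile_size=TILE_SIZE):
    """Return the origins of all tiles within `radius` tiles of the enemy's tile."""
    cx, cy = tile_origin(enemy_x, enemy_y, tile_size)
    return [(cx + dx * tile_size, cy + dy * tile_size)
            for dy in range(-radius, radius + 1)
            for dx in range(-radius, radius + 1)]
```

With a radius of 3 this yields a 7×7 block of tile positions centred on the enemy’s tile.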

Cleaning the area

The previous step has given us an area to search. The problem is that we don’t know how to move around it and, even worse, the area is not clean. By that I mean that we don’t know which of those tiles are walkable and which aren’t. There might be walls in between, but we don’t know.

To the rescue comes the most unexpected (to me) but at the same time logical solution: the flood fill algorithm, the one used in software like Photoshop or Paint to fill an area with a colour. If you think about it, it actually makes a lot of sense. You need to know which tiles of the selected area are available to you, meaning they are connected to the tile the enemy is on and free of obstacles. That’s exactly what happens when you use the bucket fill tool in paint software: you click on a white pixel with the bucket and the colour blue selected, and every pixel connected to that one is changed to blue as long as it is white. The exact same thing we need.

We run this simple algorithm over the array of tiles from the previous step and we get a clean area that must be searched by the enemy.
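As a sketch, here is a classic breadth-first flood fill restricted to the tiles from the previous step. It is not the game’s GML code; `is_walkable` stands in for whatever collision/tile check the game uses, and 4-connectivity is an assumption on my part.

```python
from collections import deque

def flood_fill(start, area, is_walkable, tile_size=64):
    """Keep only the tiles in `area` that are reachable from `start`
    through walkable, 4-connected neighbours (BFS flood fill)."""
    area = set(area)
    if start not in area or not is_walkable(start):
        return []
    reachable, queue = {start}, deque([start])
    while queue:
        x, y = queue.popleft()
        for n in ((x + tile_size, y), (x - tile_size, y),
                  (x, y + tile_size), (x, y - tile_size)):
            if n in area and n not in reachable and is_walkable(n):
                reachable.add(n)
                queue.append(n)
    return list(reachable)
```

Tiles that are inside the selected radius but separated from the enemy by a wall simply never make it into the result.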

Finding a path

The last thing left to do is to find a path to move around that selection of tiles. Because of how the flood fill algorithm works, the resulting array of tiles has a pretty well defined flow. The problem is that if we move from tile to tile it looks kind of silly and weird.

The solution I came up with is to start at the first tile and loop over the following tiles checking the distance. If the distance is higher than the one to the previous tile, we keep going; if it’s smaller, we’ve found one of our key points. We repeat this process until we reach the end of the array.
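My reading of that scan, as a Python sketch: walk the flood-fill-ordered tiles measuring the distance from the start tile, and whenever the distance stops growing, the previous tile was one of the extremes. This is an interpretation of the description above, not the game’s actual code.

```python
import math

def key_points(tiles, origin):
    """Scan the ordered tile list; whenever the distance from `origin`
    drops compared to the previous tile, that previous tile is marked
    as a search point (a local extreme of the flood-fill flow)."""
    points = []
    prev_dist = -1.0
    for i, tile in enumerate(tiles):
        d = math.dist(origin, tile)
        if d < prev_dist:               # distance shrank: the previous
            points.append(tiles[i - 1])  # tile was one of the extremes
        prev_dist = d
    return points
```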

Red borders indicate all available tiles that define the search area.
Shaded yellow are all the connected tiles that must be searched.
The circled numbers indicate the search points and their order.

This process guarantees that we reach all the extremes of the selected area. Through testing it has proven to be more than reasonable, and in no case are there tiles left unexplored. Better than that, the movement through the search areas feels reasonable and logical, giving the player the feeling that the enemies are looking around for him.

I tried a bunch of other things before I reached this solution and nothing worked the way I wanted. I could never have imagined that an algorithm that at first sight seems so disconnected from game development could work, but it does. I guess there’s a lesson to be learned here.

Anyway, I thought it was an interesting thing to talk about and a good way to end this section of the devlog dedicated to enemy AI. Let’s see how the game evolves in the following weeks and what else we can talk about. At some point I want to cover the more artistic/visual part of the game. We’ll see.


Arctic Edge devlog #5: enemy AI (part III)

This is the fifth post in a series about the development of Arctic Edge, a stealth-action game I’m working on. If you missed the previous post click here, or here for the whole series.

Enemy behaviour

In the previous two posts I’ve been talking about how the enemies can detect the player through a simulation of senses (sight, hearing, and “presence”). Those senses are one part of the AI simulation, as they detect inputs that can translate into behaviour changes.

In this post I’m going to dive into the other part: the behaviour. To define those behaviours and control the changes between them I’m using a finite state machine (or at least my interpretation of one).

From Wikipedia:

A finite-state machine (FSM) or finite-state automaton (FSA, plural: automata), finite automaton, or simply a state machine, is a mathematical model of computation. It is an abstract machine that can be in exactly one of a finite number of states at any given time. The FSM can change from one state to another in response to some inputs; the change from one state to another is called a transition.[1] An FSM is defined by a list of its states, its initial state, and the inputs that trigger each transition.

From that definition we get that there’s only one state being executed at a time; in Arctic Edge’s case this could be something like patrolling an area, guarding a post or similar. Then we have inputs, for example sight, that can trigger a change of state. Another option is the state having an exit: the state has a purpose, and when that purpose is accomplished it transitions to a predefined state. The state machine “listens” to these changes and makes a transition from one state to another when required.

A look into the enemy AI

Let’s take a deep dive into an event or action and the behaviour sequence that is triggered:

  • An enemy is in the patrol state. There are 3 points in space and he moves from 1 to 2, then to 3 and finally back to 1, restarting the whole cycle.
  • The player runs past close to the enemy. He passes behind him, so the enemy can’t see him. At the same time the player is not close enough to trigger the “presence” sense, but he is in hearing range. The hearing sense is triggered by the noise of the steps and changes the patrol state to the suspicious state.
  • The state machine makes the transition to the new state. In the suspicious state the first thing we do is set up a very short timer, as this is a transition state in itself. Once a time t has passed, the enemy will transition to another state: investigate.
  • While the timer is running, the state machine executes the basic behaviour of suspicious: look towards the source of the noise while holding position.
  • Time passes and the state machine “exits” the suspicious state and enters the investigate state.
  • The new state makes the enemy move towards the area where he heard the sound. We recorded this location when the hearing sense was triggered.
  • Once the enemy reaches the location, a state exit is triggered. The next state will be look around.
  • In the look around state, well, the enemy looks around. This is again a timed state. In this case the enemy will look in different directions trying to find out what he heard. There are two possible outcomes: he sees the player and triggers a new state (combat), or he does not see the player and triggers the recover state. Let’s make it the latter.
  • In the recover state he moves back to the original position, recorded at the moment he broke out of the patrol state.
  • Once he reaches the position we “exit” to the patrol state and continue with the standard behaviour.
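The start of that chain can be sketched as a tiny FSM. The game is in GML, so this is just a minimal Python illustration; the state names come from the list above, while the timer value, method names and position handling are all my own assumptions.

```python
SUSPICIOUS_TIME = 30  # frames; an assumed value for the short timer

class Enemy:
    """Minimal FSM sketch of the patrol -> suspicious -> investigate chain."""
    def __init__(self, pos):
        self.pos = pos
        self.state = "patrol"
        self.timer = 0

    def change_state(self, new_state):
        self.state = new_state
        if new_state == "suspicious":   # a timed transition state
            self.timer = SUSPICIOUS_TIME

    def on_hear_noise(self, noise_pos):
        if self.state == "patrol":
            self.home_pos = self.pos    # recorded for the recover state
            self.noise_pos = noise_pos  # recorded for investigate
            self.change_state("suspicious")

    def step(self):
        if self.state == "suspicious":
            # meanwhile: face toward self.noise_pos while holding position
            self.timer -= 1
            if self.timer <= 0:
                self.change_state("investigate")
```

The same pattern extends to the remaining states (look around, recover) with their own timers and exits.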

State machines

What I listed above is one of the possible chains of states that take part in the enemy AI. It’s a balancing act between inputs (the senses, getting shot) and state behaviours, with state exits (investigate after being suspicious, look around after investigate) and the initial state also playing a role in defining the overall behaviour of the enemies.

It’s a pretty interesting topic, although a pretty complicated one. I feel like enemy AI can be one of the most complex things to code in a game (not putting engine development in that box), and one that I’m just starting to get into.

I think using a state machine is the best solution for my game at the moment. I’m trying to keep the scope of the game small, and with a limited number of states I get the behaviour I’m looking for. Maybe in the future I will explore other techniques like behaviour trees (seemingly the standard nowadays), but that will be with another game, not Arctic Edge.

In the next post I will talk about one specific state: search. It took me a while to figure it out and the solution is quite interesting. I think that will be the last time in this series talking about the AI, and I will move on to some other part of development. Until then, take care.


Arctic Edge devlog #4: enemy AI (part II)

This is the fourth post in a series about the development of Arctic Edge, a stealth-action game I’m working on. If you missed the previous post click here, or here for the whole series.

In my previous post I talked about how I solved the issue of enemies listening to noises created by the player. Even more important than the hearing sense is sight. Enemies must be able to see when the player is in front of them and react accordingly.

Seeing the player

For the enemies to see the player and react to his presence I have decided to use a sight cone, a “trick” used in multiple games like the Commandos saga, Mark of the Ninja and many more.

In Commandos you were able to select an enemy and display the area that they were seeing.

The easiest way I’ve found to do this is to generate a triangle of length L and width W in the direction the enemy is looking, and then check collisions with the player’s bounding box. Just creating the triangle has been a bit of a pain for me, as I’m not that good at math (I’m truly awful). Thankfully the internet is full of websites with great resources like this one. Following that recipe I managed to rotate the triangle around the enemy and check the collisions with the player’s object. If there’s no collision between the player and the triangle, the enemy continues with its current state. But if there’s a collision with the player, we start a new set of checks.
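A sketch of the cone construction, in Python rather than the game’s GML. L and W are the post’s names; everything else (the apex-plus-base construction, the sign-based point-in-triangle test) is one standard way to do it, not necessarily the recipe the post followed.

```python
import math

def sight_cone(origin, facing_deg, length, width):
    """Sight triangle: apex at the enemy, a base of `width` at distance
    `length`, rotated to the facing direction."""
    ox, oy = origin
    a = math.radians(facing_deg)
    fx, fy = math.cos(a), math.sin(a)     # forward unit vector
    px, py = -fy, fx                      # perpendicular unit vector
    bx, by = ox + fx * length, oy + fy * length  # centre of the base
    half = width / 2
    return [(ox, oy),
            (bx + px * half, by + py * half),
            (bx - px * half, by - py * half)]

def point_in_triangle(p, tri):
    """Same-side sign test of `p` against each triangle edge."""
    def cross(a, b, c):
        return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
    d1 = cross(tri[0], tri[1], p)
    d2 = cross(tri[1], tri[2], p)
    d3 = cross(tri[2], tri[0], p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)
```

Checking the corners of the player’s bounding box against the triangle stands in for the bounding-box collision mentioned above.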

Now that we have determined that the player is intersecting or inside the vision cone, we need to see if there’s anything between the enemy and its prey. – Just to be clear, at this stage we still haven’t changed the enemy’s state. – Game Maker Studio 2 doesn’t have a raytrace function (I might be wrong), but it has a series of collision detection functions that come in handy. We determine a series of points on the player object and trace lines from the enemy’s position (or its head) to those points. These are collision lines, and we check whether they collide not with the player but with a solid object. If the function returns true, it means there is a solid object between the enemy and the player, and therefore the enemy is not seeing the player.

As I said before, I trace more than one line between the enemy and the player. The reason is that checking only from the enemy’s position to the player’s position, for example, is very limited. In that case, both legs of the player could be visible to the enemy, but because we would be checking against the centre of the object, the enemy would not know. To have a better system I’m currently using the four corners of the player object’s bounding box. That gives me a much better idea of what the enemy is actually seeing, though I have already decided to increase the number of points the enemy checks, adding at least another one at the centre of the player object.

Once we’ve done all these checks (remember, 4 different collision lines), we count them. If more than 2 of the lines have returned negative (no collision with solid objects, so they reach the player), we change the enemy state to combat.
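The counting logic above fits in a few lines. In this sketch, `blocked(a, b)` stands in for GMS2’s collision line against solid objects; the function name and signature are my own.

```python
def sees_player(enemy_pos, player_bbox_corners, blocked):
    """Trace a line from the enemy to each corner of the player's
    bounding box; the player is spotted if more than 2 of those
    lines are free of solid objects."""
    clear = sum(1 for corner in player_bbox_corners
                if not blocked(enemy_pos, corner))
    return clear > 2
```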

Is that the player?

Our sight is not perfect. Not everything we see is perfectly clear. There are factors like the distance between us and objects, and objects in our periphery: things that we see, but more as a shape than as a clear object or entity. To implement this in the game I have added a second sight cone (or triangle), repeating the exact same process I described for the primary sight cone.

By adding a secondary cone we can add another state for the enemy, one in which he gets suspicious and goes to investigate. It’s an intermediate state between idle (patrol, guard a position, etc.) and straight-up combat. This gives way to interesting situations where you have been spotted by the enemy but have a chance to escape without engaging in combat, or can even make the enemies suspicious on purpose to lure them away from the place they are guarding.

For me, from the moment I implemented this second sight cone (and the associated behaviours), the game finally started to feel like it was shaping up into something playable and engaging.

Letting the player know what’s going on

All this collision checking I’ve been talking about happens “under the hood”. The player can see the results of these interactions, but that is not enough. For example, in MGSV, when an enemy spots you, even from afar, the game immediately lets you know that there is a threat and where it is. To do so it uses a series of visual and audio cues, including an arrow pointing in the direction of the enemy that has seen you, animations on the enemy that make it clear they are suspicious, etc.

Obviously I can’t do as much as that game did, but I can take advantage of the 2D, top-down nature of my game to make it even more obvious while using fewer resources. As you might have seen in the Commandos screenshot at the beginning of this post, painting the enemies’ sight cones is an option when your game camera is not that close to the main character. And that is exactly what I’ve done.

To do so I had to go back to the raytracing issue, as I need to “cut” the cone when there are walls intersecting it. Again, thanks to the internet gods, I found a simple way of doing so by combining a couple of built-in functions of GMS2. Because there are a couple of steps involved in the “painting” of this cone, I’m just going to list them:

  1. Trace a ray from the point of origin of the triangle to one of the other vertices. This is how we do it:
    1. Move 1px along the line between the two vertices.
    2. Check for a collision with solid objects at this point.
    3. If there is a collision, update the vertex of the section and exit the loop; if there’s no collision, repeat.
    4. If we reach the end of the line, end the loop.
  2. Once we have determined that side of the sub-triangle, rotate that line n degrees around the origin.
  3. Repeat point 1 for the second line.
  4. Paint the sub-triangle with the information gathered.
  5. Set the second line as line one of the next triangle.
  6. Repeat from point 2 until the sum of the angles of all sub-triangles equals the angle of the collision triangle.
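The steps above can be sketched as a 1px ray march plus a fan sweep. Again this is a Python illustration, not the GML implementation: `solid_at(x, y)` stands in for the GMS2 collision check, and the function names and the `step_deg` parameter are assumptions.

```python
import math

def march_ray(origin, angle_deg, max_len, solid_at):
    """Step 1px along the ray and stop at the first solid pixel
    (steps 1.1-1.4 above)."""
    ox, oy = origin
    dx, dy = math.cos(math.radians(angle_deg)), math.sin(math.radians(angle_deg))
    for i in range(1, int(max_len) + 1):
        x, y = ox + dx * i, oy + dy * i
        if solid_at(x, y):
            return (x, y)  # the "cut" vertex for this sub-triangle side
    return (ox + dx * max_len, oy + dy * max_len)

def cone_triangles(origin, facing_deg, cone_deg, step_deg, max_len, solid_at):
    """Sweep the cone in `step_deg` slices, building one sub-triangle per
    slice from two marched rays (the fan of steps 2-6 above)."""
    tris = []
    a = facing_deg - cone_deg / 2
    prev = march_ray(origin, a, max_len, solid_at)
    while a < facing_deg + cone_deg / 2:
        a += step_deg
        cur = march_ray(origin, a, max_len, solid_at)
        tris.append((origin, prev, cur))
        prev = cur
    return tris
```

A larger `step_deg` means fewer sub-triangles, which matches the performance trade-off mentioned below.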

This process sounds complicated, but it didn’t take me that long to get it done. Once I had it figured out, I managed to paint the 2 sight cones – sight and peripheral sight – with ease. I’m keeping the number of sub-triangles low to avoid performance issues. I still have to fine-tune it to get the best results without compromising performance, but I’m happy with how it works.

Look behind you! A three-headed monkey!

I’m sure more than once you have felt a presence behind you, turned around, and someone was there (hopefully a friendly face). Well, that “sixth sense” of presence is something that also plays a role in a stealth game and, of course, I felt I had to add it to my game. Thankfully it was a much easier task than the one I just described.

To implement the presence sense I only had to check for collisions with the player within a small radius around the enemy. Once the player is detected, the enemy takes a few steps back in shock, looking at the player, and after a couple of tenths of a second the state changes to combat. Sweet and simple.
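The check itself really is just a distance test. A Python sketch, with the radius value being an assumption of mine:

```python
import math

PRESENCE_RADIUS = 24  # pixels; an assumed value

def presence_triggered(enemy_pos, player_pos, radius=PRESENCE_RADIUS):
    """The 'sixth sense': the player is felt when inside a small
    circle around the enemy."""
    return math.dist(enemy_pos, player_pos) <= radius
```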

Coming up with all these implementations has been both a frustrating and a fun experience. I easily get desperate when math comes into play, making me feel dumb and boosting my impostor syndrome to the limit. But at the same time there is a great feeling of accomplishment when you manage to push through and find a solution for the task you are working on.

There’s much more work I’ve done in the past few weeks. I’ve coded a big part of the enemy states, I have my main character moving around, shooting, throwing rocks, and a whole lot of other features. But I’ll leave that for another day.


Arctic Edge devlog #3: enemy AI (part I)

This is the third post in a series about the development of Arctic Edge, a stealth-action game I’m working on. If you missed the previous post click here, or here for the whole series.

Listening to the player

When I started thinking about how the enemies could “listen” to the player while running, I went for the quickest, simplest solution: detect a collision with the player inside a circle of radius r around the enemy, and if the player is running, alert the enemy.

That definitely works, but it only allows me to detect the player moving, nothing else. If I want to detect the player shooting, I have to add another conditional, and the same for every other noise-producing action. Besides being a messy and limited solution, it’s also kind of lazy.

To fix this I came up with something that seems to be working quite well. I’ve created a new object: the noise object. The enemies detect instances of this object and act accordingly, changing the enemy’s state.

The noise object is very simple. It has two important variables: a radius and a duration. Every time the object is instantiated (at position xy, for example wherever the player took his last step), it grows linearly until it reaches the desired radius, doing so in the defined time (duration). After reaching full size, the object destroys itself.

The noise object in action. It’s a bit difficult to see due to the speed and quality of the video, but each noise object appears as a white circle, grows to its maximum size and disappears.
The circle won’t be shown in game; it’s only for debugging purposes.

This approach is very flexible and helps me in many ways. For example, when the player is running I can create the noise object only when the step sound is actually playing (not implemented yet, but still). It also allows me to create steps of different intensity (radius and duration) depending on the material the player steps on. The same applies to gunshots. Not to mention that I’m not limited to the player’s position when creating these noises: I could make an explosion that alerts everyone around, or throw a rock away from me to distract an enemy that is approaching.
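The noise mechanism described above can be sketched as follows. The game is in GML; this Python version assumes a frame-based duration and my own names (`Noise`, `heard_by`).

```python
import math

class Noise:
    """Sketch of the noise object: grows linearly to `max_radius`
    over `duration` frames, then flags itself for destruction."""
    def __init__(self, x, y, max_radius, duration):
        self.x, self.y = x, y
        self.max_radius = max_radius
        self.duration = duration
        self.age = 0

    @property
    def radius(self):
        return self.max_radius * min(self.age / self.duration, 1.0)

    @property
    def finished(self):
        # the real object destroys itself; here we just flag it
        return self.age >= self.duration

    def step(self):
        self.age += 1

def heard_by(noise, enemy_pos):
    """An enemy hears the noise when the growing circle reaches him."""
    return math.dist((noise.x, noise.y), enemy_pos) <= noise.radius
```

A footstep, a gunshot or a thrown rock then only differ in the x, y, radius and duration they pass in.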

I’m aware that I’m not discovering anything new, but it’s been fun to come up with this solution and prototype it in the game.


Arctic Edge devlog #2: entity

This is the second post in a series about the development of Arctic Edge, a stealth-action game I’m working on. If you missed the previous post click here, or here for the whole series.

The entity, or parent entity, is the base I’m using for each of the game’s actors, like the player, enemies or bullets. It has a series of properties and mechanisms common to all of these actors. For example, all entities have a speed on the X axis and one on the Y axis. All entities also have a max speed and an acceleration.

Besides common variables, I also have common behaviour shared between entities. I place this common behaviour in the step event of the parent entity and leave the begin step and end step events empty, to be used by the child entities. It works as follows:

  • Begin step event: in each of the child entities it is used to define input. That means that in the player entity I handle user input, while in the enemies a state machine defines which action will be performed on that frame.
  • Step event: All entities have a speed, direction, etc. and this event makes use of them to move the entity around, detect collisions, etc. This event is defined in the parent entity object.
  • End step event: as of now this event is also used exclusively in the child entities. It’s reserved mainly for managing animation and sfx, but it can also be used for other tasks.

The step event is divided into three sections or blocks of code, executed in the following order:

  • Pre-movement: uses the forces defined in the begin step event to determine the speed of the object on both axes.
  • Move Y: Moves and handles collisions in the Y axis
  • Move X: Moves and handles collisions in the X axis
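The parent step event can be sketched roughly like this. This is a Python illustration of the structure, not the GML code: the variable names (`xspd`, `move_input`, etc.) are assumptions, and the collision handling is stubbed out with comments.

```python
class Entity:
    """Sketch of the parent entity: pre-movement, then Move Y, then
    Move X, in the order described above."""
    def __init__(self):
        self.x = self.y = 0.0
        self.xspd = self.yspd = 0.0
        self.accel = 1.0
        self.max_spd = 4.0
        self.move_input = (0, 0)  # set in the child's begin step event

    def step(self):
        # Pre-movement: turn this frame's forces into clamped axis speeds
        ix, iy = self.move_input
        self.xspd = max(-self.max_spd,
                        min(self.max_spd, self.xspd + ix * self.accel))
        self.yspd = max(-self.max_spd,
                        min(self.max_spd, self.yspd + iy * self.accel))
        # Move Y (collision checks against solids would go here)
        self.y += self.yspd
        # Move X (same)
        self.x += self.xspd
```

A child entity only has to set `move_input` in its begin step event (from user input or the state machine) and handle animation/sfx in its end step event.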

This is how it works as of now. Maybe I will have to add some more stuff in the future but right now I’m keeping it as simple as I can.

As you can see, it’s a pretty simple system, but it allows me to keep my code organized and delegate common actions to a parent entity. Is this the best approach? I honestly don’t know, but so far it seems to be doing what I need. In the next post I will dive deep into the enemy state machine and how I’m implementing its behaviour.


Arctic Edge devlog #1: prototyping

I’ve started working on my next game: a top-down action-stealth game that I’m calling Arctic Edge. I was playing around with Pico-8 – the fun but somewhat complicated virtual console – and came up with a little concept for the game.

I soon saw that things were getting a little complicated and moved the project to Game Maker Studio 2. Even with the change, I’m going to try really hard to keep the scope as small as it was in the original Pico-8 version.

The game puts you in the place of a black ops agent that has to infiltrate a pharmaceutical complex in the north of Sweden. The lab has been taken by a terrorist group for unknown reasons. Special ops, snowy environment and terrorists, all very Metal Gear Solid.

More than anything else, I’m excited about doing some work on the enemy AI. I hope I can do a decent job and create situations that are interesting for the player. I’m not going to do anything too complicated, though. As I said, I want to keep the scope small.

I’m going to release more of these devlogs as I make progress on development. I think it can be very useful for me to be aware of what I do right and wrong. Let’s see how it goes 🙂