We are entering March and with that I’m also entering the 6th month of development of The Kolarian Trials, my next game. Like with any other project of mine, I set out to do something quick and failed miserably at it. Sometimes it’s a lack of quality free time to put into these projects, sometimes it’s overscoping, but most of the time it’s that I just have no idea what I’m doing – in a good way, you know. I’m still learning, and game development encompasses so many disciplines that it requires a lot of effort.
Anyway, the game – titled The Kolarian Trials – is a boss rush: a succession of fights against enemies that resemble what we would usually find at the end of a stage in other games – the “level boss”. The game has been progressing well. I have 4 bosses fully implemented and a fifth that is still missing some mechanical polish and its art/animations. There’s a sixth one in the planning stages, and that will probably be the last, as I want to wrap up this development and move on to something else.
I thought I would share some videos showing how the game has evolved over time:
It’s really cool to see a game grow from the game development equivalent of a blank page to something that is fully playable and – hopefully – enjoyable.
If everything goes according to plan I will release the game by the end of April with 6 levels, an online leaderboard – arcade machine style – and customizable difficulty settings to tailor the experience to your taste. And all of it playable from the web 😀
This is the sixth post on a series about the development of Arctic Edge, a stealth-action game I’m working on. If you missed the previous post click here or here for the whole series.
With this post we reach the last one dedicated to enemy AI – probably. This time I want to focus on one specific character behaviour: the search. When the player alerts the enemies but they lose track of where he is, they have to search the area and try to find him. This might sound simple, but it is not. Where should the enemy search? What route will he follow? How do we detect obstacles?
Defining the search area
This is clearly the easiest part. As soon as the search behaviour is triggered in the FSM, we look up which tile the enemy is located in. With the enemy coordinates (x, y) we do a simple calculation and find the exact tile. E.g. with 64×64 tiles we would do tile_x = floor(enemy_x / 64) * 64, and the same for the y component.
Once we have the tile we decide on a radius, e.g. 3 tiles around, and we get the positions of those tiles. Knowing the tile size, it’s pretty easy.
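The two steps above could be sketched like this. This is a minimal illustration, not the game’s actual GML code; the 64×64 tile size matches the example, and the function name is made up:

```python
import math

TILE_SIZE = 64  # assumed tile size, matching the 64x64 example above

def search_area(enemy_x, enemy_y, radius=3):
    """Return the top-left coordinates of every tile within `radius`
    tiles of the tile the enemy is currently standing on."""
    # Snap the enemy position to the tile grid.
    tile_x = math.floor(enemy_x / TILE_SIZE) * TILE_SIZE
    tile_y = math.floor(enemy_y / TILE_SIZE) * TILE_SIZE
    tiles = []
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            tiles.append((tile_x + dx * TILE_SIZE, tile_y + dy * TILE_SIZE))
    return tiles
```

With a radius of 3 this yields a 7×7 block of tiles centred on the enemy’s tile.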
Cleaning the area
The process in the previous step has given us an area to search. The problem is that we don’t know how to move around it and, even worse, the area is not clean. By that I mean we don’t know which of those tiles are walkable and which aren’t. There might be walls in between, but we don’t know.
To the rescue comes the most unexpected (to me) yet logical solution: the flood fill algorithm, the one used in software like Photoshop or Paint to fill an area with a color. If you think about it, it actually makes a lot of sense. You need to know which tiles of the selected area are available to you, meaning they are connected to the tile the enemy is on and free of obstacles. That’s exactly what happens when you use the bucket fill tool in paint software: you click on a white pixel with the bucket and the blue color selected, and any pixel connected to that one will be changed to blue as long as it is white. The exact same thing we need.
We run this simple algorithm through the array of tiles from the previous step and we get a clean area that must be searched by the enemy.
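A flood fill restricted to the search area could look like the sketch below. The `is_walkable` callback is hypothetical, standing in for whatever collision map the game queries:

```python
TILE = 64  # assumed tile size, matching the earlier example

def flood_fill(start, candidate_tiles, is_walkable):
    """Keep only the tiles connected to `start` through walkable
    neighbours, just like a bucket fill only spreads through pixels
    of the clicked color. `is_walkable` is a hypothetical callback
    into the game's collision data."""
    candidates = set(candidate_tiles)
    if start not in candidates or not is_walkable(start):
        return []
    visited = {start}
    stack = [start]
    order = []
    while stack:
        x, y = stack.pop()
        order.append((x, y))
        # Spread to the four orthogonal neighbours, like the bucket tool.
        for n in ((x + TILE, y), (x - TILE, y), (x, y + TILE), (x, y - TILE)):
            if n in candidates and n not in visited and is_walkable(n):
                visited.add(n)
                stack.append(n)
    return order
```

Tiles behind a wall never get visited, so the returned list is exactly the “clean” area.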
Finding a path
The last thing left to do is to find a path to move around that selection of tiles. Because of how the flood fill algorithm works, the resulting array of tiles has a pretty well defined flow. The problem is that if we move from tile to tile it looks kind of silly and weird.
The solution I came up with is to start at the first tile and loop over the following tiles, checking the distance to each one. As long as the distance keeps growing, we keep going. The moment it gets smaller, we’ve found one of our key points. We repeat this process until we reach the end of the array.
This process guarantees that we will reach all the extremes of the selected area. Through testing this has proven to be more than reasonable, and in no case are there tiles left unexplored. Better than that, the movement through the search areas feels reasonable and logical, giving the player the feel that the enemies are looking around for him.
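This is my reading of the heuristic described above, sketched in Python (the exact bookkeeping in the game may differ): measure the distance from the last key point to each following tile; while it keeps growing we skip ahead, and the moment it shrinks, the previous tile was an extreme of the area, so we mark it as a key point.

```python
import math

def key_points(tiles):
    """Reduce a flood-fill tile ordering to the 'extreme' tiles the
    enemy should actually walk between. Hedged interpretation of the
    distance heuristic, not the game's actual code."""
    if len(tiles) < 2:
        return list(tiles)
    points = [tiles[0]]
    anchor = tiles[0]
    prev_d = 0.0
    for i in range(1, len(tiles)):
        d = math.hypot(tiles[i][0] - anchor[0], tiles[i][1] - anchor[1])
        if d < prev_d:
            anchor = tiles[i - 1]  # the previous tile is one of our key points
            points.append(anchor)
            prev_d = math.hypot(tiles[i][0] - anchor[0], tiles[i][1] - anchor[1])
        else:
            prev_d = d
    if points[-1] != tiles[-1]:
        points.append(tiles[-1])  # always finish at the end of the area
    return points
```

The enemy then paths between the key points instead of visiting every single tile, which avoids the silly tile-by-tile shuffle.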
I’ve tried a bunch of other things before reaching this solution and nothing worked the way I wanted. I could never have imagined that an algorithm that at first sight is so disconnected from game development could work, but it does. I guess there’s a lesson to be learned here.
Anyway, I thought it was an interesting thing to talk about and a good way to end this section of the devlog dedicated to enemy AI. Let’s see how the game evolves in the following weeks and what else we can talk about. At some point I want to talk about the more artistic/visual side of the game. We’ll see.
This is the fifth post on a series about the development of Arctic Edge, a stealth-action game I’m working on. If you missed the previous post click here or here for the whole series.
In the previous two posts I’ve been talking about how the enemies can detect the player through a simulation of senses (sight, hearing, and “presence”). Those senses are one part of the AI simulation, as they detect inputs that can translate into behaviour changes.
In this post I’m going to dive into that other part, the behaviour. To define those behaviours and control the changes between them I’m using a finite state machine (or at least my interpretation of it).
A finite-state machine (FSM) or finite-state automaton (FSA, plural: automata), finite automaton, or simply a state machine, is a mathematical model of computation. It is an abstract machine that can be in exactly one of a finite number of states at any given time. The FSM can change from one state to another in response to some inputs; the change from one state to another is called a transition. An FSM is defined by a list of its states, its initial state, and the inputs that trigger each transition.
From that definition we get that there’s only one state being executed at a time; in Arctic Edge’s case that could be something like patrolling an area, guarding a post, or similar. Then we have inputs, for example sight, that can trigger a change of state. Another option is the state having an exit: the state has a purpose, and when that purpose is accomplished it transitions to a predefined state. The state machine will “listen” to these changes and make a transition from one state to another when required.
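A minimal sketch of that kind of FSM, with both input-driven transitions and timed state exits, might look like this. All class and state names here are illustrative, not Arctic Edge’s actual code:

```python
class State:
    """Base state: `on_event` reacts to sense inputs, `update` lets a
    timed state exit on its own. Returning a state name triggers a
    transition; returning None stays in the current state."""
    def enter(self): pass
    def on_event(self, event): return None
    def update(self, dt): return None

class Patrol(State):
    def on_event(self, event):
        # The hearing sense firing pulls the enemy out of his patrol.
        return "suspicious" if event == "heard_noise" else None

class Suspicious(State):
    """A transition state in itself: after a short timer it exits."""
    def enter(self):
        self.timer = 0.5  # seconds; illustrative value

    def update(self, dt):
        self.timer -= dt
        return "investigate" if self.timer <= 0 else None

class StateMachine:
    def __init__(self, states, initial):
        self.states = states
        self.current = initial
        self.states[initial].enter()

    def handle(self, event):
        self._maybe_transition(self.states[self.current].on_event(event))

    def update(self, dt):
        self._maybe_transition(self.states[self.current].update(dt))

    def _maybe_transition(self, name):
        if name is not None:
            self.current = name
            self.states[name].enter()
```

Each frame the game would call `update`, and the senses would call `handle` when they fire.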
A look into the enemy AI
Let’s take a deep dive into an event or action and the behaviour sequence that is triggered:
An enemy is in the patrol state. There are 3 points in space and he moves from 1 to 2, then to 3, and finally back to 1, restarting the whole cycle.
The player runs past close to the enemy. He passes behind the enemy’s back, so the enemy can’t see him. The player is not close enough to trigger the “presence” sense, but he is within hearing range. The hearing sense is triggered by the noise of the steps and changes the patrol state to the suspicious state.
The state machine makes the transition to the new state. In the suspicious state the first thing we do is set up a very short timer, as this is a transition state in itself. Once a time t has passed, the enemy will transition to another state: investigate.
While the timer is running the state machine executes the basic behaviour of suspicious: look towards the source of the noise while maintaining his position.
Time has passed, and the state machine “exits” the suspicious state and enters the investigate state.
The new state makes the enemy move towards the area where he heard the sound. We recorded this location when the hearing sense was triggered.
Once the enemy reaches the location, a state exit is triggered. The next state will be look around.
In the look around state, well, the enemy looks around. This is again a timed state. In this case the enemy will look in different directions trying to find out what he heard. There are two possible outcomes: he sees the player and triggers a new state (combat), or he does not see the player and triggers the recover state. Let’s make it the latter.
In the recover state we move back to the original position, recorded at the moment we broke out of the patrol state.
Once he reaches the position, we “exit” to the patrol state and continue with the standard behaviour.
What I listed above is one of the possible chains of states that take part in the enemy AI. It’s a balancing act between inputs (the senses, getting shot) and state behaviours, with state exits (investigate after being suspicious, look around after investigate) and the initial state also playing a role in defining the overall behaviour of the enemies.
It’s a pretty interesting topic, although a pretty complicated one. I feel like enemy AI can be one of the most complex things to code in a game (not putting engine development in that box) and one that I’m just starting to get into.
I think using a state machine is the best solution for my game at the moment. I’m trying to keep the scope of the game small, and with a limited number of states I get the behaviour I’m looking for. Maybe in the future I will be able to explore other techniques like behaviour trees (they seem to be the standard nowadays), but that will be with another game, not Arctic Edge.
On the next post I will talk about one specific state: search. It took me a while to figure it out and the solution is quite interesting. I think that will be the last time on this series talking about the AI and I will move on to some other part of development. Until then, take care.
This is the fourth post on a series about the development of Arctic Edge, a stealth-action game I’m working on. If you missed the previous post click here or here for the whole series.
In my previous post I talked about how I solved the issue of enemies listening to noises created by the player. Even more important than that hearing sense is sight. Enemies must be able to see when the player is in front of them and react accordingly.
Seeing the player
For the enemies to see the player and react to his presence I have decided to use a sight cone, a “trick” used in multiple games like the Commandos saga, Mark of the Ninja, and many more.
The easiest way I’ve found to do this is to generate a triangle of length L and width W in the direction the enemy is looking and then check collisions with the player’s bounding box. Just creating the triangle has been a bit of a pain for me, as I’m not that good at math (I’m truly awful). Thankfully the internet is full of websites with great resources like this one. Following that recipe I managed to rotate the triangle around the enemy and check the collisions with the player’s object. If there’s no collision between the player and the triangle, the enemy continues with its current state. But if there’s a collision with the player, we start a new set of checks.
Now that we have determined that the player is intersecting or inside the vision cone, we need to see if there’s anything between the enemy and its prey. – Just to be clear, at this stage we still haven’t changed the enemy’s state. – Game Maker Studio 2 doesn’t have a raytrace function (I might be wrong), but it has a series of collision detection functions that come in handy. We determine a series of points on the player object and trace lines from the enemy’s position (or its head) to those points. Those are collision lines, and we check whether they collide not with the player but with a solid object. If the function returns true, it means there is a solid object between the enemy and the player, and therefore the enemy is not seeing the player.
As I said before, I trace more than one line between the enemy and the player. The reason is that checking, for example, from the enemy’s position to the player’s position is very limited. In that case, both legs of the player could be visible to the enemy, but because we would only be checking against the center of the object, the enemy would not know. To have a better system I’m currently using the four corners of the bounding box of the player object. That gives me a much better idea of what the enemy is actually seeing, though I have already decided to increase the number of points the enemy checks, adding at least another one at the center of the player object.
Once we’ve done all these checks (remember, 4 different collision lines), we count them. If more than 2 of the lines have returned negative (no collision with solid objects, so they reach the player), we change the enemy state to combat.
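The counting step could be sketched like this. The `line_blocked` callback is a hypothetical stand-in for a GMS2-style collision line check against solids, not the engine’s actual API:

```python
def sees_player(enemy_pos, player_bbox, line_blocked, needed=3):
    """Line-of-sight test once the player intersects the vision cone.
    `player_bbox` is (left, top, right, bottom); `line_blocked(a, b)`
    returns True when a solid object sits between points a and b
    (hypothetical callback standing in for the engine's check)."""
    left, top, right, bottom = player_bbox
    corners = [(left, top), (right, top), (left, bottom), (right, bottom)]
    # Count the lines that reach the player without hitting a solid.
    clear = sum(1 for corner in corners if not line_blocked(enemy_pos, corner))
    return clear >= needed  # more than 2 clear lines: switch to combat
```

Adding a fifth point at the player’s center, as mentioned above, would just mean one more entry in the list.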
Is that the player?
Our sight is not perfect. Not everything we see is perfectly clear. There are factors like the distance between us and objects, and objects in our periphery: things that we see, but more as a shape than as a clear object or entity. To implement this in the game I have added a second sight cone (or triangle), repeating the exact same process I described for the primary sight cone.
By adding a secondary cone we can add another state for the enemy, one in which he gets suspicious and goes to investigate. It’s an intermediate state between idle (patrol, guard a position, etc.) and straight-up combat. Having this can give way to interesting situations where you have been spotted by the enemy but have a chance to escape without engaging in combat, or where you even make the enemies suspicious on purpose to lure them away from the place they are guarding.
For me, from the moment I implemented this second sight cone (and the associated behaviours), I started to feel like the game was finally shaping up into something playable and engaging.
Letting the player know what’s going on
All this collision checking I’ve been talking about happens “under the hood”. The player can see the results of these interactions, but that is not enough. For example, in MGSV, when an enemy spots you, even from far away, the game immediately lets you know that there is a threat and where it is. To do so it uses a series of visual and audio cues, including an arrow pointing in the direction of the enemy that has seen you, animations on the enemy that make it clear they are suspicious, etc.
Obviously I can’t do as much as that game did, but I can take advantage of the 2D, top-down nature of my game to make it even more obvious while using fewer resources. As you might have seen in the Commandos screenshot at the beginning of this post, painting the sight cone of the enemies is an option when your game camera is not that close to the main character. And that is exactly what I’ve done.
To do so I had to go back to the raytracing issue, as I need to “cut” the cone when there are walls intersecting it. Again, thanks to the internet gods I found a simple way of doing so by combining a couple of built-in functions of GMS2. Because there are a couple of steps involved in the “painting” of this cone, I’m just going to list them:
Trace a ray from the point of origin of the triangle to one of the other vertices. This is how we do it:
Move 1px along the line between the 2 vertices
Check for collision with solid objects at this point
If there is a collision, update the vertex of the section and exit the loop. Repeat if there’s no collision.
If we reach the end of the line, end the loop
Once we have determined that side of the sub-triangle, we rotate that line n degrees around the origin.
Repeat point 1 for the second line.
Paint the sub-triangle with the information gathered.
Set the second line as line one of the next triangle.
Repeat from point 2 until the sum of the angles of all sub-triangles equals the angle of the collision triangle.
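The steps above could be sketched roughly like this. The `solid_at` callback is a hypothetical stand-in for GMS2’s point collision check, and the function names are made up for illustration:

```python
import math

def clip_ray(origin, angle, max_len, solid_at, step=1.0):
    """March 1px at a time along a ray from `origin` until we hit a
    solid object or reach the cone's full length; returns the clipped
    end point. `solid_at(x, y)` is a hypothetical collision callback."""
    ox, oy = origin
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_len:
        x, y = ox + dx * dist, oy + dy * dist
        if solid_at(x, y):
            break  # a wall cuts the cone short here
        dist += step
    return (ox + dx * dist, oy + dy * dist)

def cone_triangles(origin, facing, half_angle, length, solid_at, n=8):
    """Split the cone into n sub-triangles, clipping each edge ray
    against the walls; returns the vertex triples to paint."""
    start = facing - half_angle
    step_angle = (2 * half_angle) / n
    edges = [clip_ray(origin, start + i * step_angle, length, solid_at)
             for i in range(n + 1)]
    # Each consecutive pair of clipped edges forms one painted sub-triangle.
    return [(origin, edges[i], edges[i + 1]) for i in range(n)]
```

Keeping `n` low, as mentioned below, keeps the per-frame cost of all that pixel marching in check.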
This process sounds complicated, but it didn’t take me that long to get done. Once I had it figured out, I managed to paint the 2 sight cones – sight and peripheral sight – with ease. I’m keeping the number of sub-triangles low to avoid performance issues. I still have to fine-tune it to get the best results without compromising performance, but I’m happy with how it works.
Look behind you! A three-headed monkey!
I’m sure more than once you have felt a presence behind you, turned, and someone was there (hopefully a friendly face). Well, that “sixth sense” of presence is something that also plays a role in a stealth game and, of course, I felt like I had to add it to my game. Thankfully it was a much easier task than the one I have just described.
To implement this presence sense I only had to check collisions with the player within a small radius around the enemy. Once the player is detected, the enemy takes a few steps back in shock, looking at the player, and after a couple of tenths of a second the state changes to combat. Sweet and simple.
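The check itself is just a distance test; something like this sketch, where the radius value is illustrative:

```python
import math

def presence_triggered(enemy_pos, player_pos, radius=24):
    """'Sixth sense' check: the player simply has to be within a small
    radius of the enemy, in any direction, facing be damned.
    The radius value is an illustrative assumption."""
    return math.hypot(player_pos[0] - enemy_pos[0],
                      player_pos[1] - enemy_pos[1]) <= radius
```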
Coming up with all these implementations has been both a frustrating and a fun experience. I easily get desperate when math comes into play, making me feel dumb and boosting my impostor syndrome to the limit. But at the same time there is a great feeling of accomplishment when you manage to push through and find a solution for the task you are working on.
There’s much more work I’ve done in the past few weeks. I’ve coded a big part of the enemy states, and I have my main character moving around, shooting, throwing rocks, and a whole lot of other features. But I’ll leave that for another day.
This is the third post on a series about the development of Arctic Edge, a stealth-action game I’m working on. If you missed the previous post click here or here for the whole series.
Listening to the player
When I started thinking about how the enemies could “listen” to the player while he runs, I went for the quickest, simplest solution: detect a collision with the player inside a circle of radius r around the enemy, and if the player is running, alert the enemy.
That definitely works, but it only allows me to detect the player moving, nothing else. If I want to detect the player shooting, I have to add another conditional, and the same for every other noise-producing action. Besides being a messy and limited solution, it’s also kinda lazy.
To fix this I came up with something that seems to be working quite well. I’ve created a new object: the noise object. The enemies detect instances of this object and act accordingly, changing the enemy’s state.
The noise object is very simple. It has two important variables: a radius and a duration. Every time the object is instantiated (at position (x, y), for example wherever the player took his last step), the object grows linearly until it reaches the desired radius, and it does so in the defined time (duration). After reaching full size, the object destroys itself.
This approach is very flexible and helps me in many ways. For example, when the player is running I can create the noise object only when I’m actually playing the step sound (not implemented yet, but still). It also allows me to create steps of different intensity (radius and duration) depending on the material the player steps on. The same applies to gunshots. Not to mention that I’m not limited to the player’s position to create these noises: I could make an explosion that alerts everyone around, or throw a rock away from me to distract an enemy that is approaching.
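The noise object described above could be sketched like this (a minimal illustration, not the GMS2 object itself; enemies would test whether they fall inside the current radius each frame):

```python
class Noise:
    """A noise spawned at a position grows linearly to `max_radius`
    over `duration` seconds, then flags itself for destruction."""
    def __init__(self, x, y, max_radius, duration):
        self.x, self.y = x, y
        self.max_radius = max_radius
        self.duration = duration
        self.elapsed = 0.0
        self.alive = True

    def update(self, dt):
        self.elapsed += dt
        if self.elapsed >= self.duration:
            self.alive = False  # reached full size: destroy the instance

    @property
    def radius(self):
        # Linear growth from 0 to max_radius over the lifetime.
        return self.max_radius * min(self.elapsed / self.duration, 1.0)

    def reaches(self, x, y):
        """Does the noise currently reach the point (x, y)?"""
        return ((x - self.x) ** 2 + (y - self.y) ** 2) ** 0.5 <= self.radius
```

A footstep on snow and a gunshot are then just two instances with different `max_radius` and `duration` values.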
I’m aware that I’m not discovering anything new but it’s been fun to come up with this solution and prototype it into the game.
This is the second post on a series about the development of Arctic Edge, a stealth-action game I’m working on. If you missed the previous post click here or here for the whole series.
The entity, or parent entity, is the base I’m using for each of the game actors, like the player, enemies, or bullets. It has a series of properties and mechanisms that are common to all of these actors. For example, all entities have a speed on the X axis and one on the Y axis. All entities also have a max speed and an acceleration.
Besides common variables, I also have common behaviour shared between entities. I’m placing this common behaviour in the step event of the parent entity and leaving the begin step and end step events empty, to be used in the child entities. The way it works is as follows:
Begin step event: in each of the child entities it is used to define input. That means that in the player entity I take care of user input, while in the enemies a state machine defines which action will be performed on that frame.
Step event: All entities have a speed, direction, etc. and this event makes use of them to move the entity around, detect collisions, etc. This event is defined in the parent entity object.
End step event: as of now this event is also used exclusively in the child entities. It’s reserved mainly for managing animation and sfx, but it can also be used for other tasks.
The step event is equally divided into three sections or blocks of code that are executed in the following order:
Pre-movement: uses the forces defined in the begin step event to determine the speed of the object on both axes.
Move Y: Moves and handles collisions in the Y axis
Move X: Moves and handles collisions in the X axis
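The three blocks above could be sketched as a single step method. The `place_free` callback is a hypothetical stand-in for a GMS2-style collision query, and the clamping logic is an illustrative guess at the pre-movement block:

```python
class Entity:
    """Parent-entity step sketch: forces set in a child's begin step
    become speeds here, then movement is resolved one axis at a time."""
    def __init__(self, x, y, max_speed, accel, place_free):
        self.x, self.y = x, y
        self.xspeed = self.yspeed = 0.0
        self.max_speed, self.accel = max_speed, accel
        self.place_free = place_free  # hypothetical collision callback
        self.force_x = self.force_y = 0.0  # set in a child's begin step

    def step(self):
        # Pre-movement: turn forces into speeds, clamped to max_speed.
        self.xspeed = max(-self.max_speed,
                          min(self.max_speed, self.xspeed + self.force_x * self.accel))
        self.yspeed = max(-self.max_speed,
                          min(self.max_speed, self.yspeed + self.force_y * self.accel))
        # Move Y first, then move X, stopping when a solid is in the way.
        if self.place_free(self.x, self.y + self.yspeed):
            self.y += self.yspeed
        else:
            self.yspeed = 0.0
        if self.place_free(self.x + self.xspeed, self.y):
            self.x += self.xspeed
        else:
            self.xspeed = 0.0
```

Resolving the axes separately is what lets an entity slide along a wall instead of sticking to it.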
This is how it works as of now. Maybe I will have to add some more stuff in the future but right now I’m keeping it as simple as I can.
As you can see, it’s a pretty simple system, but it allows me to keep my code organized and delegate common actions to the parent entity. Is this the best approach? I honestly don’t know, but so far it seems to be doing what I need. In the next post I will dive deep into the enemy state machine and how I’m implementing its behaviour.
I’ve started working on my next game: a top-down action stealth game that I’m calling Arctic Edge. I was playing around with Pico-8 – the fun but somewhat complicated virtual console – and I came up with a little concept for the game.
I soon saw that things were getting a little complicated and I moved the project to Game Maker Studio 2. Even with the change, I’m going to try really hard to keep the scope as small as it was in the original Pico-8 version.
The game puts you in the shoes of a black ops agent who has to infiltrate a pharmaceutical complex in the north of Sweden. The lab has been taken over by a terrorist group for unknown reasons. Special ops, snowy environments and terrorists: all very Metal Gear Solid.
More than anything else, I’m excited about doing some work on the enemy AI. I hope I can do a decent job and create situations that are interesting for the player. I’m not going to do anything too complicated, though. As I said, I want to keep the scope small.
I’m going to be releasing more of these devlogs as I make progress on development. I think it can be very useful for me to be aware of what I do right and wrong. Let’s see how it goes 🙂
Camera and character control are two closely related aspects of games. Since the start of third-person 3D games, when the Sega Saturn and PlayStation came to life around 1994, cameras have been a pain for developers. Games like Tomb Raider, Resident Evil, Metal Gear Solid, and Super Mario 64 had different approaches that worked on some levels and failed on others. Since then, there have been some advancements on the subject. Resident Evil 4 introduced a new type of camera, closer to the player, that has become a standard for third-person shooters (especially thanks to the success of Gears of War). 3D platformers are almost extinct, but Nintendo still launches 3D Mario games regularly (thank god) and has tried different approaches. First, they evolved the camera seen in Super Mario 64: behind the character, but keeping some distance and shooting from higher ground. The other approach is the one seen in Super Mario 3D Land and, especially, Super Mario 3D World. This camera follows the character, but not from behind: it keeps a big distance and height from him, and only rotates when it makes sense in the context of the level.
My game camera
I’m explaining all this because that last approach is the one I’m following in The Curse of Pargo. I got to it through experimentation. Initially, my intention was to use an isometric view and leave full control of the camera angle to the player. This approach was seen recently in another Nintendo game, Captain Toad: Treasure Tracker.
My game has some similarities with Captain Toad’s game, but as soon as I implemented it I realized that this camera wouldn’t work. The main reason is that I’m making the game for PC, and rotating the camera with a keyboard sucks, and the mouse was out of the picture. Using the free camera was a pain, and as soon as there was any obstacle you were forced to rotate a camera that didn’t feel responsive or precise.
After failing with the free camera, I decided to try a camera with fixed angles. The camera was still manual, but the player could rotate between the 4 corners of the level. This approach wasn’t bad, but I felt it was limiting me to small levels for it to work properly. I’m not going to do huge levels, but I wanted to do something bigger than what you can see in the video above these lines.
Although I wasn’t happy with the result, I thought the idea of fixed camera angles could still work with some changes. The first change was to follow and rotate around the character instead of just the level, always keeping the same distance to the player. That allowed me to work with larger levels. To avoid a “mechanical” look, I decided to give the camera some delay with respect to the movement of the character. I also used a Unity function (Transform.LookAt) that makes an object (the camera) look at another (the player). This gave the camera a very smooth feel. Combining these smooth movements with the fixed angles/positions worked pretty well, but it also created an issue with the character control.
Camera based control
As I said at the beginning, camera and player control are closely related in 3D games. In this game I’ve worked with an 8-direction, camera-based control. That means that pressing up will always move you away from the camera, pressing down towards it, etc. That didn’t work that well with the free camera because of the keyboard, but now, with the camera constantly adjusting its position, it was an absolute mess. To avoid this issue I came up with a simple solution: I take the angle of the camera and find out which of the fixed positions/angles is currently active. The movement of the player is then conditioned by this fixed predefined angle, not the real camera angle. This approach works surprisingly well, and now I feel like the combination of camera and character control is good. The character feels responsive and coherent at all moments, while the camera system is simple, smooth, and works really well.
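Finding the currently active fixed angle can be sketched as a nearest-angle lookup. This is an illustrative Python sketch, not the Unity code; the four 45-degree-offset corner angles are an assumption:

```python
def active_fixed_angle(camera_angle, fixed_angles=(45, 135, 225, 315)):
    """Return the fixed camera angle closest to the camera's current
    (smoothly interpolating) angle; player input is then mapped
    relative to this snapped angle, not the real one. The four
    corner angles, in degrees, are illustrative assumptions."""
    camera_angle %= 360

    def circular_dist(a, b):
        # Distance between two angles, going the short way around.
        d = abs(a - b) % 360
        return min(d, 360 - d)

    return min(fixed_angles, key=lambda a: circular_dist(camera_angle, a))
```

Because the input mapping only changes when the nearest fixed angle changes, the controls stay stable even while the camera itself is still easing into position.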
Right now I’m experimenting with camera triggers. It’s a way of automating the camera rotation and making it easier for the player. I’m not 100% convinced by this approach, as it’s been seen in a number of games and it always has the same problem: the player doesn’t expect the camera to move, and that leads to deaths or annoying situations. I will continue experimenting with it as I create levels for the final game. I believe cameras in games should never get in the way, so hopefully I’ll make it work.
Below you can have a quick look at the current state of camera and control:
Hi there! I’ve been pretty busy for the past few weeks. After finally killing this, I took some time off from development. Coding is what I do for a living, so sometimes it’s difficult for me to find the inspiration and the will to go back to coding. It’s easier to relax, watch some Netflix, or play a game.
But the thing is that during the few weeks I took off I couldn’t stop thinking about what project to start next. I had a few ideas and finally decided to work on a puzzle game. This time I wanted to do things a bit better than last time. To do so, I created a little document defining the main aspects of the game. Last time I didn’t do anything like this, so I was working without direction. That’s a really bad idea, especially when you have a tendency to lose focus easily, like me.
The first thing I did was write a brief description of the game and what I intend to achieve with it. I came up with this:
Puzzle-adventure game. It takes elements from Captain Toad, Bomberman, and Minecraft. Every level is a puzzle that combines different mechanics like breaking blocks, collecting materials and keys, and fighting enemies. It’s a slow-paced game focused on combining the different mechanics. The goal of every level is to reach the final treasure chest.
With this on paper I started to describe the player’s controls:
Movement: 8-direction, camera-based control. That means that the directions of the character will change with the movement of the camera. The character’s movement and turning have to be fast, smooth, and precise so the player can react quickly to level events. Though the game won’t have much action, there will be some enemies and elements that can kill the player.
Bomb: bombs are not planted on the ground but thrown slightly forward. Could they be thrown further if the player holds the fire button?
Action: activate buttons or push boxes.
Structure: build and place a structure based on a recipe we collected before. Each recipe needs materials.
I also defined different types of objects and interactions that would be seen throughout the game:
Bombs: the player can throw them to break blocks or attack enemies.
Destructible blocks: the player can throw a bomb next to a block and destroy it with the explosion.
Bounce pads: Sonic-like platforms that launch the player up into the air.
Doubloons: like coins in Super Mario or rings in Sonic. Collect 100 to get a life.
Keys: colored keys that open specific doors, like in Doom and other games.
Construction blocks: every time you break a destructible block you’ll be able to collect a construction block. Use it later to build structures that help you finish the level.
Recipes: recipes will let you construct different structures to solve the puzzles.
Wall and floor buttons: used to activate mobile platforms or open doors.
Boxes: as in a lot of adventure games, you will be able to move boxes to help you reach an otherwise unreachable part of the level or to activate floor buttons.
Structures: when the player has acquired a recipe and enough construction blocks, he’ll be able to create structures that help him solve the levels.
Lastly, I wrote some lines regarding tone, setting and visual style:
The game will aim for a relaxed, fun style. The game is not child-oriented, so there can be swearing and violence. Something along the lines of LucasArts/Double Fine games.
Very colorful, cartoon-like. I feel something close to Sea of Thieves or Fable Legends characters could work really well. The environment will be pirate-themed and will probably use Monkey Island as the main inspiration.
As you can see, I decided to set the game in a pirate setting. I guess I’ve seen too much Black Sails and too many videos of Sea of Thieves 😛 Anyway, it’s something I quite like and I think it’ll fit well with the little story (more like an outline) that I have in mind.
Technology-wise I’m going to keep working with Unity. My previous project involved a lot of fighting with the engine, but now I feel I know how to deal with a lot of things. This time I want to focus more on the design and create a polished gameplay experience, and starting from scratch with another game engine would get in the way of that.
I’m also making the move to Maya LT. This version of the software is aimed at indie devs and, as far as I know, has all the features I might need. On the audio front I’m undecided about what to do. It’s an unexplored area for me, but this time I would like to do something better: make some music, sound effects, etc. We’ll see how that goes.
Over the following weeks I’ll keep talking about the process I’m following with this game. I hope you enjoy this kind of content, so let me know what you think!
Note: the main picture is by Glenn Carstens-Peters, see his work here.