Procedural

It’s been a really productive week: lots of code cleanup, which always feels so nice. Since refactoring doesn’t make for very interesting dev blogging, my goal for today was to have something new to show. And with just a few extra hours of work…

[Screenshot: Soulcaster3 2014-11-21 17-08-10-33]

When this room layout engine is more robust, I’ll do a more thorough post explaining how it works. You can tell what’s going on if you stare at it…

Realtime Editor Lighting

When I fix the problem with WP uploading GIFs, I can post this sort of thing directly. In the meantime, here’s the room editor, which now updates light sources and obstruction in real time as the room structure changes:

This week is mostly earmarked for editor work, so I should have some editor features to show pretty soon.

Debug Visualizations & Tile Blending

With each project, I put a bit more time into debug visualizations. I look for the things I spend a lot of time debugging (the places where I end up stepping through code) and make sure there’s a quick way to show how they’re working.

The most basic is the hitbox visualization, which shows world geometry (walls are red, trenches are yellow) as well as actor hitboxes. The monster hitboxes change color when their AI switches from pathfinding to Gauntlet movement (when they are in line of sight).
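For reference, the overlay’s color logic boils down to something like the sketch below; the wall and trench colors are the real ones, but the actor colors and state names are placeholders I made up for illustration.

```python
# Hypothetical sketch of the hitbox overlay's color choices.
# World geometry: walls draw red, trenches draw yellow.
GEOMETRY_COLORS = {"wall": "red", "trench": "yellow"}

def hitbox_color(ai_state):
    """Pick a debug color for a monster hitbox based on its AI state.

    The actual colors are a guess; the point is that the swap happens when
    the AI leaves pathfinding for direct "Gauntlet" pursuit.
    """
    return "orange" if ai_state == "gauntlet" else "green"
```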

Here you can see a few different layers of the pathfinding AI. The blue blob following the summoner is the vision radius: when monsters step onto it, they switch from pathfinding to walking directly toward the player. The numbers in the top right of each tile are the step values for the path map; they represent how many tiles away the target (in this case the summoner) is. The numbers in the bottom right of the blue blob are the priority levels for the vision field. Because summons also broadcast pathing and vision fields, each tile needs to resolve to a specific target based on distance.
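Conceptually, those step values are what you’d get from a breadth-first flood fill out from the target. Here’s a rough sketch of that idea; the walkable set and build_step_map name are placeholders, not the actual engine code.

```python
from collections import deque

def build_step_map(walkable, start):
    """Flood-fill a step map outward from `start` (the target, e.g. the summoner).

    `walkable` is a set of (x, y) tiles that monsters can path through.
    Returns a dict mapping each reachable tile to how many tiles away the
    target is, matching the numbers drawn in the top right of each tile.
    """
    steps = {start: 0}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in walkable and nxt not in steps:
                steps[nxt] = steps[(x, y)] + 1
                queue.append(nxt)
    return steps
```

A monster that isn’t on the vision blob just moves to whichever neighboring tile has the lowest step value, which is exactly why having the numbers drawn on screen makes this so much easier to debug.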

Finally, there’s the targeting radius. It’s built just like the vision radius above, but it isn’t obstructed by trenches, and the archer uses it to pick targets. In the previous Soulcaster games, NPCs fired off “line of sight” tracers to find targets. Those were expensive enough on the CPU that they had to be throttled to once or twice per second, which made the AI slightly less responsive. This system responds instantly and, by my guess, uses about a tenth of the CPU time the tracers did.
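Picking a target by reading such a field might look something like this sketch; it reuses the build_step_map idea from above (only the obstruction rule changes, since walls block the targeting field but trenches don’t), and pick_target plus its parameters are made up for illustration.

```python
def pick_target(archer_tile, target_fields, max_range):
    """Choose a target by reading precomputed step maps instead of tracers.

    `target_fields` maps each potential target (player, summons) to its own
    flood-filled step map. The archer simply reads the value at its own tile
    and takes whichever target is closest and within range.
    """
    best_target, best_distance = None, max_range + 1
    for target, field in target_fields.items():
        distance = field.get(archer_tile)
        if distance is not None and distance < best_distance:
            best_target, best_distance = target, distance
    return best_target
```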

Tile blending isn’t a debugging feature, but it’s something I just finished today and think is pretty cool. Since the levels are generated from basic elements, I have to teach the computer to do all the nice tile blending I did by hand in the previous games, which is a fun challenge. This test just shows 16 different blend tiles, picked from the four cardinal-direction neighbors. To get rid of the divots in the corners of solid walls, I’ll need to add support for all 8 adjacent tiles. This can be used all over the place, for both natural and artificial architecture.
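For the cardinal-direction case, the usual trick is a 4-bit mask: each solid neighbor sets one bit, and the resulting 0–15 index picks one of the 16 blend tiles. Here’s a sketch of that; blend_index and the solid set are placeholders rather than the game’s actual code.

```python
def blend_index(solid, x, y):
    """Return 0-15 based on which cardinal neighbors of (x, y) are solid.

    bit 0 = north, bit 1 = east, bit 2 = south, bit 3 = west.
    `solid` is a set of (x, y) coordinates for solid tiles.
    """
    index = 0
    if (x, y - 1) in solid: index |= 1  # north
    if (x + 1, y) in solid: index |= 2  # east
    if (x, y + 1) in solid: index |= 4  # south
    if (x - 1, y) in solid: index |= 8  # west
    return index
```

Folding the four diagonals into the mask is what will clean up the corner divots; the raw count jumps to 256 combinations, though the standard 8-neighbor “blob” approach collapses those down to 47 distinct tiles.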