Posted on May 22, 2017
Void Quest post-mortem
Over the two weeks from May 5 to May 19, I worked on an entry for Adventure Jam 2017. Leery of taking two weeks off from working on Cascade Quest, I decided I could use the time to:
- Explore the backstory for one of the characters in Cascade Quest, and maybe unblock some creative hurdles
- Use this opportunity to get some feedback on the input system and style of interactions in Cascade Quest
- Iron out some of the workflow issues and weak points in my engine.
To that end, I used the same engine (built on top of Unity) that I use for CQ. Over the course of the jam, I made a bunch of improvements (or maybe let's call them changes) to the engine to support new features.
Approach
I decided to do pretty much all the coding and puzzles first, and the art last. That turned out to be a good approach (for me, at least), since I find art to be more predictable in terms of the time it takes. It does mean, however, that for the first 10 days of the jam the game didn't really feel like a game. Everything was so ugly to look at that it took a lot of imagination to envision what the end product would be. Luckily, on a short project like this, motivation doesn't really have time to wane.
Shortcuts
I took a lot of shortcuts in order to get the amount of content in that I wanted. A few that I can recall:
- The player can climb over a fence as a quick way to another room. Instead of animating this, I just wipe to black and wipe to the new screen.
- There's a pickup truck whose engine needs to be started. To avoid having to draw door animations, I placed it so the driver-side door was hidden behind the truck from the viewer's perspective (and I think I claimed it was missing, to avoid having to deal with "open/close" interactions).
- There’s a scene where you bury something. Again, instead of animating this, I fade to black and just do sound effects.
- A number of objects you have to pick up have no visible onscreen representation. They are hidden in holes, in vehicles, and so on.
- I use a very constrained view of things, so that, for example, you never see your full cabin from the outside.
Issues with the parser and object interactions
I plan to do a post about this specific to Cascade Quest soon, but some of the suspicions I had about flaws in the way I'm doing things became more concrete.
I am trying to come up with a clean and easy way to map text parser input to verb-noun-noun. A while ago I changed from this kind of code:
(if (Said 'take/paper')
    ; give the paper to the player, etc...
)
to a more data-driven approach, where "features" are present in the current room and have a noun associated with them, plus a "doVerb" method that handles various actions on them. Then there is a more generic piece of code that inspects them and figures out the right feature whose doVerb should be called. This is more like how a point-and-click game would be structured.
It gets complicated of course, because “use gas can on truck” should map to “use-can-truck”. But so should “fill truck” (if you have the gas can). So features need a way to cleanly customize the way they map text input to verb-noun-noun. And it gets even more complicated when you have multiple objects with the same name. And even more complicated when you have inventory items that also have in-room representations.
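To make the idea concrete, here's a minimal sketch of the feature/doVerb structure, written in Python rather than the game's actual scripting language (all the names here, such as Feature, map_input, and dispatch, are illustrative stand-ins, not the real API):

```python
class Feature:
    """A thing in the room that the parser can target."""
    def __init__(self, noun, synonyms=()):
        self.noun = noun
        self.names = {noun, *synonyms}

    def map_input(self, verb, nouns, inventory):
        """Return (verb, other_noun) if this feature claims the input, else None."""
        if self.names & set(nouns):
            others = [n for n in nouns if n not in self.names]
            return (verb, others[0] if others else None)
        return None

    def do_verb(self, verb, other_noun=None):
        print(f"Nothing happens when you {verb} the {self.noun}.")


class Truck(Feature):
    def __init__(self):
        super().__init__("truck", synonyms=("pickup", "vehicle"))

    def map_input(self, verb, nouns, inventory):
        # "fill truck" (with the gas can in inventory) should behave
        # exactly like "use gas can on truck".
        if verb == "fill" and self.names & set(nouns) and "can" in inventory:
            return ("use", "can")
        return super().map_input(verb, nouns, inventory)

    def do_verb(self, verb, other_noun=None):
        if (verb, other_noun) == ("use", "can"):
            print("You pour the gas into the truck's tank.")
        else:
            super().do_verb(verb, other_noun)


def dispatch(verb, nouns, room_features, inventory):
    # The generic code: ask each feature in the room whether it claims the
    # parsed input, then call that feature's doVerb with the remapped verb/noun.
    for feature in room_features:
        mapped = feature.map_input(verb, nouns, inventory)
        if mapped:
            feature.do_verb(*mapped)
            return True
    return False


room = [Truck()]
dispatch("use", ["can", "truck"], room, inventory={"can"})  # use-can-truck
dispatch("fill", ["truck"], room, inventory={"can"})        # maps to the same action
```

Even in this toy version you can see where the complexity creeps in: each feature needs a hook to remap free-form input, and that hook has to know about inventory and about other nouns in the sentence.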
This topic is too large for this post-mortem, but I'll make a separate, more detailed post about it.
Improvements to the engine
I made a number of improvements:
- I now support per-pixel palette remapping. This is how I manage to get the glow around the player when they hold the lantern at night. The source sprites and background are all in normal color – but the global palette is switched to a dark blue one for the nighttime, and the glow sprite uses a pinkish palette just where it is (a rough sketch of the idea follows this list).
- Related to the above, sprites can disable drawing to the visual screen, and simply draw to the “per pixel palette index” screen to cause the palette to change in an area.
- I support post-processing effects (really I should just move everything to the GPU, but…). Originally I was planning to use this to make it rain, but I didn’t end up putting that in. The only place it’s used is for the thing that comes out of the hole during the first nighttime.
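Here's that sketch: a rough NumPy illustration of the per-pixel remapping idea (the buffer names, sizes, and palette contents are all made up – this isn't the engine's code, just the shape of the lookup):

```python
import numpy as np

# Illustrative buffers: every pixel has a color index into a 16-entry palette,
# plus a per-pixel palette index that sprites like the lantern glow can write
# to without touching the visual screen.
H, W = 64, 64
color_index   = np.random.randint(0, 16, size=(H, W))   # normal-color art
palette_index = np.zeros((H, W), dtype=np.intp)         # 0 = global palette
palette_index[20:44, 20:44] = 1                         # glow sprite wrote "1" here

ramp = np.arange(16, dtype=np.uint8) * 16               # simple brightness ramp
palettes = np.stack([
    np.stack([ramp // 4, ramp // 4, ramp // 2], axis=1),  # 0: dark blue night palette
    np.stack([ramp, ramp // 2, ramp // 2], axis=1),       # 1: pinkish glow palette
]).astype(np.uint8)                                       # shape (2, 16, 3)

# Final composite: each pixel looks up its color in whichever palette its
# palette-index screen selects.
frame = palettes[palette_index, color_index]              # shape (H, W, 3)
```

The important part is that the glow sprite only ever writes to the palette-index screen, which is what the second bullet above is describing.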
Art
Here I’ll just go over the backgrounds in the game, with my own comments on what could be improved (if you’re any kind of artist, I would love feedback in the comments).
A problem with many of my backgrounds in this game is the blankness of the ground. I need to have more texture and more variety (granted, this is the background without the sprites). I'm also not quite satisfied with the fence; it perhaps needs higher contrast.
I didn’t quite know what to put in the background behind the bushes. Originally it was a cliff, but I ended up changing it to abstract trees, because I thought I could make them look scarier at night.
I do like the moss on the roof.
Next up is the room to the south of this:
I like the framing I have here: the orange leaves and the tree silhouettes. I used jagged, sharp edges for the silhouettes to try to give them a creepier look (especially at night!). I drew this one second, and started to be more intentional about the framing.
I’m not happy with the tire tracks, but my limited palette was perhaps hurting me here. And again, the blank ground.
The burned house, I dunno. I needed to distinguish the wood planks, and that’s the darkest grey I have, so that’s really all I could use.
Next up is the front of your cabin:
Again, I used the orange leaves and menacing silhouettes to frame the scene. I needed some variety for the background – so not just bushes, but cliffs (they were also helpful for the story). I’m reasonably happy with the cliffs. I referenced Pedro Medeiros’ wonderful pixel art tutorials for these. Probably the moss in the cracks could have been done better.
Again, the blank plain grass and the relatively featureless tire tracks.
One thing I’m not happy about is the cabin. I think placing it straight on was not a good idea (although it made the door art easier). It would look more interesting if it were slightly off angle.
Finally, the cabin interior. I did this with less than 24 hours left in the jam. At first I wasn’t quite feeling it, then I got into a nice zone where I was really enjoying the art.
I was a little un-enthused when it looked like this to start with:
This was probably an hour’s worth of work, at least. Then I spent just a few minutes adding some “defects” to the walls and floors to break up the straight lines. Basically just drawing some black lines:
Suddenly, so much better! After doing this, I did a bit of the same to the cabin exteriors in the other backgrounds.
And with all the props:
Playtesting
My goal was to get a playtestable build ready by Wednesday (the 12th day of the jam), but that didn't happen until Thursday. I managed to get two people to play the game, for about 2.5 hours in total. This was absolutely essential and totally worth the distractions. I only wish I had gotten more people to play it. Even a simple parser-based game really needs a variety of people to try it to understand how different people will word things differently. As a result, the game is still fairly unpolished when it comes to accepting input. This kind of sucks, as one thing I want Cascade Quest to be good at is avoiding any "guess the word" issues.
WebGL build
I decided to build a WebGL version, since that gets more people playing the game. There were a few issues, however. One, which I've encountered before, is that Unity puts two backspace characters into Input.inputString every time the backspace key is pressed and released. This messed up the text input, obviously.
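My workaround was to collapse the doubled backspaces before applying the frame's input to the text buffer. Here's a rough sketch of that logic in Python (the engine itself is C#, and the function and flag names here are purely illustrative):

```python
def apply_input(buffer: str, input_string: str, dedupe_backspace: bool = True) -> str:
    """Apply one frame's worth of typed characters to the text buffer.

    On the WebGL build, each backspace press shows up as two '\b' characters,
    so optionally treat every pair of backspaces as a single one.
    """
    if dedupe_backspace:
        input_string = input_string.replace("\b\b", "\b")
    for ch in input_string:
        if ch == "\b":
            buffer = buffer[:-1]
        else:
            buffer += ch
    return buffer

print(apply_input("look at tre", "e\b\be"))  # -> "look at tree"
```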
The other, more devious issue was a "compiler bug". Or rather, some flaw in the pipeline Unity uses to generate C++ from IL, and then JavaScript from that. The end result was that one particular switch statement just did not work. After an hour of very slow-iteration debugging with Debug.Log (it took about 10 minutes to build the game, upload it to a server, and test it), I finally switched the logic to an if statement and that fixed the issue. I was lucky that the bug revealed itself in a very visual fashion (the sprites were only partially loading) – and it's kind of scary that a switch statement was just broken. Where else might that be happening? I'll need to come up with a simple repro project and submit the bug to Unity.
Posted on May 10, 2017
An Overview of Save Games
In the old days, Sierra was brutal when it came to save games. Death was around every corner, and it was up to the player to remember to manually save progress. Although Cascade Quest still has a good amount of death, that kind of unforgiving behavior isn’t really acceptable these days. On top of that, save games must survive patching – another complexity that wasn’t really an issue back in the day.
Surviving patches
Cascade Quest allows saving your game at almost any point. When this is done, the modifiable state of the world is serialized and saved to disk. This includes all global variables, all script variables, and all objects and their properties.
Changes to the game code can easily invalidate save games. Objects may be added or deleted, variables may be added or removed, and so on. Trying to reload an old save game with a changed codebase would create all sorts of problems.
Luckily, the way the game engine is set up, there is a limited amount of information that gets passed from room to room. When a new room is entered, all stateful scripts are unloaded, and then reloaded on demand. So the only state that gets transferred is the set of global variables. The new room is responsible for instantiating all objects that are relevant based on the global state.
And even within the global state, only a portion of it is relevant persisted state – basically a set of global flags and numbers (which indicate whether certain events have happened in the game, for instance).
Upon entering a room, this global state can be saved off somewhere. Then when the player saves a game, this data is saved alongside the rest of the save game data. If, when loading a save game, we see that it was for a different version of the game, we can instead load the global state that defined the game when the player entered the current room. The downside is that this puts the player back to when they entered the room (some UI will be needed to explain this, no doubt).
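Here's a simplified sketch of what that load-time decision looks like (the field names, version string, and JSON format are purely illustrative, not the actual serialization):

```python
import json

GAME_VERSION = "1.0.3"  # illustrative

def save_game(path, full_world_state, room_entry_globals):
    # Save both the full serialized world and the compact global state that
    # was captured when the player entered the current room.
    with open(path, "w") as f:
        json.dump({
            "version": GAME_VERSION,
            "world": full_world_state,                 # objects, script vars, etc.
            "room_entry_globals": room_entry_globals,  # flags and numbers only
        }, f)

def load_game(path):
    with open(path) as f:
        data = json.load(f)
    if data["version"] == GAME_VERSION:
        # Same build: restore the full set of objects and variables.
        return ("full", data["world"])
    # Different build: the serialized objects may no longer match the code,
    # so fall back to the room-entry snapshot of global flags/numbers and
    # put the player back at the start of the room.
    return ("room_entry", data["room_entry_globals"])
```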
Autosave
This is also a trivial way to implement autosave. Upon changing rooms, we save off the above-mentioned global state as the autosave. This does mean we need to be careful sometimes: there are cases when a player’s death is certain when they enter a room. In those cases, logic is needed at room startup to detect this and disable the autosave. These cases are rare, however.
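The corresponding room-entry hook might look roughly like this (again just a sketch; the certain-death flag stands in for the per-room logic mentioned above):

```python
def on_room_entered(room, global_state, autosave):
    # Snapshot the persisted globals for this room; skip the autosave when
    # the room knows the player is already doomed on entry.
    snapshot = dict(global_state)
    if not room.get("death_is_certain", False):
        autosave["room"] = room["id"]
        autosave["room_entry_globals"] = snapshot
    return snapshot
```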
Summary
Using that limited set of global state doesn’t completely absolve us of thinking about codebase changes affecting save games of course. Patches to the game can’t ever remove any global state – we can only add new flags or variables. So care must still be taken to have the new code correctly interpret the old code’s state. But a set of flags and variables is much easier to manage than an entire heap of instantiated objects and other state.
Posted on May 4, 2017
Text parser autosuggest
The text parser in Cascade Quest is similar to those in the old Sierra games, and decomposes a user’s input into a grammatical tree which is then matched against clauses in the game’s scripts.
This won’t be a post on how the text parser works, but instead on how the autosuggest is implemented. Typing in all your commands can be tedious, but autosuggest makes it a lot simpler.
The basics
The game has a list of words it understands (about 1400 currently). They are tagged with their grammar class (verb, noun, etc…). As the player is typing, we do a prefix match against words in the database. The player can tab to autocomplete the current suggestion, or arrow to the desired suggestion. Or they can just use the suggestions to ensure their spelling is correct.
To help avoid poor suggestions, we apply a ranking system to all matching results. Here are some of the heuristics (a rough sketch of the ranking follows this list):
- When suggesting completions for the first word in a sentence, we prefer verbs over nouns (a more complex system could analyze the grammar tree, but I don’t do that).
- Words that are in any currently loaded scripts are ranked higher.
- Words that are specifically used in the current room’s script are ranked even higher.
- There is a set of commonly-used preferred words that get a ranking boost. These include common verbs like ‘look’, ‘get’, and so on.
- The most recently-used words get a boost (this does introduce some unpredictability in the results, however).
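Here's that sketch: a stripped-down illustration of how the prefix match and ranking boosts might fit together (the word classes, boost values, and data structures are all made up – the real tuning comes from playtesting):

```python
def suggest(prefix, vocab, *, first_word, loaded_words, room_words,
            preferred, recent, max_results=5):
    """Rank prefix matches against the vocabulary.

    vocab: dict of word -> grammar class ("verb", "noun", ...).
    """
    results = []
    for word, word_class in vocab.items():
        if not word.startswith(prefix):
            continue
        score = 0
        if first_word and word_class == "verb":
            score += 4   # prefer verbs at the start of a sentence
        if word in loaded_words:
            score += 2   # word appears in a currently loaded script
        if word in room_words:
            score += 4   # word appears in the current room's script
        if word in preferred:
            score += 3   # common verbs like 'look', 'get'
        if word in recent:
            score += 1   # recently used
        results.append((score, word))
    results.sort(key=lambda t: (-t[0], t[1]))
    return [w for _, w in results[:max_results]]

vocab = {"look": "verb", "log": "noun", "lock": "noun", "lower": "verb"}
print(suggest("lo", vocab, first_word=True, loaded_words={"log"},
              room_words={"lock"}, preferred={"look"}, recent=set()))
```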
At some point, I'll probably have to implement a banned list too. The game understands some "naughty" and/or trademarked terms (or in some cases special tokens that aren't words) that I don't want to display in the autosuggest.
Another possibility might be to implement automatic spell-checking (i.e. correcting mistyped words after the fact). These are things that will be narrowed down once more thorough play-testing is done.
Does it spoil the spirit of the game?
When I first implemented this, I was worried about autosuggest giving away secrets in the game. I soon realized, however, that this isn’t a big problem.
Will it give away spoilers to some of the puzzles? For properly-designed puzzles, I have yet to see an instance where this is the case. If the autosuggest spoils things, that tends to mean you've got a "guess the word" puzzle – which is exactly what I'm trying to avoid. The possible interactions and objects around you should hopefully be obvious.
There are some cases when an object on-screen is hidden, and only revealed (and described) when you interact with another object. In that case, yes, the auto-suggest could give some clues. However, with a vocabulary of over 1400 words, it’s going to be difficult to gain much insight from the autosuggest results.
Possible improvements
Currently, the autosuggest is sourced from the compiled resources, which make no distinction between synonyms. For instance, all these words are treated equally: acquire, capture, catch, gather, get, grab, obtain, pick, take, want. A number of these are rarely used and can clutter the autosuggest results. In this particular case, 'get' and 'take' belong to our list of preferred words, so they would get boosted.
A better example is perhaps the following synonyms: cliff, hill, knoll, mountain, volcano. In a room with a volcano, the script source code will reference the term 'volcano'. This is the term I want promoted most heavily. However, all these terms are synonymous from the interpreter's point of view. I simply need the resource compiler to output more information about which words are actually used in source code. Playtesting will determine how much of a problem this clutter actually ends up being.
Posted on April 6, 2017
2d polygon pathfinding
Welcome to the first post in the dev blog for Cascade Quest! You can expect semi-regular posts about behind-the-scenes implementation details, new artwork, or any other interesting updates. Let’s get started on the first topic: pathfinding!
Character movement
In the original Sierra games (AGI, and early pre-VGA SCI), movement was accomplished by moving the player along straight lines: either with arrow keys, or by click-to-move with the mouse.
The character’s walkable boundaries were defined by a bitmap that indicated which pixels were walkable and which were not.
Later Sierra games (VGA SCI and up) and LucasArts’s SCUMM engine used more intelligent pathfinding to automatically route the player around obstacles. As these games were primarily played with the mouse, this more complicated implementation made sense.
Up to now, the Cascade Quest code had used the old system – only direct lines were supported. It was up to the user to move the player’s character around obstacles manually. This sort of made sense, as the keyboard is the primary input mechanism in this parser-based game.
There are problems with this, though. During cutscenes, it is necessary to move the player's character (or NPCs) to specific locations. To do so, I either need to disable collision checking for the actors, or be absolutely certain they are in a good position to reach the destination without bumping into something. Both of these are error-prone. They can result in bugs where the actor never reaches the destination and the cutscene is blocked – or, in the best case, I detect this and continue the cutscene in a visually broken manner, with the actor in a location that doesn't make sense.
Proper pathfinding
So I decided I needed to implement proper pathfinding. While back during the early Sierra days this may have been quite a magical feat (in fact, Sierra has a patent on the algorithm they use), today it’s a commonplace “solved” problem. Make a graph that connects adjacent walkable areas and run an A* algorithm over it.
A* is the easy part. Defining the walkable areas and the connections between them is the more challenging bit. Navmeshes are commonly used in 3d (or pseudo-3d) games. The latest version of Unity that I’m using supports runtime navmesh generation (and pathfinding on that navmesh), but it isn’t really suitable for me. It still requires 3d geometry (meshes or physics colliders). My source material is simply a 2d image with polygon boundaries:
A further complication is that Unity's navmesh generation requires knowing the agent size (the radius of the character moving through the environment). So the agent's "footprint" must be a circle. In Cascade Quest, though, the characters' footprints are wide but shallow (generally only 2 pixels high).
So basically I would rather just take my simple polygonal shapes (a collection of “barred” and “allowable” non-convex polygons) and generate a connectivity graph. One possibility is to generate a traditional navmesh by decomposing the polygon into triangles.
Another possibility is to use the concave vertices of a polygon. David Gouveia has a good article on this. Basically, by determining the interior line-of-sight connections between the concave vertices, you have all you need for pathfinding. Your start and end points (as long as they are within bounds) are guaranteed to connect to one of the concave vertices.
Here’s what this would look like for the screenshot above:
From there it’s a simple matter of determining the connection between the start/end points and the concave vertices, and then running A* over the graph nodes.
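To make that concrete, here's a condensed sketch of the pipeline for a single walkable polygon (Python, heavily simplified: blocking polygons would contribute their convex vertices as additional graph nodes, and the real code also needs point-in-polygon tests plus the robustness work described below):

```python
import heapq, math

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def concave_vertices(poly):
    """Reflex vertices of a counter-clockwise walkable polygon.
    (Blocking 'hole' polygons would contribute their convex vertices instead.)"""
    out = []
    n = len(poly)
    for i in range(n):
        prev, cur, nxt = poly[i-1], poly[i], poly[(i+1) % n]
        if cross(prev, cur, nxt) < 0:   # right turn on a CCW boundary => reflex
            out.append(cur)
    return out

def segments_intersect(p1, p2, q1, q2):
    """Proper crossing only: touching at shared endpoints doesn't count."""
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def line_of_sight(a, b, poly):
    n = len(poly)
    return not any(segments_intersect(a, b, poly[i], poly[(i+1) % n])
                   for i in range(n))

def find_path(start, goal, poly):
    # Graph nodes: start, goal, and the polygon's concave vertices.
    nodes = [start, goal] + concave_vertices(poly)
    edges = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if line_of_sight(nodes[i], nodes[j], poly):
                d = math.dist(nodes[i], nodes[j])
                edges[i].append((j, d))
                edges[j].append((i, d))
    # Plain A* over the visibility graph (straight-line distance heuristic).
    open_set = [(math.dist(start, goal), 0.0, 0, [0])]
    best = {}
    while open_set:
        _, g, i, path = heapq.heappop(open_set)
        if i == 1:                       # node 1 is the goal
            return [nodes[k] for k in path]
        if g >= best.get(i, math.inf):
            continue
        best[i] = g
        for j, d in edges[i]:
            heapq.heappush(open_set, (g + d + math.dist(nodes[j], goal),
                                      g + d, j, path + [j]))
    return None

# L-shaped walkable area (CCW); crossing the notch requires the reflex corner.
room = [(0, 0), (10, 0), (10, 10), (6, 10), (6, 4), (0, 4)]
print(find_path((1, 1), (9, 9), room))   # -> [(1, 1), (6, 4), (9, 9)]
```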
Implementation
My background image editor (and engine) supports three types of polygons:
- Polygons that dictate where actors can be
- Polygons that dictate where actors can’t be (these may appear inside the above polygons – for example the tree and sign-in box in the screenshots).
- Polygons where an actor can be only if the start or end point of their path is within them (used to route actors around dangerous areas, or areas where something might be accidentally triggered).
For the purposes of the algorithm, type 3 can be considered like type 2 if the actor is outside of them (otherwise they are ignored).
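In other words, before building the graph, the type 3 polygons get filtered based on the endpoints of the requested path – something like this sketch (point_in_polygon is a hypothetical helper, passed in to keep the example self-contained):

```python
def effective_blockers(type2_polys, type3_polys, start, end, point_in_polygon):
    # Type 3 polygons only block movement when neither the start nor the end
    # of the requested path lies inside them; otherwise they are ignored.
    blockers = list(type2_polys)
    for poly in type3_polys:
        if not (point_in_polygon(start, poly) or point_in_polygon(end, poly)):
            blockers.append(poly)
    return blockers
```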
So we basically have "yes, you can be here" and "no, you can't be here" polygons. What's more, these can intersect. I could force a restriction that they don't intersect, but sometimes I dynamically generate polygons in game (for obstacles that come and go, such as doors), so it's easier if I allow intersection. However, the algorithms discussed above don't work if the polygons intersect. Luckily, there is an easy-to-use polygon clipping library available: Clipper.
With Clipper, I can ensure I have a bunch of non-intersecting polygons, from which I can then apply the above algorithms.
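For what it's worth, that preprocessing boils down to a union/difference pass. Here's roughly what it looks like using pyclipper, a Python binding for Clipper (the exact API usage here is from memory, so treat it as a sketch rather than a reference):

```python
import pyclipper

def walkable_region(yes_polys, no_polys):
    """Combine all 'you can be here' polygons, then subtract the 'you can't
    be here' polygons, yielding non-intersecting output polygons."""
    pc = pyclipper.Pyclipper()
    pc.AddPaths(yes_polys, pyclipper.PT_SUBJECT, True)   # closed subject paths
    if no_polys:
        pc.AddPaths(no_polys, pyclipper.PT_CLIP, True)   # closed clip paths
    return pc.Execute(pyclipper.CT_DIFFERENCE,
                      pyclipper.PFT_NONZERO, pyclipper.PFT_NONZERO)

# Clipper works on integer coordinates, which suits the low-res game world.
yes = [[(0, 0), (320, 0), (320, 190), (0, 190)]]
no  = [[(100, 80), (140, 80), (140, 120), (100, 120)]]
print(walkable_region(yes, no))
```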
Here’s a gif of it in action:
Robustness
These algorithms generally function on floating point vertices. My game world deals with fairly low resolution integer values, however.
The “Pathfinding on a 2d Polygonal Map” article linked above mentions adding a tolerance to some of the calculations, but this really isn’t sufficient if your values are going to be rounded to integers and fed back into the algorithm.
In addition to paths determined by the method in that article, I also need to support traditional movement – that is, just moving in a particular direction. In that case, instead of finding a path from A to B, I need to ask the question "Am I in a valid area?". I won't get into the nitty-gritty details, but often a pathfinding result ends up making an actor pass through pixels that fail the "Am I in a valid area?" test. Likewise, sometimes pixels that pass the "Am I in a valid area?" test still cause the pathfinding algorithm to fail, because it can't find a line-of-sight connection to any of the polygon's concave vertices. To address these issues, I make the following concessions:
- In the game, I don’t run the “Am I in a valid area?” test if the actor is following a path generated by the pathfinding routines.
- When doing pathfinding, I always allow starting in an invalid area – it will generate a valid path that navigates the actor back into a valid area. This prevents the (terrible) scenario where the player would get stuck out of bounds.
- To accomplish the previous bullet point, I need to start from a point in the polygon that is valid. I perform the "Am I in a valid area?" test with zero tolerance. If it fails, I find the closest polygon edge and push the point a tiny amount to the other (valid) side of that edge (sketched below). It will then succeed in finding a line-of-sight connection to a concave vertex (necessary for successful pathfinding).
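The nudge itself is just a closest-point projection onto the nearest edge, plus a small push along that edge's interior-facing normal. A sketch, assuming a counter-clockwise walkable polygon (so the interior lies to the left of each directed edge); the function name and epsilon value are illustrative:

```python
import math

def nudge_inside(p, poly, epsilon=0.5):
    """Return p pushed slightly to the interior side of the closest edge."""
    best = None
    for i in range(len(poly)):
        a, b = poly[i], poly[(i + 1) % len(poly)]
        ax, ay, bx, by = *a, *b
        ex, ey = bx - ax, by - ay
        # Project p onto the edge segment a-b (clamped to the segment).
        t = max(0.0, min(1.0, ((p[0]-ax)*ex + (p[1]-ay)*ey) / (ex*ex + ey*ey)))
        cx, cy = ax + t*ex, ay + t*ey
        d = math.hypot(p[0]-cx, p[1]-cy)
        if best is None or d < best[0]:
            best = (d, cx, cy, ex, ey)
    _, cx, cy, ex, ey = best
    # The left-hand normal of the edge points toward the interior (CCW winding).
    length = math.hypot(ex, ey)
    nx, ny = -ey / length, ex / length
    return (cx + nx * epsilon, cy + ny * epsilon)
```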
Other notes
With proper pathfinding, I can also allow the player to automatically and reliably navigate to various objects in the scene. For instance, if there's an obvious and specific object on the ground (say, a wrench), I can automatically navigate the player over to the wrench to pick it up if they type "pick up wrench". This is arguably better than telling the player they aren't close enough and requiring them to manually navigate there and retype their command. I'll go over this in more detail in a subsequent post.