knight666/blog

My view on game development

How to flip input handling on its head with action mapping

Suppose you’re working on a space action game called “Actioneroids”, which sounds a bit like something your doctor would prescribe a cream for. You started from scratch and got something on the screen as fast as possible. You wrote some code in C++ to create a window, loaded some ship graphics and now you want to add player input.

For a first test, the player should be able to rotate the ship using the left and right arrow keys and accelerate using the up and down arrow keys.

Your first pass will probably look very similar to this:

Player::Player(Keyboard* a_Keyboard)
	: m_Keyboard(a_Keyboard)
	, m_Position(glm::vec2(0.0f, 0.0f))
	, m_Velocity(glm::vec2(0.0f, 0.0f))
	, m_Angle(0.0f)
	, m_Speed(0.0f)
	, m_TimeCooldown(0.0f)
{
}

void Player::Tick(float a_DeltaTime)
{
	if (m_Keyboard->IsKeyPressed(VK_LEFT))
	{
		m_Angle += 3.0f * a_DeltaTime;
	}
	if (m_Keyboard->IsKeyPressed(VK_RIGHT))
	{
		m_Angle -= 3.0f * a_DeltaTime;
	}
	if (m_Keyboard->IsKeyPressed(VK_UP))
	{
		m_Speed = glm::clamp(m_Speed + (1.5f * a_DeltaTime), 0.0f, 4.5f);
	}
	if (m_Keyboard->IsKeyPressed(VK_DOWN))
	{
		m_Speed = glm::clamp(m_Speed - (1.5f * a_DeltaTime), 0.0f, 4.5f);
	}

	m_Velocity = glm::vec2(glm::cos(m_Angle), glm::sin(m_Angle)) * m_Speed;
	m_Position += m_Velocity * a_DeltaTime;
	
	if (m_TimeCooldown > 0.0f)
	{
		m_TimeCooldown -= 0.1f * a_DeltaTime;
	}
	if (m_Keyboard->IsKeyPressed(VK_SPACE))
	{
		ShootBullet(m_Position, m_Velocity);
		
		m_TimeCooldown += 3.0f;
	}
}

After running the game, it appears everything works as intended. The player can rotate and move the ship using the arrow keys and fire with the spacebar. You relax in the knowledge of a job well done.

Rebinding keys

One of those pesky designers is at your desk. He says that the player input thus far is fine, but some players prefer using “WASD”. Alright, let’s add that as well:

void Player::Tick(float a_DeltaTime)
{
	if (m_Keyboard->IsKeyPressed(VK_LEFT) || m_Keyboard->IsKeyPressed('A'))
	{
		m_Angle += 3.0f * a_DeltaTime;
	}
	if (m_Keyboard->IsKeyPressed(VK_RIGHT) || m_Keyboard->IsKeyPressed('D'))
	{
		m_Angle -= 3.0f * a_DeltaTime;
	}
	if (m_Keyboard->IsKeyPressed(VK_UP) || m_Keyboard->IsKeyPressed('W'))
	{
		m_Speed = glm::clamp(m_Speed + (1.5f * a_DeltaTime), 0.0f, 4.5f);
	}
	if (m_Keyboard->IsKeyPressed(VK_DOWN) || m_Keyboard->IsKeyPressed('S'))
	{
		m_Speed = glm::clamp(m_Speed - (1.5f * a_DeltaTime), 0.0f, 4.5f);
	}

	m_Velocity = glm::vec2(glm::cos(m_Angle), glm::sin(m_Angle)) * m_Speed;
	m_Position += m_Velocity * a_DeltaTime;
	
	if (m_TimeCooldown > 0.0f)
	{
		m_TimeCooldown -= 0.1f * a_DeltaTime;
	}
	if (m_Keyboard->IsKeyPressed(VK_SPACE))
	{
		ShootBullet(m_Position, m_Velocity);
		
		m_TimeCooldown += 3.0f;
	}
}

Oh, but some players are using weird keyboard layouts like AZERTY and DVORAK, so what we really want is the ability to remap the keys.

Alright, it looks like we have to be a bit more invasive in our refactorings. Before we begin, let’s make a list of the requirements so far:

  • Player input is done using the keyboard.
  • The player ship can be moved using the arrow keys.
  • The player ship can be moved using another combination of keys.
  • All keys should be configurable.

If we spell out the requirements like that, it becomes a bit more obvious what should be done. First, we’ll make a struct that we can use for storing key bindings.

struct KeyBinding
{
	int key_first;
	int key_second;
};

Next, we’ll define a number of these structs for each of our key bindings: left, right, up, down and shoot.

Player::Player(Keyboard* a_Keyboard)
	: m_Keyboard(a_Keyboard)
	, m_Position(glm::vec2(0.0f, 0.0f))
	, m_Velocity(glm::vec2(0.0f, 0.0f))
	, m_Angle(0.0f)
	, m_Speed(0.0f)
	, m_TimeCooldown(0.0f)
{
	LoadDefaultKeyBindings();
}

void Player::LoadDefaultKeyBindings()
{
	m_BindingLeft.key_first = VK_LEFT;
	m_BindingLeft.key_second = 'A';
	
	m_BindingRight.key_first = VK_RIGHT;
	m_BindingRight.key_second = 'D';
	
	m_BindingUp.key_first = VK_UP;
	m_BindingUp.key_second = 'W';
	
	m_BindingDown.key_first = VK_DOWN;
	m_BindingDown.key_second = 'S';
	
	m_BindingShoot.key_first = VK_SPACE;
	m_BindingShoot.key_second = -1;
}

With a few small changes, our Player class can now support any key binding the players can think of.

void Player::Tick(float a_DeltaTime)
{
	if (m_Keyboard->IsKeyPressed(m_BindingLeft.key_first) || m_Keyboard->IsKeyPressed(m_BindingLeft.key_second))
	{
		m_Angle += 3.0f * a_DeltaTime;
	}
	if (m_Keyboard->IsKeyPressed(m_BindingRight.key_first) || m_Keyboard->IsKeyPressed(m_BindingRight.key_second))
	{
		m_Angle -= 3.0f * a_DeltaTime;
	}
	if (m_Keyboard->IsKeyPressed(m_BindingUp.key_first) || m_Keyboard->IsKeyPressed(m_BindingUp.key_second))
	{
		m_Speed = glm::clamp(m_Speed + (1.5f * a_DeltaTime), 0.0f, 4.5f);
	}
	if (m_Keyboard->IsKeyPressed(m_BindingDown.key_first) || m_Keyboard->IsKeyPressed(m_BindingDown.key_second))
	{
		m_Speed = glm::clamp(m_Speed - (1.5f * a_DeltaTime), 0.0f, 4.5f);
	}
	
	m_Velocity = glm::vec2(glm::cos(m_Angle), glm::sin(m_Angle)) * m_Speed;
	m_Position += m_Velocity * a_DeltaTime;
	
	if (m_TimeCooldown > 0.0f)
	{
		m_TimeCooldown -= 0.1f * a_DeltaTime;
	}
	if (m_Keyboard->IsKeyPressed(m_BindingShoot.key_first) || m_Keyboard->IsKeyPressed(m_BindingShoot.key_second))
	{
		ShootBullet(m_Position, m_Velocity);
		
		m_TimeCooldown += 3.0f;
	}
}

Different input methods

Marketing is at your desk this time. They say that “Actioneroids” is shaping up to be a blockbuster hit, but they’d like to release simultaneously for PC, Xbox 360 and iPhone. So the game will need to support keyboard, controller and touchscreen input methods.

Oh, and the controls will still be bindable, right?

You sputter and fluster. The game wasn’t designed for those input methods! It’s going to be a maintenance nightmare! And the keys have to be bindable?!

But you’ll give it your best shot anyway.

Let’s modify the constructor. We’ll make a new class, an InputHandler, that has handles to all input methods. We’ll pass this class to the Player class.

Player::Player(InputHandler* a_InputHandler)
	: m_InputHandler(a_InputHandler)
	, m_Position(glm::vec2(0.0f, 0.0f))
	, m_Velocity(glm::vec2(0.0f, 0.0f))
	, m_Angle(0.0f)
	, m_Speed(0.0f)
	, m_TimeCooldown(0.0f)
{
}
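
The InputHandler itself can be little more than a bag of device pointers. Something like this would do (a sketch; the getters simply match how the class is used below):

class InputHandler
{
public:
	InputHandler(Keyboard* a_Keyboard, Gamepad* a_Gamepad, Touchscreen* a_Touchscreen)
		: m_Keyboard(a_Keyboard)
		, m_Gamepad(a_Gamepad)
		, m_Touchscreen(a_Touchscreen)
	{
	}
	
	Keyboard* GetKeyboard() const { return m_Keyboard; }
	Gamepad* GetGamepad() const { return m_Gamepad; }
	Touchscreen* GetTouchscreen() const { return m_Touchscreen; }
	
private:
	Keyboard* m_Keyboard;
	Gamepad* m_Gamepad;
	Touchscreen* m_Touchscreen;
};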

The Tick method now has to account for all these different input methods, running on all these platforms. It’s… not going to be pretty.

void Player::Tick(float a_DeltaTime)
{
	Keyboard* kb = m_InputHandler->GetKeyboard();
	Gamepad* gp = m_InputHandler->GetGamepad();
	Touchscreen* ts = m_InputHandler->GetTouchscreen();

#if PLATFORM_PC
	if (kb->IsKeyPressed(m_BindingLeft.key_first) || kb->IsKeyPressed(m_BindingLeft.key_second))
#elif PLATFORM_XBOX
	if (gp->IsButtonPressed(GAMEPAD_BUTTON_DPAD_LEFT))
#elif PLATFORM_IPHONE
	if (ts->IsAreaTouched(glm::vec2(20.0f, 20.0f), glm::vec2(120.0f, 120.0f)))
#endif
	{
		m_Angle += 3.0f * a_DeltaTime;
	}
	
	// snipped for sanity
}

You feel dirty, but it works. On all platforms at that. Marketing loves it! And the designers too. But they are wondering if you could maybe add controller support for PC as well…?

Action mapping to the rescue

All we’ve done so far is query the state of the different input devices and react to their output. But that assumes the actions are bound in the same way. For example, if you’re using a controller to rotate the ship, you use values between 0 and 1. This allows fine-grained control of the ship’s movement. But on a keyboard, you don’t get a percentage value for a key. You get 0 (nothing) or 1 (maximum). When you don’t take this into account, you can end up with a solution that works great with a controller, but feels awful when using a keyboard and mouse.

So what is action mapping and how does it help?

When using action mapping, you check for an action, but don’t care about the input. In the example, we already have four actions: rotate left, rotate right, accelerate and decelerate. The action mapper takes the name of an action and returns a normalized value as a float. Internally, it queries the different input methods and converts their values to the expected output.

For our action names, we will use strings. But don’t feel constrained. You can use incrementing integers, hashed strings or anything else, as long as it is unique for that action.

static const std::string g_Action_Player_RotateLeft = "Action_Player_RotateLeft";
static const std::string g_Action_Player_RotateRight = "Action_Player_RotateRight";
static const std::string g_Action_Player_Accelerate = "Action_Player_Accelerate";
static const std::string g_Action_Player_Decelerate = "Action_Player_Decelerate";
static const std::string g_Action_Player_Shoot = "Action_Player_Shoot";

Note that some of the actions can be combined. A rotation to the left is negative, while a rotation to the right is positive. So a rotation can be mapped to -1…0…1. The same is true for acceleration.

Now we only need three actions:

static const std::string g_Action_Player_Rotation = "Action_Player_Rotation";
static const std::string g_Action_Player_Acceleration = "Action_Player_Acceleration";
static const std::string g_Action_Player_Shoot = "Action_Player_Shoot";

We’ll put these action names in a header called “ActionNames.h”.

Without looking at the implementation for the action mapper just yet, what will the Player class look like now? A lot simpler:

Player::Player(ActionMapper* a_ActionMapper)
	: m_ActionMapper(a_ActionMapper)
	, m_Position(glm::vec2(0.0f, 0.0f))
	, m_Velocity(glm::vec2(0.0f, 0.0f))
	, m_Angle(0.0f)
	, m_Speed(0.0f)
	, m_TimeCooldown(0.0f)
{
}

void Player::Tick(float a_DeltaTime)
{
	m_Angle += 3.0f * m_ActionMapper->GetAction(g_Action_Player_Rotation) * a_DeltaTime;
	
	m_Speed += 1.5f * m_ActionMapper->GetAction(g_Action_Player_Acceleration) * a_DeltaTime;
	m_Speed = glm::clamp(m_Speed, 0.0f, 4.5f);
	
	m_Velocity = glm::vec2(glm::cos(m_Angle), glm::sin(m_Angle)) * m_Speed;
	m_Position += m_Velocity * a_DeltaTime;
	
	if (m_TimeCooldown > 0.0f)
	{
		m_TimeCooldown -= 0.1f * a_DeltaTime;
	}
	if (m_ActionMapper->GetAction(g_Action_Player_Shoot) > 0.0f)
	{
		ShootBullet(m_Position, m_Velocity);
		
		m_TimeCooldown += 3.0f;
	}
}

Internally, our action mapper will ask each of its handlers: do you recognize this action? If so, what value is it? Only one of the handlers gets to decide the output value, so the order is important.

float ActionMapper::GetAction(const std::string& a_Name) const
{
	float value = 0.0f;
	
	for (std::vector<IInputHandler*>::const_iterator handler_it = m_InputHandlers.begin(); handler_it != m_InputHandlers.end(); ++handler_it)
	{
		IInputHandler* handler = *handler_it;
		
		if (handler->GetAction(a_Name, value))
		{
			break;
		}
	}
	
	return value;
}
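
For reference, the declarations that snippet assumes can be as small as this (a sketch; AddHandler is a name I made up, only GetAction appears in the code above):

#include <string>
#include <vector>

class IInputHandler
{
public:
	virtual ~IInputHandler() { }
	
	// Returns true if this handler recognizes the action and writes
	// the action's current value to a_Value.
	virtual bool GetAction(const std::string& a_Name, float& a_Value) = 0;
};

class ActionMapper
{
public:
	void AddHandler(IInputHandler* a_Handler) { m_InputHandlers.push_back(a_Handler); }
	
	float GetAction(const std::string& a_Name) const;
	
private:
	std::vector<IInputHandler*> m_InputHandlers;
};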

Let’s look at the handler for the keyboard, because it was the first one we added. The implementation for the virtual GetAction method should compare the name of the action to the ones it knows. Some actions may still be platform or input-method specific.

bool KeyboardHandler::GetAction(const std::string& a_Name, float& a_Value)
{
	if (a_Name == g_Action_Player_Rotation)
	{
		if (m_Keyboard->IsKeyPressed(m_BindingLeft.key_first) || m_Keyboard->IsKeyPressed(m_BindingLeft.key_second))
		{
			a_Value = -1.0f;
		}
		else if (m_Keyboard->IsKeyPressed(m_BindingRight.key_first) || m_Keyboard->IsKeyPressed(m_BindingRight.key_second))
		{
			a_Value = 1.0f;
		}
		
		return true;
	}
	else if (a_Name == g_Action_Player_Acceleration)
	{
		if (m_Keyboard->IsKeyPressed(m_BindingUp.key_first) || m_Keyboard->IsKeyPressed(m_BindingUp.key_second))
		{
			a_Value = 1.0f;
		}
		else if (m_Keyboard->IsKeyPressed(m_BindingDown.key_first) || m_Keyboard->IsKeyPressed(m_BindingDown.key_second))
		{
			a_Value = -1.0f;
		}
		
		return true;
	}
	else if (a_Name == g_Action_Player_Shoot)
	{
		if (m_Keyboard->IsKeyPressed(m_BindingShoot.key_first) || m_Keyboard->IsKeyPressed(m_BindingShoot.key_second))
		{
			a_Value = 1.0f;
		}
		
		return true;
	}
	else
	{
		return false;
	}
}

It looks very similar to our earlier incarnation, doesn’t it? Note that even if a button is not pressed, the method returns true. That’s because the return value indicates “hey I know this action!” instead of “the user is doing this action”.

The major advantage is that it is now extremely easy to add a new input method. Simply build a new InputHandler and add it to the action mapper.
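
For example, a gamepad handler could map the rotation action straight onto an analog stick, returning the full -1 to 1 range instead of only the extremes. A sketch (GetLeftStickX, GetLeftStickY and GAMEPAD_BUTTON_A are made-up names, not from the code above):

bool GamepadHandler::GetAction(const std::string& a_Name, float& a_Value)
{
	if (a_Name == g_Action_Player_Rotation)
	{
		// the stick already reports a value between -1 and 1
		a_Value = m_Gamepad->GetLeftStickX();
		return true;
	}
	else if (a_Name == g_Action_Player_Acceleration)
	{
		a_Value = m_Gamepad->GetLeftStickY();
		return true;
	}
	else if (a_Name == g_Action_Player_Shoot)
	{
		a_Value = m_Gamepad->IsButtonPressed(GAMEPAD_BUTTON_A) ? 1.0f : 0.0f;
		return true;
	}
	
	return false;
}

Register it with the action mapper and the Player class never has to know it exists.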

It’s not a silver bullet

You know how these posts go. This is a typical “I found a hammer, now everything can be treated as a nail!” post. I’m here to tell you that that is not true. There are distinct and clear disadvantages you must consider before implementing action mapping.

It’s a performance hit

You can’t expect to get the same performance when you’re comparing a string (an action) every frame instead of looking up a boolean (key pressed). It can be mitigated by comparing unique identifiers instead of strings, but you’ll still have to evaluate every incoming action request.
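
If the string comparisons ever show up in a profile, one common mitigation is to hash the action names once and compare integers from then on. A sketch using FNV-1a (my choice of hash, not something from the code above):

#include <cstdint>

// FNV-1a: a small, fast string hash, good enough for action identifiers.
inline uint32_t HashActionName(const char* a_Name)
{
	uint32_t hash = 2166136261u;
	while (*a_Name != '\0')
	{
		hash ^= static_cast<uint32_t>(*a_Name++);
		hash *= 16777619u;
	}
	return hash;
}

// hash once at startup, compare integers every frame
static const uint32_t g_Action_Player_Rotation_Id = HashActionName("Action_Player_Rotation");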

It’s more work

Games have been built and shipped with direct input mapping. It’s not a huge sin to use it. If you only plan to support keyboard and mouse for instance, it’s a lot of wasted effort to abstract that away behind a tree of interfaces.

It’s harder to debug

When you’re doing direct input mapping, it’s easy to set a breakpoint and inspect the value of an input. Was button A pressed? Yes, so says the KeyboardHandler. But when you use action mapping, it’s a lot harder to find your action in a sea of unrelated ones. The best approach is divide-and-conquer: split the GetAction method into multiple submethods, which only expect a small range of actions.
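
One way to do that split (a sketch that restructures the KeyboardHandler from earlier; GetMovementAction and GetCombatAction are hypothetical submethods, and the grouping is up to you):

bool KeyboardHandler::GetAction(const std::string& a_Name, float& a_Value)
{
	// each submethod only knows about a handful of related actions,
	// so a breakpoint inside it only triggers for those
	if (GetMovementAction(a_Name, a_Value)) { return true; }
	if (GetCombatAction(a_Name, a_Value)) { return true; }
	
	return false;
}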

Not all input can be mapped in the same manner

In our game, we could have a guided missile. With the controller, you guide the missile using the right stick. When using mouse and keyboard, the missile homes in on the cursor position. Obviously, these actions cannot be mapped in the same manner. The controller uses a velocity for the missile to steer it, while the mouse sets the position to home in directly.

For these situations, it is often best to have two sets of actions, where each set is implemented by one input method, but ignored by the other.
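
One way to spell that out is to give each input method its own set of actions and have the other method simply return false for them (hypothetical names for the sake of the example):

// implemented by the gamepad handler, ignored by the mouse handler
static const std::string g_Action_Missile_SteerX = "Action_Missile_SteerX";
static const std::string g_Action_Missile_SteerY = "Action_Missile_SteerY";

// implemented by the mouse handler, ignored by the gamepad handler
static const std::string g_Action_Missile_TargetX = "Action_Missile_TargetX";
static const std::string g_Action_Missile_TargetY = "Action_Missile_TargetY";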

Conclusion

Even with these downsides, I hope I’ve shown with a clear and concrete example what the benefits are: it’s easier to add new types of input, which means it’s easier to port to other platforms.

Nine-patches and text rendering

I promised I’d post more often and I guess I’m keeping my promise by posting… twice a month!

For the past week and a half I’ve been focusing on the interface of my game. Because I’m targeting such an old platform, I pretty much have to implement everything myself. But that’s okay, because it’s fun! Although I can’t use it directly, I’m using the Gameplay engine as inspiration and blatantly stealing the bits I like.

The reason I’m putting the focus on my interface is because it looks like this:

I can render text and some buttons, but that’s pretty much it. Moving on, I’ll have to put in some more engine work in order to support all the features I want. For example, I want to be able to load my entire interface from a text file and connect the events (button clicked, text entered, etc.) to my game logic.

Additionally, I want the ability to specify a margin, border and padding for my controls. This is known as the box model and it comes from CSS. It’s really easy to explain too:

  • Every element on the screen is put into a box.
  • Boxes are placed together as closely as possible without overlapping.
  • Boxes can be placed inside other boxes.
  • The distance between boxes is known as the margin.
  • The inside of the box is known as the content.
  • The edge around the box is known as the border.
  • The distance between the border and the content is known as the padding.

Using these simple rules, you can build pretty much any interface you want!
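
In code, that boils down to storing three sets of offsets per control. Something like this (the field names are mine):

struct BoxModel
{
	// distance to neighboring boxes
	float margin_left, margin_top, margin_right, margin_bottom;
	
	// thickness of the edge drawn around the box
	float border_left, border_top, border_right, border_bottom;
	
	// distance between the border and the content
	float padding_left, padding_top, padding_right, padding_bottom;
};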

The next thing on my wishlist was the nine-patch. This is a box with a border in which the corners don’t deform when you stretch the content area.
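
All a nine-patch really needs to store is where the four cuts in the source image are; the corners, edges and center follow from those (a sketch with names of my own, not the Gameplay engine’s):

struct NinePatch
{
	// the source image and the four cuts (in pixels) that split it into
	// nine regions: corners stay fixed, edges stretch along one axis,
	// the center stretches along both
	Texture* texture;
	float cut_left;
	float cut_top;
	float cut_right;
	float cut_bottom;
};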

The most difficult part was figuring out how I was going to store all this information. Ultimately I looked at how Gameplay does it and copied that. The result:

Terrible programming art aside, it works! Next up, I attacked the text rendering. It’s a small change, but I can now center the text horizontally and vertically:

I also cleaned up the rendering in general. I am now able to draw text with multiple lines (!) and the code is general enough to fit in the base “Engine” project instead of the Windows Mobile 6-specific renderer. Unfortunately, working for yourself means you have to be tough. So the old text rendering code is still in there, but will be removed “eventually”.

This post has gotten too damn long again. I’m doing so much exciting stuff, I need to talk about it more!

Three months of development

On January the 24th I made the first commit. I had decided to finally build a game on my favorite crazy platform: Windows Mobile 6.1.

Windows Mobile is exactly as the name implies: Windows for your phone. And it programs like that too. You create a window with “CreateWindow”, you can render using GDI, DirectDraw or Direct3D and sound is done using MMSYSTEM or DirectSound. Programming for this platform is a lot like programming for Windows.

However, it is a mobile platform, so people can play on the go. And it is old. My development device is an HP iPAQ PDA. Remember those? You’d lug them around to keep track of your day-to-day planning and they had wifi so you could check your e-mail. Eventually they would get replaced, first by Blackberries and finally by smartphones in general.

The device has, I estimate, a 300 MHz processor and 64 MB of RAM. It has a 240 x 320 pixel screen without multi-touch. You need to use a supplied pen to get it to actually register a click. And although it doesn’t sound like a beast of a machine, it’s enough to run Age of Empires II and a crappy version of Call of Duty.

So why pick this platform? Because it’s a challenge. I’ve been meaning to make something for it ever since I first got it. I have the skills, all I need now is persistence. Every day, before work, I launch Visual Studio and try to get something done. Last week I worked on level loading. I was using a hardcoded level definition that just didn’t cut it anymore. So I turned to my favorite serialization library: Protobuf.

It was a bit of a pain to set up, but it works! I can use protobuf-lite on Windows Mobile!

So naturally, after finishing the level loader, I converted my font loading to Protobuf too. And my sprites as well…

I got a bit carried away is all I’m sayin’.

My plan is to keep you posted by posting small updates every few days. I have a tendency to write gargantuan blog posts and those take a loooooong time to write. So by keeping it short I hope to increase frequency in posting. ;)

Serialization using Protobuf

For the past few months I’ve been working on a game I’d like to call “Alpha One”, but which is still called “Duck Hunt” for now.

Exciting top-down action! Programmer graphics! Unfinished gameplay!

I’ve been working on it in the train, with a mindset of: just get it done. I’m not really bothering with architecture all that much, I just want a working prototype. And that mindset is necessary, because every day I only get an hour and a half, split in two, to work on it.

For the past month I’ve been working on storing the game state to disk. This is due to the advice of @ivanassen, who worked on Tropico 4 (and a whole slew of other games). His advice was to start working on the subsystem that stores the game state to disk as soon as possible, because it touches everything.

What I’ve found is that he’s absolutely right.

So, what do I need to store to disk for Alpha One?

  • Background – My background is divided into layers, each layer containing items. These are static and won’t change throughout the game.
  • Cameras – Right now I have only two cameras: one for the game and one for the editor. But I would like to store their position and orientation in the game save as I might add more cameras later.
  • Lights – I don’t have any right now, but I definitely will in the future.
  • Objects – Everything that’s moving and interacting. Right now I have only three classes: Player, Enemy and Bullet. And even that has proven to be a headache.

XML is terrible for a lot of things, and this is one of them

My first idea for storing the game state was to simply write it to XML. This was before I really researched serialization in games. This is what that looked like:

<Level name="Generated">
    <Background>
        <Layer level="0">
            <Item name="Back">
                <Sprite>avatar.png</Sprite>
                <Pivot>0.500000 0.500000</Pivot>
                <Position>0.000000 0.000000</Position>
                <Rotation>0.000000</Rotation>
                <Scale>1.000000</Scale>
            </Item>
        </Layer>
    </Background>
	<Objects>
        <Object type="Player" id="0" owner="-1">
            <Position>320.000000 240.000000</Position>
            <Velocity>0.000000 0.000000</Velocity>
        </Object>
        <Object type="Enemy" id="1" owner="-1">
            <Position>552.673096 360.225830</Position>
            <Velocity>0.000000 0.000000</Velocity>
            <Health>100.000000</Health>
        </Object>
    </Objects>
</Level>

The signal-to-noise ratio here is okay. It’s a lot of fluff around your actual data, but not very troublesome to actually parse. However, this is how I saved my BackgroundItem class to the file:

	bool BackgroundItem::Save(tinyxml2::XMLElement* a_Element)
	{
		tinyxml2::XMLDocument* doc = a_Element->GetDocument();

		tb::String temp(1024);

		tinyxml2::XMLElement* ele_item = doc->NewElement("Item");
		ele_item->SetAttribute("name", m_Name.GetData());

		if (m_Sprite)
		{
			tinyxml2::XMLElement* ele_item_sprite = doc->NewElement("Sprite");
			ele_item_sprite->InsertFirstChild(doc->NewText(m_Sprite->GetName().GetData()));
			ele_item->InsertEndChild(ele_item_sprite);

			tinyxml2::XMLElement* ele_item_pivot = doc->NewElement("Pivot");
			temp.Format("%f %f", m_Pivot.x, m_Pivot.y);
			ele_item_pivot->InsertFirstChild(doc->NewText(temp.GetData()));
			ele_item->InsertEndChild(ele_item_pivot);
		}

		tinyxml2::XMLElement* ele_item_position = doc->NewElement("Position");
		temp.Format("%f %f", m_Position.x, m_Position.y);
		ele_item_position->InsertFirstChild(doc->NewText(temp.GetData()));
		ele_item->InsertEndChild(ele_item_position);

		tinyxml2::XMLElement* ele_item_rotation = doc->NewElement("Rotation");
		temp.Format("%f", m_Rotation);
		ele_item_rotation->InsertFirstChild(doc->NewText(temp.GetData()));
		ele_item->InsertEndChild(ele_item_rotation);

		tinyxml2::XMLElement* ele_item_scale = doc->NewElement("Scale");
		temp.Format("%f", m_Scale);
		ele_item_scale->InsertFirstChild(doc->NewText(temp.GetData()));
		ele_item->InsertEndChild(ele_item_scale);

		a_Element->InsertEndChild(ele_item);

		return true;
	}

It looks bad, it feels bad and it’s very cumbersome to add new variables to this definition. What doesn’t help is that everything uses strings, so I first have to convert my floats to a string before I can store them.

I was also starting to worry about security and performance. TinyXml2 is blazing fast, but my levels would grow in size very quickly. On top of that, storing your game state as plaintext is a bad idea. It’s practically begging to be messed with. However, I didn’t really look too much into these problems, my main concern was just getting it to store my game state to a file.

What I noticed, however, was that every time I made a relatively minor change to my XML, like putting the Object’s id in an attribute instead of a child node, I would have to change massive amounts of code. It was bothering me, but not enough to actually do something about it. But then I wanted to change my Camera’s position from a Vec3 (one value) to a JuicyVar<Vec3> (three values). And that was such a nightmare that I finally sat down to research serialization.

So that’s how you serialize your data…

What I found was magnificent. Google has an open source project called Protocol Buffers (Protobuf for short) that they use internally for all their projects.

The basics come down to this: instead of describing what your data is, why not describe what your data looks like?

Alright, an example. This would be a position stored in XML:

<Position>0.0 100.0 -10.0</Position>

Now, this is what the same data looks like in Protobuf’s text format:

position {
	x: 0.0
	y: 100.0
	z: -10.0
}

This looks much cleaner in my opinion. It only specifies the name of the field once and it labels the values.

This would be the code to parse the XML version:

tinyxml2::XMLElement* ele_pos = a_Element->FirstChildElement("Position");
if (ele_pos)
{	
	sscanf(ele_pos->GetText(), "%f %f %f", &m_Position.x, &m_Position.y, &m_Position.z);
}

While this would be the code to parse the protobuf version:

if (a_Element.has_position())
{
	m_Position.x = a_Element.position().x();
	m_Position.y = a_Element.position().y();
	m_Position.z = a_Element.position().z();
}

That’s quite a difference! But how does it work?

The secret is in the sauce

Like I said, a .proto file is nothing but a definition of what your data looks like. Here would be the definition for the above data:

package PbGame;

message Vec3
{
	required float x = 1;
	required float y = 2;
	required float z = 3;
}

message Object
{
	optional Vec3 position = 1;
}

This .proto file is fed to protoc.exe, which converts the file to a header (.pb.h) and implementation (.pb.cc). Now you can include those generated files in your project and use them to parse the data.
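
For example, assuming the definition above is saved as game.proto, the invocation looks something like this:

protoc --cpp_out=. game.proto

That produces game.pb.h and game.pb.cc next to the .proto file. (For protobuf-lite, which this game uses, the definition would also want option optimize_for = LITE_RUNTIME; so the generated code only depends on the lite runtime.)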

Let’s shake our definition up a bit. I don’t want a static position, but a juicy one, which wiggles and wobbles to the target position over time. We’ll need a Vec3 as data, a Vec3 as target and a blend factor. First we’ll add a new message:

message JuicyVec3
{
	required Vec3 data = 1;
	required Vec3 target = 2;
	required float blend = 3;
}

Then we change the Object message:

message Object
{
	optional JuicyVec3 position = 1;
}

What does our parsing code look like now?

if (a_Object.has_position())
{
	tb::Vec3 data;
	tb::Vec3 target;
	float blend;
	
	data.x = a_Object.position().data().x();
	data.y = a_Object.position().data().y();
	data.z = a_Object.position().data().z();
	
	target.x = a_Object.position().target().x();
	target.y = a_Object.position().target().y();
	target.z = a_Object.position().target().z();
	
	blend = a_Object.position().blend();
	
	m_Position.SetData(data);
	m_Position.SetTarget(target);
	m_Position.SetBlend(blend);
}

Still looks pretty nice. Now let’s look in the XML corner:

tinyxml2::XMLElement* ele_pos = a_Object->FirstChildElement("Position");
if (ele_pos)
{	
	tb::Vec3 data;
	tb::Vec3 target;
	float blend;

	sscanf(ele_pos->FirstChildElement("Data")->GetText(), "%f %f %f", &data.x, &data.y, &data.z);
	sscanf(ele_pos->FirstChildElement("Target")->GetText(), "%f %f %f", &target.x, &target.y, &target.z);
	sscanf(ele_pos->FirstChildElement("Blend")->GetText(), "%f", &blend);
	
	m_Position.SetData(data);
	m_Position.SetTarget(target);
	m_Position.SetBlend(blend);
}

Yeah… it eh… didn’t get better.

The main problem with XML is that it’s extremely brittle. If your data doesn’t match up with your definition, you’re pretty much screwed. You have to add a lot of checks to make sure that doesn’t happen. Checks I haven’t even added here.

With protobuffers, a lot of these common annoyances are smoothed away. If you use mutable_target() instead of target(), you are guaranteed to get a pointer to a PbGame::Vec3, even if the message doesn’t have one right now.
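
Writing works the same way, through the generated mutable_ and set_ accessors. A sketch (GetData and GetBlend are assumed getters on the game-side JuicyVar):

PbGame::Object object;

PbGame::Vec3* data = object.mutable_position()->mutable_data();
data->set_x(m_Position.GetData().x);
data->set_y(m_Position.GetData().y);
data->set_z(m_Position.GetData().z);

// (target omitted for brevity)
object.mutable_position()->set_blend(m_Position.GetBlend());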

Another advantage is that protobuffers can be saved to and loaded from a binary file. That means that you have a text version of your data where you can make changes in and a binary version that you ship with, for speed and safety. This also means that they’re extremely useful for packing data to send over an internet connection. You don’t have to keep a record of what each byte stood for because that’s already in your .proto file!
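
That round trip might look something like this (a sketch; the file handling is mine and game.pb.h is the assumed name of the generated header):

#include <fstream>
#include <iterator>
#include <string>

#include "game.pb.h"

bool SaveObject(const PbGame::Object& a_Object, const char* a_Path)
{
	// pack the message into a compact binary blob
	std::string buffer;
	if (!a_Object.SerializeToString(&buffer)) { return false; }
	
	std::ofstream file(a_Path, std::ios::binary);
	file.write(buffer.data(), static_cast<std::streamsize>(buffer.size()));
	return file.good();
}

bool LoadObject(PbGame::Object& a_Object, const char* a_Path)
{
	std::ifstream file(a_Path, std::ios::binary);
	std::string buffer((std::istreambuf_iterator<char>(file)), std::istreambuf_iterator<char>());
	
	// rebuild the message from the binary blob
	return a_Object.ParseFromString(buffer);
}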

Conclusion

I really, really like protobuffers. They took a while to get used to, but once they clicked, I suddenly had a shiny new hammer and everything started to look like a nail. Now I just need to figure out what the downsides are.

Friends don’t let friends generate icosahedrons

A while ago, I did a retake for a course on procedural programming. One of the assignments was to generate a textured sphere. You would be marked on getting the texturing right, but I got distracted and decided to try making an icosahedron instead. However, I also made a version that used a more traditional subdivision method: generate circles on a cylinder, and have the radius of the circles depend on the cosine of the distance along the cylinder’s height. Here are my spheres:

Looks pretty round right? However, let’s take a look at their wireframes:

Click on the image for a larger version.

From the thumbnail, the second version looks unchanged. But when you click on it, you’ll notice that the lines are so dense that it looks textured!

What’s the difference?

The sphere on the left (sphere 1) uses the traditional method of sphere generation. It has 31 subdivisions on the y-axis and 31 subdivisions on the z-axis of the half-sphere. This half-sphere is then mirrored to the bottom. It has a total of 3,844 faces.

In pseudocode:

for (i = 0; i < subdivisions_y; i++)
{
	y = cos(degrees_y) * radius_sphere;
	
	radius_y = sin(degrees_y) * radius_sphere;
	
	for (j = 0; j < subdivisions_z; j++)
	{
		x = cos(degrees_z) * radius_y;
		z = sin(degrees_z) * radius_y;
	}
}
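
Fleshed out a little, the same idea in C++ might look like this (a sketch; the Vertex struct and the ring/segment terminology are mine):

#include <cmath>
#include <vector>

struct Vertex { float x, y, z; };

// Generate vertices for a lat-long sphere: rings of circles along the
// y-axis, with each ring's radius driven by the sine of the polar angle.
std::vector<Vertex> GenerateSphereVertices(float radius, int rings, int segments)
{
	std::vector<Vertex> vertices;
	const float pi = 3.14159265f;
	
	for (int ring = 0; ring <= rings; ++ring)
	{
		// polar angle from 0 (north pole) to pi (south pole)
		float theta = pi * static_cast<float>(ring) / static_cast<float>(rings);
		float y = std::cos(theta) * radius;
		float ring_radius = std::sin(theta) * radius;
		
		for (int segment = 0; segment <= segments; ++segment)
		{
			// azimuth around the y-axis
			float phi = 2.0f * pi * static_cast<float>(segment) / static_cast<float>(segments);
			
			Vertex v;
			v.x = std::cos(phi) * ring_radius;
			v.y = y;
			v.z = std::sin(phi) * ring_radius;
			vertices.push_back(v);
		}
	}
	
	return vertices;
}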

The sphere on the right (sphere 2) uses an icosahedron subdivision algorithm to generate a sphere. It has a recursive depth of 5 and generates 109,220 faces.

In pseudocode:

// first, generate the 20 triangles of an icosahedron
// then subdivide them:

triangle_curr = triangle_first;
for (int i = 0; i < 20; i++)
{
	subdivide_triangle(triangle_curr, recursive_depth);
	triangle_curr = triangle_curr->next;
}

The icosahedron is an almost perfect sphere, but it comes at a high price. It uses a lot more faces to achieve the same effect.

But okay, that’s not really fair. Let’s scale down the quality considerably:

Sphere 1 uses 5 subdivisions on the y-axis and 5 subdivisions on the z-axis, for a total of 100 faces.

Sphere 2 uses 0 recursive subdivisions, for a total of 100 faces.

They use the same number of faces, but in my opinion, the sphere on the left looks better. It looks less lumpy and a lot more round. Let’s take a look at the number of faces per quality level.

Icosahedron:

  • Level 0 – 100 faces
  • Level 1 – 420 faces
  • Level 2 – 1,700 faces
  • Level 3 – 6,820 faces
  • Level 4 – 27,300 faces
  • Level 5 – 109,220 faces

Subdivided sphere:

  • YZ: 5 – 100 faces
  • YZ: 10 – 400 faces
  • YZ: 15 – 900 faces
  • YZ: 20 – 1,600 faces
  • YZ: 25 – 2,500 faces
  • YZ: 30 – 3,600 faces

It’s easy to see that the subdivided sphere gives you a lot more bang for your buck. The 30-subdivisions version is comparable in quality to the 5-level recursive icosahedron, but it uses only 3.2% of the faces!

Texturing problems

The truth is: you don’t *need* the precision an icosahedron will give you. Because they both hide a much harder problem: texturing a 2D plane on a 3D sphere. Here’s what the top looks like:

On the top-left, you can see the texture being used. Coincidentally, it’s also being generated procedurally. (Hey, it was a course on procedural generation, right?) It looks terrible, but this is as good as it’s going to get. I got top marks for my texture mapping, because most people don’t even get it this right.

Why is it such a problem to map a texture to a sphere? Well, xkcd explains it better than I can:

Image taken from http://xkcd.net/977/

The way I solved it is by generating polar coordinates from normalized coordinates on the sphere to get the texture u and v. I won’t go into too much detail, because I don’t want to ruin the course material. But I do have a fix for the dateline issue, which took a very long time to figure out. When the texture goes around the sphere, you get to a face that has to wrap around from 1.0 to 0.0. If you don’t fix that, you will get an ugly band.

void ModelSphere::FixDateLine(tb::Vec2& a_Left, tb::Vec2& a_Right)
{
	// If two neighboring u-coordinates are more than 0.75 apart, the face
	// wraps around the dateline; push the low side past 1.0 so the
	// interpolation no longer runs backwards across the whole texture.
	float tt = 0.75f;
	float nn = 1.f - tt;

	if (tb::Math::Abs(a_Left.x - a_Right.x) > tt) 
	{ 
		if (a_Left.x < nn) { a_Left.x += 1.f; }
		if (a_Right.x < nn) { a_Right.x += 1.f; }
	}
}

It’s not a lot of code, but it isn’t explained properly anywhere else. The same code is used for both the icosahedron and the subdivided sphere, in case you were wondering.

Conclusion

Consider using cosine and sine to generate a sphere. It generates far fewer faces for the same amount of detail. For most games it will be “good enough”. Unless you’re generating planets that really, really need to be round over their entire surface, you can get away with a subdivided sphere quite easily.
