We’ve had a couple of great iDevBlogADay articles recently from Owen and Frederic about life as a solo indie developer and as a part-time indie developer. I decided for this week’s article to give the perspective of an indie studio, and a few considerations versus being a solo developer (or a group of remote partners).
First up a history lesson!
FDL started in July 2005 as two of us in a small rented office in the centre of Bradford. We started up with our own funds and some work-for-hire projects we were in discussions about. The goal was to build up enough funds and our own technology platform, then grow the company and develop our own games. Bear in mind the development world was a little different then, without quite the digital distribution opportunities!
The work-for-hire side of things went on quite a lot longer than we wanted, which is a story I hear a lot from both smaller and larger versions of developers like us: once you have money coming in and a good group of guys in the office, you get into a cycle based around the (traditionally safe) contract work.
We got to work on a lot of different platforms and technologies, however, which was great experience for us as a company: PS2, PSP, PS3, Wii, DS, Symbian, PC and more, as well as full MMO systems (front and backend), various interesting procedural techniques, and game genres from FPS and racers to puzzle games.
Updating things to the modern day: we have a large office with 5 full-time programmers (we outsource all other work, mainly to local contractors), lots of funky development kit (and some very comfy Aerons), just across the road from that first office we started in. Our work right now, though, has shifted a lot more towards our own projects or revenue-share deals with other independent developers. We’re still working on some very cool work-for-hire projects across various platforms; our own work is generally mobile but will be moving across to other platforms very shortly.
So five and a half years later we’re getting to where we wanted to be: generating our own games and IP!
Issues / Considerations
A couple of things to discuss then. I’ll probably expand a bit further on this in a part 2 at some point, but these are the things that immediately sprang to mind!
- We’ve got a great team in the office, which gives everyone an opportunity to work on every area of each project; communication is quite simple in such a small team and we don’t have to consider the difficulties of being a remote team. The big issue with having full-time staff is of course the running cost, which is why getting into a cycle of contract work is so easy: gaps between projects, when profit margins on a previous contract have been eroded, mean that you’re burning into cash reserves. There is of course a trade-off in cost between this and being a solo producer / manager who hires contractors as needed, but you lose the ability to build up an actual ‘team’. We’re attempting to grow our core team carefully while still working with external programmers as needed for projects. Costs of staff aren’t just salaries but also perks (free drinks, beers, meals, conference trips, games etc.) plus time for management to make sure things are ok with everyone.
- Offices are expensive. Yes, there are deals you can get in some areas, and if you don’t need a huge amount of space you can be a bit more careful. With the way we sometimes need to expand, and in terms of providing plenty of room per person, our office costs are quite high. We also took on an office immediately when we started the company; this is completely at odds with a modern indie studio, who would likely work remotely / from coffee shops / from one person’s house and then, when they get their first success (either in sales or contract), get an office. Rent and rates are one thing, but insurance, cleaning and in particular gas and electricity are costs that would be a shock to people setting up in an office, I imagine!
- Development kits
- This is one of the advantages of being an actual established studio with an office. Things are becoming a bit more flexible now, but we do have a secure location to store equipment, and the more restrictive hardware manufacturers don’t have issues giving us approval where they would have concerns over people working from home.
- Management overhead
- Paperwork… Owen mentioned this in his ‘Indie Challenges’ post, but a small company with employees is even worse, not to mention accounts and more complex legal matters. The best advice is of course to ensure you use professionals to help with all of this. Personally I still get more involved with all this kind of thing than I should, but it is quite satisfying to understand the processes before delegating them too much!
- Larger / more projects
- To cover the burn rate of staff and offices, larger projects (or a larger number of projects) are required to pay for it all. This requires further management time and introduces extra risks for the company (in terms of projects going wrong or being cancelled, due to circumstances in or out of your control). I’m not necessarily talking about contract work here; the ambitions of all of your products need to be higher than they would be as a solo or remote-working developer.
- On the flip side, of course, this does mean that you can be involved in some really interesting big projects that you and the team can get your teeth into.
I’ve been doing a fair bit of GL ES coding recently for our upcoming more graphically advanced iPhone titles.
It’s been good fun (on the whole!) even though integrating GL ES with our multi-platform render backend has possibly exposed that our long established flexible render interface doesn’t allow for certain optimisations without introducing certain restrictions (we did encounter the same thing on PS2 but it’s probably a topic for another day).
One nice and simple method for working on optimising your usage of an API is to use wrapper functions. In terms of graphics APIs, programs such as PIX, gDebugger and other 3D analysis/ripping tools are (for the most part) intercepting the function calls from the application and then interpreting the data themselves in a similar way.
We’ve done a similar thing when debugging API usage in the past, and it seems a popular technique; I saw John Carmack using a similar simple GL wrapper in his Wolf 3D code (and he had a nice additional idea I’ll mention below).
The idea works by having a private header file in your own middleware or in your game, which you include in every file you use GL (or another wrapped API) from, or just include in a global header if you wish.
Inside the header file you’ll have a structure like :-
// Wrap settings, uncomment these to enable (setting names illustrative):
//#define WRAP_GL
//#define ERR_CHECK_GL
// helper macros
#ifdef ERR_CHECK_GL
#define DO_ERR_CHECK_GL(x) CheckGLError(x)
#else
#define DO_ERR_CHECK_GL(x) (void)(x)
#endif
// wrapped implementations
#ifdef WRAP_GL
// wrapping enabled, implement each function to be wrapped
static inline void _glDepthFunc(GLenum func)
{
    lLogSys("GLES", "glDepthFunc(func=%s)\n", GLenumToString(func));
    // call the actual function
    glDepthFunc(func);
    DO_ERR_CHECK_GL("glDepthFunc");
}
// Use the preprocessor to force any references to the unwrapped function to cause a compile error (idea from the Wolf 3D code, a nice way to catch stray calls!)
#define glDepthFunc ERROR_USE_WRAPPED_VERSION_OF_glDepthFunc
#else
// wrapping disabled, use the pre-processor to point to the actual functions
#define _glDepthFunc glDepthFunc
#endif
This implementation is done for every function that you wish to wrap in the particular API.
In the above example we can perform whatever logging we want about each function call; using helper functions we can translate things like enums to human-readable strings for output to an HTML or other log file. Texture data can be intercepted in glTexImage2D calls and then stored out for future reference in the log (by associating it with the correct texture ref), and likewise for shader programs.
Also, above, we perform a call to an error-checking function if the relevant #define setting is set at the top of the file. This CheckGLError function takes the string name of the wrapped function it is called from, performs a glGetError, checks its validity and then logs / spawns a debugger depending on other current settings.
The possibilities aren’t limited to that though…
There is the ability to add redundant-call checking by storing the previous value that was passed to a particular function; in the above case we can track the current internal GL setting for glDepthFunc (this works on the assumption that no other piece of code could somehow set this and break our redundancy checks; in our case we know that all our code is wrapped). Some functions are harder to track redundant state sets for, but if you focus on making the code reusable you’ll find several functions in APIs have similar usage / internal state patterns.
Another feature I added to our internal wrappers was a GL call limit: at the top of each wrapper function is a call (again to a function hidden by a #define macro, meaning it can be easily disabled and compiled out of the code) which returns a bool for whether to continue execution of that wrapped function. This allowed me to have a ‘stop GL functions after xxx have been called’ feature; I was able to trace, using a simple in-game interface, where certain rendering issues were introduced, and also look at draw order very easily.
Related to that is the overriding of certain states: because you can stop any state being set, you can force things like texture sets to not go through (or keep the texture set to a 1×1 dummy texture to test texture bandwidth impact on your framerate), or perhaps override the colour of each batch passed to the renderer (I’m ignoring shaders in that example, but again you could override a shader if you wished).
Implementing this system didn’t take very long, and it can be very useful depending on what stage of development you’re at. It’s important to spend time working out how to minimise the effort needed for each wrapper function; I think the redundancy checking could be wrapped up nicely through some well thought out template usage, and the whole thing could be done via more pre-processor macros to minimise typing errors (and to improve readability).
Hopefully this idea will come in useful for your API debugging / logging / optimisation work!
Things we’ve been enjoying this week
- A great post with links to all you need to know about the Kinect ‘hacking’ people have been doing. I got the code compiling on Win32 when it was first being ported to libusb-win32 but haven’t had a chance to do much with it since (other than hook it up to our engine). Looking forward to seeing interesting stuff come from this, with markerless motion capture already starting to emerge.
A slightly technical (but shorter) post tonight as it’s been a busy week of projects, various talks and meetings!
I’ve been working on optimisations for a title we’re finishing up at the moment, and matrix multiplication was one area I knew needed optimising.
There have been a couple of developers (links near the bottom of the post) talking about optimising for the VFP and NEON vector processing extensions over the last few years, so I was aware that the savings were significant; we’d simply not had to use these optimisations within our own math library code before now.
I’ve also recently heard a bit about the Accelerate framework from WWDC, so I thought I’d have a look at that too; my main worry was how calling a library function would avoid function call overhead (at least without fancy linker features removing such overhead).
I thought it would be interesting to do a post looking at rough timings of an operation using the various options we have.
I decided to choose the fairly common 4×4 matrix multiply. As I mentioned, these timings are fairly rough; I simply set up loops to perform 100,000 matrix multiplies and (separately from the timed code) ensured the results came out the same.
C (direct) is a call to a function that looks a lot like:

void lSIMD_Base::Matrix4x4Mul( float * r, const float * a, const float * b )
{
    r[0] = (a[0]*b[0]) + (a[1]*b[4]) + (a[2]*b[8]) + (a[3]*b[12]);
    // ...and so on for each of the 16 elements of r
}
C (indirect) is the same function called via an operator* in our matrix class; I wanted to see at the same time whether the temporary matrix and function call were being optimised out by GCC.
VFP is a call to the vfpmathlibrary Matrix4Mul implementation. Note this is a column-major matrix multiply, whereas the others in this rough test are row-major.
NEON is code based on a post in the comments on Wolfgang Engel’s blog.
CBLAS is the BLAS part of the Accelerate framework, available in iOS 4.0 and above; as you’ll see, we’re only going to get a result on 4.0+ devices.
The code was compiled using the current 4.2 SDK with the current GCC-based Xcode, with Thumb disabled and in default release mode (-Os is, I believe, the default optimisation level).
[Timing table: C (direct), C (indirect), VFP, NEON and CBLAS results across iPhone 4 (4.1), iPhone 3GS (4.0.2), iPod v3 (3.1.3), iPhone 3G (3.1.2) and iPod v1 (3.1.2)]
I’ll try and remember to come back to update this table as I update OS versions and try new things!
The timings are roughly as you’d expect (though I’m not sure the 3G results should be quite that slow; I think the device is on its way out, to be honest!). The Accelerate framework is a bit of a disappointment, but this is mainly due to call overhead, I believe; the WWDC presentation certainly had much better results for other operations, and with larger operations such as a Fast Fourier Transform the call overhead becomes a much smaller percentage of the operation you’re trying to perform. I need to try out some more things with Accelerate, as I’m not sure it should be this slow.
As expected, NEON is faster on the ARMv7 chips and VFP is faster on the ARMv6 chips; NEON is 10x faster than the C implementation, which is quite impressive.
The chart also acts as quite a nice example of general chip speed; I incorrectly believed the iPhone 4 CPU to be faster than the iPad’s before seeing these results.
As promised, here are some useful links relating to the above:
Noel Llopis talking about floating point performance a few years ago
Wolfgang Engel’s original post on the VFP Math Library
The VFP math library itself
I believe this will be the same version here in Oolong, as well as NEON implementations based on the comment posts on Wolfgang’s blog.
NEON intrinsics guide
Math neon library – extensive math library implementation for NEON (LGPL)
‘iPhone VFP for n00bs’ – also covers some basics of using inline assembly on GCC
A blog at arm.com on matrix multiplication with NEON
Accelerate framework slide from WWDC 2010
- available via iOS developer centre
Things we’ve been enjoying this week
- I think on the launch titles Move is just winning for us but Kinect is interesting and I’m looking forward to seeing what comes out of the Kinect hacking going on now the open source drivers are out.
- 4k demoscene intro with code, should be interesting!
- Working 8-bit CPU in Minecraft
- Related to this blog post, efficient C code for ARM devices
- Awesome resource of 3d models intended for artists to texture and light, should be very nice looking test assets for any tech tests though!
I had been working on a bit more of a technical piece for this week but unfortunately encountered a few problems during testing, and didn’t get as much time in front of the Mac as I wanted in the evenings this week.
Instead I’m going to throw out a few of the pros / cons I see with the 99c / 59p price point.
We currently have a sale active on two of our apps: You Are The Ref, down to $1.99 from $2.99, and QuizQuizQuiz, down from $1.99 to 99c. YATR is relatively new, having been featured by Apple in its football (soccer) games section and Game Center ‘Hot New Games’; QQQ is now 13 months old but still normally in the top 25 (if not top 10) of the trivia charts across Europe (and still nowhere in the US chart!!).
- Will generally result in a higher chart position; the top 10 is predominantly 59p apps. Being high up the charts (or, to a lesser extent, any particular category chart) exposes you to the owners of the daily new device registrations, who will instantly look in those places for their first apps.
- Likewise, a higher chart position is likely to get the App Store gods to notice you and feature your game. It could be argued, however, that they probably watch the grossing charts more than the standard ‘sales’ chart (I’m not sure how the current formula is made up, ratings / sales wise).
- More people buying your game will increase its viral spread, assuming those people have a positive experience and it has a ‘show off’ feature that people will be keen to show others (an important part of viral spread).
- Once you’ve gone to 99c, even as a short-term sale, people will assume that you’ll drop your price again and perhaps wait until you do so. The only way to fight this would perhaps be to only do it as a launch sale. If you’ve added more content which you believe justifies NOT dropping back down to 99c, that’s fine, but how do you actually communicate that to people who haven’t bought your game?
- Lower pricing tends to equate to lower ratings, especially on apps that aren’t top sellers. As always, it’s very important to encourage people to rate the app (especially at good times during gameplay, say just after they’ve unlocked a new level or got a new high score!).
- The normal criticism of apps pricing themselves at 99c is that the content producer appears to be valuing their content incredibly low; this is based on the suggestion that ‘price sends a signal’ to the consumer (http://www.joelonsoftware.com/items/2005/11/18.html). Of course, 99c on the App Store isn’t necessarily the same as 99c in the real world or on other stores; I’ll discuss this a bit more below.
- While talking about value, there’s also a risk that if you’re a partially successful 99c game you will be compared to Angry Birds and other top-selling 99c apps and the value they offer. These apps are selling such huge quantities that they’re also able to easily increase their value further over time through new levels and updates, stretching the expectations of the gamers who only buy 99c apps from the top charts.
- Not making as much money as you could. This is of course a big concern, and you’ll never actually know whether you could have made much more money and had a higher league position at $1.99.
We’re entering (if we’ve not already entered) an interesting time, with all these app stores opening (and other digital distribution platforms such as Steam, PSN/PSP Minis, DSi, WiiWare etc.). Some news articles make a lot of the price differences between various platforms for the same game, and of course value for money is a big issue in the current economic climate. For developers, however, there are differing costs on each platform (ratings costs, development kits, pure porting time via art differences or technical requirements), a different market size and type of demographic. Some platforms also have pricing structures imposed by the powers that be.
Looking at some of the big names across the various platforms, an ‘exchange rate’ could be worked out, which may be of use to other developers deciding on their price on a particular platform.
As we begin to move QuizQuizQuiz across to various platforms (Windows Phone 7 now at $2.99) we’ll be thinking about this issue a lot more.
Of course, in the ideal future we’d have some sort of universal purchase system where users could buy an app / game once and have it run on every platform, even though someone in the chain is likely to lose out on this (probably the hardware manufacturers relying on you being tied to your existing paid-for apps), and of course total spend per user would likely be less. As a consumer it is a very appealing proposition though, and we’re already seeing movies move towards multi-platform delivery (DVD/Blu-ray/digital copy in the same box at a higher price!)
I’m hoping to talk more about pricing in the future with a few more stats to back things up, thanks for reading and please post any thoughts you have on the 99c price point and how you see the market going.
Things we’ve been enjoying this week
- ‘Radio for YouTube’ – finds videos related to a keyword and plays them Pandora / last.fm style. I may get addicted to this.