Debunking Euclideon's Unlimited Detail Tech September 13th, 2016
Uh oh. They're at it again. Yes folks, Euclideon are back with more of their smarmy-voice-over-without-any-detail brand of hype. They call it "Unlimited Detail", but what they don't do is explain how any of it actually works.
If only there were some way we could find out how their idea works. If only... wait! There is!
One of the great things about ideas is that you have two choices: you can keep your idea a secret, but then you risk someone else coming up with it too; or you can patent it, which grants you ownership of the idea. Of course, in order to be granted a patent, you have to actually explain what your idea is and how it works.
With that in mind, it's easy to actually find out how the Euclideon tech works. Off we go to the Australian Patent Office! A quick search for Euclideon reveals a number of documents, but one of them, 2012390266, "A computer graphics method for rendering three dimensional scenes", seems to be the one we need.
It's not an especially exciting read; most patents aren't. I'll summarize the description here:
- The scene is stored as a number of objects.
- Most objects are rendered using the fast orthographic method.
- Objects up close are rendered using the slow perspective method.
But what is the orthographic method, you ask? Well it turns out not to be that complex. Here it is folks. Prepare yourself for the wonder of the Unlimited Detail Engine:
- You store colors in octree cells.
- You walk recursively over this octree and splat each point on screen.
Wait, is that it? Yes my friends, this is the same algorithm described in the 1985 paper "Back-to-Front Display of Voxel-Based Objects" by Frieder et al. I think Euclideon chose to go front-to-back instead, and use a mask to avoid overdraw, but it's the same thing. They're taking 30-year-old technology and passing it off as being next-gen.
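To make that concrete, here's a minimal sketch in C of that kind of recursive octree splatter, in the spirit of the Frieder et al. paper. The node layout, the fixed child visiting order, and the orthographic drop-the-Z projection are my own assumptions for illustration; this is not Euclideon's actual code, just the textbook algorithm the patent boils down to.

```c
#include <stddef.h>

typedef struct OctreeNode {
    struct OctreeNode *child[8];  /* NULL for empty children */
    unsigned int color;           /* 0xAARRGGBB, meaningful at leaves */
} OctreeNode;

/* Back-to-front recursive splat, Frieder-et-al style.
   cx,cy,cz = node centre, size = half-width of the node's cube. */
void splat(const OctreeNode *node, float cx, float cy, float cz, float size,
           unsigned int *framebuffer, int width, int height)
{
    if (!node) return;

    /* Visit children farthest-first so nearer splats simply overwrite
       farther ones (painter's algorithm). The order below assumes a
       fixed view looking along +Z; a real renderer would pick the
       order from the camera position. */
    static const int order[8] = { 7, 6, 5, 4, 3, 2, 1, 0 };
    int isLeaf = 1;
    for (int i = 0; i < 8; i++) {
        int c = order[i];
        if (node->child[c]) {
            isLeaf = 0;
            float h  = size * 0.5f;
            float ox = (c & 1) ? h : -h;
            float oy = (c & 2) ? h : -h;
            float oz = (c & 4) ? h : -h;
            splat(node->child[c], cx + ox, cy + oy, cz + oz, h,
                  framebuffer, width, height);
        }
    }

    if (isLeaf) {
        /* Orthographic projection: just drop the Z coordinate. */
        int sx = (int)(cx + width  * 0.5f);
        int sy = (int)(cy + height * 0.5f);
        if (sx >= 0 && sx < width && sy >= 0 && sy < height)
            framebuffer[sy * width + sx] = node->color;
    }
}
```

Front-to-back with an occlusion mask, as the patent suggests, is the same traversal in the opposite order, stopping early once a screen region is already covered.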
What this means is that their data is stored in a pre-built octree. Despite their recent claims, there's no way this can animate like modern games need. The only way you can animate it is if you use stop motion - i.e. have several pre-built octrees and switch between them. And looking at their recent footage, I think that's what they're doing. It all looks kinda... well... jerky?
We can do a little math and run the numbers here. Let's imagine you've got a 3GHz CPU, and you want to render 1000x1000 at 60FPS. That's a million pixels you need to fill in every frame. 3GHz divided by 60 frames gives you 50,000,000 cycles available per frame, and 50,000,000 cycles spread over a million pixels means you need to render each pixel in 50 cycles. That's pretty tight. It might be do-able, but then you've just used all of your CPU budget doing it. What about the rest? Do you want anti-aliasing? Lighting? Shadows? Bloom? Depth of field? Well tough, because you've already pegged your CPU at 100% just filling in the color buffer.
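If you want to sanity-check that arithmetic yourself, here's the back-of-the-envelope version; the clock speed, frame rate, and resolution are the same assumptions as above.

```c
#include <stdio.h>

int main(void)
{
    const double cycles_per_second = 3.0e9;          /* 3 GHz CPU */
    const double frames_per_second = 60.0;           /* target frame rate */
    const double pixels_per_frame  = 1000.0 * 1000.0; /* 1000x1000 output */

    double cycles_per_frame = cycles_per_second / frames_per_second;
    double cycles_per_pixel = cycles_per_frame / pixels_per_frame;

    printf("cycles per frame: %.0f\n", cycles_per_frame); /* 50,000,000 */
    printf("cycles per pixel: %.0f\n", cycles_per_pixel); /* 50 */
    return 0;
}
```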
I'm not saying voxel-based games can't work; I think there's definitely a place for less-polygony techniques in future. But this isn't it. The trouble with Euclideon is that they spend so much of their time trying to explain that their tech is better than existing games, when the simple fact is that it isn't. In their latest video they moan about LOD pops in games. I suggest they go take a look at some actual games. I just finished playing through the delightful "The Vanishing of Ethan Carter", and guess what? No LOD pops anywhere in the game. It draws trees off to the horizon and they all just magically morph to lower-detail versions without you ever noticing.
You might be impressed by their up-close dirt rendering, but it's no match for the current round of games and GPUs. Take a look at "Star Wars: Battlefront" here - that's what we're doing right now using just regular GPUs. It's already light years ahead of their tech. Advances like geometry tessellation have taken polygon rendering to new extremes.
This tech, at least the way they're doing it, is dead. They have no real lighting, none. Just look at their images - it's just N-dot-L, which has been prebaked. I spotted a shadow underneath one of the fences, but oh look, it casts directly downwards. Do you know why? It's because if it cast at an angle, it would spill over onto adjacent objects and prevent re-use of instances. You could argue that they could prebake very nice GI lighting, but they can't; the only way they can get their "unlimited detail" is by instancing the same objects several times.
If you want to see some real exciting advances in point-based technology, go look at the upcoming game Dreams by Media Molecule. Those guys are way ahead of Euclideon, and guess what? Their stuff doesn't rely on pre-baked hierarchies, it's all genuinely real-time.
tl;dr -- GPUs get better every year. If you want unlimited detail, just go buy a PS4 today. But please, don't give these hacks any money.
Learning To Wrangle Half-Floats September 10th, 2016
You all know what floating-point arithmetic is, so I won't bore you by covering that. The IEEE 754 standard originally defined two main formats: the 32-bit single-precision format and the 64-bit double-precision format.
But that's not all you can do. If you understand the principles behind it, you can make your own floating-point format at any precision. The most popular small-float format is the 16-bit half-precision format. Popularized by Nvidia and ILM, this is supported in hardware by most GPUs.
The half-float format is great because it's good enough for many cases, while only being half the space of the standard 32-bit format. It's not just the space either -- the PS3 GPU, for example, would often run twice as fast when using halfs. (Interestingly enough, this usually wasn't due to the precision difference, but to restrictions on register file access. The smaller data access allowed the compiler to better schedule the instructions.)
There's a downside to this flexibility though. With regular FP, you can usually just throw it in there and not have to worry about precision. That's no longer true for half-floats. Every time you use them you now have to worry about whether it's suitable for the current case. And, as you may discover here, the results can be surprising.
The format is very simple; it's basically the same as the 32-bit version but with fewer bits:
Sign     Exponent   Mantissa
1 bit    5 bits     10 bits
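To see how those bits turn into a value, here's a sketch of a software decoder, assuming the layout above (exponent bias 15, 10-bit mantissa). The function name is mine; in practice you'd normally let the GPU or the F16C instructions do this for you.

```c
#include <math.h>
#include <stdint.h>

float half_to_float(uint16_t h)
{
    uint32_t sign     = (h >> 15) & 0x1;
    uint32_t exponent = (h >> 10) & 0x1F;   /* 5 bits, bias 15 */
    uint32_t mantissa =  h        & 0x3FF;  /* 10 bits */
    float s = sign ? -1.0f : 1.0f;

    if (exponent == 0)   /* zero or subnormal: m * 2^-24 */
        return s * ldexpf((float)mantissa, -24);
    if (exponent == 31)  /* infinity or NaN */
        return mantissa ? NAN : s * INFINITY;

    /* normal: (1 + m/1024) * 2^(e-15) */
    return s * ldexpf(1.0f + mantissa / 1024.0f, (int)exponent - 15);
}
```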
The Wikipedia page has a good detailed explanation of it, but the trouble with just running the numbers is that you don't really get an intuitive feel for it. We need a better way of grasping the fundamentals.
To get a good visualization of half-floats, the most useful property we can use is the fact that they're only 16-bit. This means there are only 65536 of them. You know, that's not actually that many. So why not just list them all out? I did just that. That's the great thing about computers today: data doesn't seem as big as it used to be. Once you have all the data in front of you at once, it's much easier to get a grip on it.
halfs.zip (406KB) - A list of every single half-float.
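In case you'd rather generate the list yourself, here's a minimal sketch that enumerates all 65536 bit patterns, assuming the half_to_float() helper sketched above.

```c
#include <stdint.h>
#include <stdio.h>

float half_to_float(uint16_t h);  /* from the earlier sketch */

int main(void)
{
    /* Print every possible 16-bit pattern alongside its decoded value. */
    for (uint32_t bits = 0; bits <= 0xFFFF; bits++)
        printf("0x%04X  %.10g\n", (unsigned)bits,
               half_to_float((uint16_t)bits));
    return 0;
}
```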
This text file has come in very useful for me on several occasions, and I'd recommend keeping a copy of it around for any time you're doing graphics work. Let's see what we can discover from this. We'll start by making a simple table, showing off the ranges at which the precision changes:
Exponent   Starts at   Step between each number
0          0           1/16777216
1          1/16384     1/16777216
2          1/8192      1/8388608
3          1/4096      1/4194304
4          1/2048      1/2097152
5          1/1024      1/1048576
6          1/512       1/524288
7          1/256       1/262144
8          1/128       1/131072
9          1/64        1/65536
10         1/32        1/32768
11         1/16        1/16384
12         1/8         1/8192
13         1/4         1/4096
14         1/2         1/2048
15         1           1/1024
16         2           1/512
17         4           1/256
18         8           1/128
19         16          1/64
20         32          1/32
21         64          1/16
22         128         1/8
23         256         1/4
24         512         1/2
25         1024        1
26         2048        2
27         4096        4
28         8192        8
29         16384       16
30         32768       32
31         infinity/NaNs
There are quite a few surprises nestled away in here. Perhaps the most shocking is the extreme precision loss at the high end. After 32768.0, you're stepping over 32 integers at a time! Even as low as 1024.0, you're still stepping 1.0 each time. Just to ram that point home: numbers higher than 1024 lose all fractional precision.
The maximum half-float possible is only 65504. That's not very big for many applications. And even at that range, the representable values are spaced 32 apart.
Thinking of storing UV co-ordinates at half precision? Think again. At the 1.0 range our halfs are only accurate to 1/1024. For a 4096x4096 texture that means they're only accurate to every 4 pixels.
Trying to store a displacement map at half-precision? If it's in the 0-1 range, you're effectively only getting the same accuracy as a 10-bit format. That might be OK for a simple effect, but don't try it for a heightfield.
To summarize, while half-floats are great and you should use them whenever possible, you have to check your range first. How much precision do you require? It's easy to assume that a floating-point format will just magically give you everything you need, but it's not always so. Once you get outside the 0-1 range, half-floats lose their appeal for many cases.
The Metaprogrammer September 6th, 2016
There are some topics which, if posted onto a forum or news site, cause programmers to spew out more blather than all the rest put together. Such topics include:
- What kind of office chair you should have.
- The benefits of a closeable office door over an open-plan office.
- The brand of keyboard you use to type with.
- The configuration or quantity of your monitor(s).
- Standing desks.
- How long it takes you to resume work after an interruption.
Post an article about a new programming language and you'll get 10 replies. But start a discussion on what headphones you wear while you work, and ten thousand people will rise up from nowhere, pushing the thread ever-skywards on a tower of upvotes. These things are not programming, but you can bet your bottom dollar that they'll get the most programmer attention every time they come up. Therefore we can suppose that this must be metaprogramming. You might have thought that metaprogramming meant macros, type introspection, that kind of thing. Nope, I'm hijacking the word for today:
- (verb.) The act of talking about programming, rather than doing any actual programming.
Perhaps like the screenwriter who can only write if they are seen to be writing whilst in Starbucks, the metaprogrammer is concerned more with the appearance of work than the work itself. It's a kind of Schrödinger's Programmer - only by observing the programmer can we collapse the wave function. If the programmer is not seen, does he really even exist? Does an unwatched programmer begin to fade away like the Cheshire Cat, until all that remains is the fedora?
A coder I know (whom I won't name) once said that he would only ever consider someone to be a good programmer if they had a Twitch livestream about programming. As if a programmer who isn't visibly metaprogramming on live TV can't even be considered competent. They sure have some exciting ideas for streaming entertainment nowadays: "Come see the wondrous optimizing! Watch live as he waits for Visual Studio to load!"
The metaprogrammer needs to be pampered. If they don't have the shiniest Apple MacBook then they can't work. Never mind that their job involves typing letters into a text file, something you could have done on CP/M back in '78. Gotta have that 40" monitor, it's essential, can't work without it. To insult a new hire by providing only a single 4:3 monitor? I can't work like this. This keyboard doesn't even have any OLED keycaps on it. This is an outrage.
Maybe it's just an insecurity, and they need these things to feel better about the work. Much like a comfort blanket perhaps, or a little desk toy they keep on their workstation. Can't program without it, it helps me think. There's another one -- "workstation". No, I couldn't possibly use something as pedestrian as a "computer", I need a workstation dammit. With chrome plating and fuel injection.
If you want to run a startup today, you gotta have a cafe. That's the perk people want above all else. Ask people why they want to work at Google, they won't talk of their desires to work on world-changing projects, or the opportunity to apply cutting-edge tech. No, they'll say "because of the free food!". The perks, my friend. The perks are everything. No-one cares about what you do there, but whether you get a free massage, or bagels supplied every morning like manna from heaven.
I've seen people almost come to blows over who gets the Aeron chair. I've seen artists who demand the luxurious corner office, with the luxurious view, and then put paper over the windows to stop the light coming in. I've seen companies buy MacBook laptops for every employee, even though they're never taken away from the desks.
Some of the best programmers I've ever worked with don't have twitter accounts. It's almost unthinkable, isn't it? How can they possibly be one of the top programmers in the world, building the most successful projects out there, without being seen to be doing it? You can write beautiful code, the best code in the world, but it doesn't mean a damn today if you didn't blog about your new standing desk.
When did the messenger become more important than the message?
Written by Richard Mitton,
software engineer and travelling wizard.
Follow me on twitter: http://twitter.com/grumpygiant