XR101: Introduction to Enterprise XR

Extended Reality, or “XR,” is a blanket term that covers several related and overlapping computer technologies. XR is just a shortcut, like saying “computer science” instead of something more specific like AI. XR can be Virtual Reality (VR), where you wear a headset that replaces your view of the external world, or Augmented Reality (AR), where images and 3D models are added to the world around you through a headset or handheld device. XR can also mean Windows Mixed Reality (WMR), holographic displays like HoloLens or Magic Leap, or whatever new technology we conjure up in the coming years. In my next blog post, XR102, I cover XR terminology and basic technologies in more depth, so give that a look.

The genesis of the “XRxxx” series of blog posts was the many conversations I’ve had with our enterprise clients on everything from terminology to various technologies, and more specifically, why they need specific (and often expensive) hardware to do what they want to do. Generally, you can’t take a large 3D building model from Autodesk Revit and dump it into VR and expect it to run well (if at all) on lightweight consumer-level hardware.

In keeping with those conversations, at the end of this article I’ll give you a quick answer to the question, “What do I need to buy?”

These large 3D datasets and the programs built around them are what I call Enterprise-level applications: ones with high polygon counts and tens to hundreds of megabytes of textures. Enterprise applications are computationally demanding to create and edit and are particularly demanding for hardware to display comfortably in VR. A “comfortable” VR experience starts with being able to display stereo images at 90 frames per second (fps) in your headset, and 90fps is what I consider a real-time experience for these posts. VR is also more demanding than other XR technologies in many ways, as we’ll see.
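
To make that 90fps target concrete, here is a minimal C++ sketch of the per-frame budget. The per-eye resolution is an illustrative assumption (roughly a current-generation PC headset), not the spec of any specific HMD.

```cpp
#include <cstdio>

int main() {
    // Target refresh rate for a comfortable VR experience (90 fps, per the text above).
    const double target_fps = 90.0;
    const double frame_budget_ms = 1000.0 / target_fps;  // ~11.1 ms for ALL CPU + GPU work

    // Illustrative per-eye resolution; an assumption, not any specific headset's spec.
    const long long width = 1440, height = 1600, eyes = 2;
    const double pixels_per_second = 1.0 * width * height * eyes * target_fps;

    std::printf("Per-frame budget: %.1f ms\n", frame_budget_ms);
    std::printf("Pixels shaded per second: %.0f million\n", pixels_per_second / 1e6);
    return 0;
}
```

Miss that roughly 11ms window even occasionally and the runtime has to drop or reproject frames, which is exactly what makes a VR experience uncomfortable.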

For myself, these blog posts are an opportunity to dig deeper into GPU hardware, to better understand the many differences between generations and classes of GPUs, and to learn more about what affects performance in real-time applications. Much of my focus will be on professional GPUs like NVIDIA Quadro and AMD FirePro, and I’ll also look at top-of-the-line consumer-grade GPUs and how those differ in capacity, speed, and capabilities.

The new NVIDIA RTX technology has been a focus recently, so you’ll see more about that architecture than others in these blog posts. In later posts, I take a deep dive into the latest in hardware and dig deep into what it takes for the most demanding applications. GPUs are some of the most complex and amazing things created by humanity and are worth some quality time to understand.

I hope you learn something, too. I’ll work to distill things down to their essence as well as provide details for those who are interested, as I am, in the twiddly bits along the edges. Let’s dive in!

Understanding the Need for GPUs

For as long as there have been managers and IT departments, artists and engineers have had to work to justify the equipment they need to get the job done efficiently and with high quality. Software and hardware are tools, after all, not toys, and not everyone appreciates that simple fact.

Efficient content creation and a performant end-user experience require significant horsepower to achieve. Cutting corners on hardware costs impairs artist productivity and can significantly degrade the end-user experience, as I’ll discuss later.

Everyone understands that you need a lot of horsepower in your racecar if you want to win races – the talent behind the wheel only goes so far. It is the same for digital content creation when running tools like Autodesk Revit and 3ds Max and is especially critical in real-time output in VR and Mixed Reality. (I will cover High-Performance Computing (HPC) in a separate blog post.)

Let’s start this off with a couple of simple questions to answer:

  1. When it comes to a comfortable, real-time, enterprise-level VR or Mixed-Reality experience, which is more important, the CPU or the GPU?
  2. What does the GPU contribute to the VR experience, and do we need a high-end GPU?

For question #1, the answer is that both matter; it isn’t an either-or situation. At the high end, there are points of diminishing returns for both CPU and GPU, depending on your application. Conversely, there are points at the low end where a CPU or GPU becomes a performance bottleneck.

A lot depends on the complexity of what you are trying to run in real-time. If you are creating extended reality applications, then the CPU is critical for tasks like compiling code and shaders and baking lighting, and for that kind of work there is generally no point of diminishing returns on better hardware. Despite having many cores, CPUs are generally best for sequential and moderately parallel operations, not massively parallel operations like rendering an image.

The answer to question #2 ties into the answer to question #1: when it comes to simulating crowds or other complex entities in-game, the CPU is a critical factor in performance, and the more cores you throw at it, and the higher the frequency you feed those cores, the better the result. A CPU with a turbo feature, which runs one core very fast (and hot) for a limited time, can greatly help the many programs that are still largely single-threaded.

Game engines can take advantage of more cores, but that doesn’t mean you need a dual-Xeon or Threadripper machine to explore your new building in VR. However, complex traffic or crowd simulations can certainly benefit from more CPU cores, as the CPU needs to (for instance) generate the positions of all the objects and deform character meshes before the GPU displays the change. The CPU handles the general-purpose computing for the entire experience. The GPU, in contrast, is specialized hardware that exists to (see the sketch just after this list):

  • Hold all the CPU-prepared 3D geometry and textures in its own memory, ready for rendering,
  • Translate that data into the requested views, and
  • Render the frames to your monitor and head-mounted devices
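
As a rough mental model, here is a sketch of how that division of labor plays out every frame. The function names are hypothetical placeholders, not any real engine’s API.

```cpp
#include <chrono>
#include <cstdio>

// Hypothetical stubs standing in for real engine work; placeholders only.
void poll_input_and_tracking() {}        // CPU: read controllers and the HMD pose
void update_simulation(double /*dt*/) {} // CPU: crowds, traffic, physics, character skinning
void submit_draw_calls() {}              // CPU: tell the GPU what to draw this frame
void gpu_render_eye(int /*eye*/) {}      // GPU: translate scene data into one eye's view
void present_to_hmd() {}                 // GPU: scan the finished frames out to the displays

int main() {
    const double frame_budget_ms = 1000.0 / 90.0;  // ~11.1 ms at 90 fps
    for (int frame = 0; frame < 3; ++frame) {      // three frames, just to show the loop
        auto start = std::chrono::steady_clock::now();

        // CPU: general-purpose work for the whole experience
        poll_input_and_tracking();
        update_simulation(frame_budget_ms / 1000.0);
        submit_draw_calls();

        // GPU: specialized, massively parallel rendering of both eyes
        gpu_render_eye(0);
        gpu_render_eye(1);
        present_to_hmd();

        std::chrono::duration<double, std::milli> spent =
            std::chrono::steady_clock::now() - start;
        std::printf("frame %d: %.3f ms of a %.1f ms budget\n",
                    frame, spent.count(), frame_budget_ms);
    }
    return 0;
}
```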

Like a CPU, a GPU has cores, a rated frequency, and memory, and the speed and quantity of each part contribute to how much data it can hold, how fast that data is translated, and how quickly it can render the frames needed for your experience. GPUs may contain several different types of cores and specialized parts that are small, numerous, and purpose-built. A GPU’s specifications (using NVIDIA’s terminology here) may include:

  • Graphics Processing Clusters (GPCs)
  • Streaming Multiprocessors (SM)
  • Shader Cores (CUDA Cores)
  • Texture Processing Clusters (TPCs)
  • Texture Map Units (TMUs)
  • Render Output Units (ROPs)
  • Memory Controllers
  • Ray Tracing Cores (RT Cores) (available in the NVIDIA Turing architecture)
  • Tensor Cores (available in the NVIDIA Turing architecture)

I’ll look at these components in more detail later in these XR blog posts.
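
If you have an NVIDIA GPU and the CUDA toolkit installed, a minimal device-query sketch like the one below (host C++ using the CUDA runtime API, compiled with nvcc) reports a few of these quantities directly. Per-SM counts of shader cores, TMUs, and ROPs vary by architecture and are not exposed as single fields.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Minimal device query using the CUDA runtime API (compile with: nvcc query.cu -o query).
int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable GPU found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("GPU %d: %s (compute capability %d.%d)\n",
                    i, prop.name, prop.major, prop.minor);
        std::printf("  Streaming Multiprocessors (SMs): %d\n", prop.multiProcessorCount);
        std::printf("  Memory: %.1f GB, bus width %d bits\n",
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0), prop.memoryBusWidth);
        // CUDA core, TMU, and ROP counts per SM come from the vendor's
        // architecture documentation rather than this API.
    }
    return 0;
}
```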

Different GPU chips and different classes of cards will have varying speeds and quantities of these cores. The special-purpose nature of these cores is why GPUs are more than just a “frame buffer” that displays an image; they are essential to generating every pixel displayed in real-time and engineering applications.

The GPU card is the main component that will determine the size of the data you can simulate, and how fast that data is shaded and displayed.

Rapidly Changing Technology

GPU hardware is a field where the technology seemingly changes by leaps and bounds every few months, and where the hardware continues to diverge into areas of increasing specialization. There is a wide range in card capabilities and cost, and purchasing the wrong card can cost you either in a lack of performance (lost time creating content or a poor user experience) or in the expense of an overkill card.

Over the years, GPUs have evolved from a straightforward graphics adapter to a workstation accelerator, then to a massively parallel computing device, and today a GPU can also accelerate artificial intelligence. Staying on top of that continuing evolution can seem daunting at times.

Newer high-end GPUs can drive numerous 4K and 8K displays and support high dynamic range (HDR) color output and advanced video processing. The newest RTX cards from NVIDIA are the culmination of a decade of research and as many as 8,000 person-years of development into accelerating ray-traced rendering. The RTX cards are a game changer, in the view of many, and the start of something new.

GPUs have also come to dominate areas where their numerous small-but-powerful cores can be programmed for applications such as rendering, simulation, and data analysis. I’ve done a tremendous amount of rendering on GPUs over the years and was an early adopter of hardware-accelerated rendering and ray tracing, going back many years before GPU rendering. I’m a fan.

Rapidly advancing GPU technology is certainly a key to the success of the gaming industry, where upgrading to a new GPU can easily breathe new life into a gaming computer and allow for more and better graphics in top-of-the-line games. Although my articles focus on professional-grade GPUs, I’ll also be looking at high-end consumer VR-rated GPU cards to see where they differ from the pro versions, and where “prosumer” level cards may be the right choice for you.

The next questions to answer: as an artist, developer, or end-user, what are the top-level things you need to know when choosing a GPU for your professional workstation, and what technical specs go into making a good VR or Mixed Reality (MR) real-time system?

Specifying VR Hardware

Since you likely don’t want to wait for the next few blog posts to know what I recommend for you, here is the bottom line if you need something today:

  1. Real-time apps and creation tools take advantage of multi-core CPUs, so don’t scrimp on that expense. A high-end AMD Threadripper or Intel i9 will be ideal for both enterprise-level content creation and experiencing VR. Intel i7 and Ryzen chips work well, too, for less complex simulations.
  2. For GPUs, the amount of memory is critical for geometry- and texture-heavy enterprise applications. A high-end consumer card, like the GeForce RTX 2080 Ti with 11GB of memory, is likely good for many purposes. Realistically, though, a Quadro RTX 6000 with 24GB of memory will be future-proof for quite a while, and expandable, as we’ll see later. In Tom’s Hardware’s review of the best GPUs for 2018, the GeForce GTX 1080, with 8GB of memory, is a great value for VR. (The back-of-the-envelope memory sketch after this list shows how quickly those gigabytes get used.)
  3. Look for systems and components built for VR or marked as “VR Ready.” It is at least a good start, but remember, you generally get what you pay for; “VR Ready” doesn’t mean “Enterprise Ready.”
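
To give a feel for why the memory in item #2 disappears so quickly, here is a minimal sketch of the arithmetic. The texture count, geometry size, and render-target sizes are purely illustrative assumptions, not measurements from any particular Revit or 3ds Max scene.

```cpp
#include <cstdio>

// Rough, back-of-the-envelope VRAM estimate; every count below is an assumption
// chosen only to illustrate the arithmetic, not a measurement of a real model.
int main() {
    const double MiB = 1024.0 * 1024.0;

    // One uncompressed 2K RGBA texture (4 bytes/pixel) plus ~33% for its mipmaps.
    const double tex2k_mib = 2048.0 * 2048.0 * 4.0 * (4.0 / 3.0) / MiB;  // ~21 MiB each

    const int    texture_count = 40;    // assumed unique material textures
    const double geometry_mib  = 1500;  // assumed vertex/index buffers for a large building model
    const double targets_mib   = 600;   // assumed stereo eye buffers, shadow maps, etc.

    const double total_mib = texture_count * tex2k_mib + geometry_mib + targets_mib;
    std::printf("Textures: ~%.0f MiB, total: ~%.1f GiB of GPU memory\n",
                texture_count * tex2k_mib, total_mib / 1024.0);
    // Roughly 2.9 GiB here; move those same 40 textures to 4K and the texture term
    // alone quadruples to ~3.3 GiB, which is how budgets climb past an 8 GB card
    // toward the 11 GB and 24 GB cards mentioned above.
    return 0;
}
```

Texture compression and streaming change the exact numbers, but the trend, and the case for generous GPU memory, is the point.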

For GPUs, the new NVIDIA RTX series is state of the art and would be our choice, whether in a new system or as an upgrade to an existing one.

A new GPU is often all you need to get the required performance out of an existing high-performance computer.

Note that features on newer GPUs might not be supported initially in your development tools, but at least you are ahead of the curve.

If history is a guide, then projects always get progressively larger, and the demands on your system increase dramatically as a result. Adding to the ever-larger datasets, newer HMDs sport higher-resolution displays with each generation, upping the computing requirements for every frame produced. If you are investing, then plan for future requirements. It is what I do for my systems, and what I advise to anyone serious about XR.

You may not need an NVIDIA Quadro P6000 like the one in my workstation. But if you plan on doing Enterprise-scale XR, then expect to invest in hardware that is up to the task.

In the next post, I look at industry terminology for the many technologies that make up XR.

Jenni O’Connor, CEO @ NextGen XR.