7 October 2021

Simulated Reality with Dimenco

At Brandwidth, we’re fortunate enough to regularly gain access to exciting new technology and hardware in order to put it through its paces. It’s always a great opportunity for us to gain new ideas on how to enhance existing client projects, as well as to discover new experiences that we can bring to them. Through our various partnerships and connections we’ve had early access to Google Glass, the ODG R9s, and the Magic Leap 1, to name but a few. These opportunities help us stay at the forefront of the latest technology, and on a personal level, it’s a definite perk of the job!

Most recently, Dimenco kindly let us test drive one of their SR (Simulated Reality) Devkits to see what it’s capable of, and whether it could tie into a couple of our upcoming projects. Here, I wanted to share some of my initial thoughts on the hardware and where I think it may lead. Spoiler: it impressed.

The first thing that struck me when this kit arrived at the door of the relatively small Airbnb where I was staying was its size – it isn’t small, that’s for sure! But when you realise what it’s packing, you can see why: it’s essentially a super-powerful all-in-one PC with a 32”, 3D, 8K, elevated display built in. It also has a Leap Motion built into the speaker panel, allowing for some unique user interactions – another fun bit of kit Brandwidth has had many opportunities to familiarise ourselves with over the years!

Soon I managed to unbox the device and found a home for it on the dining table of the predominantly yellow-themed living room (its giant box, now covered with a cloth, becoming the perfect extra bit of furniture for our growing bottle collection!)

After turning on the dual switches (one for the display and one for the computer) it quickly booted up and I was greeted by a familiar Windows desktop. Placed in the centre was a selection of application icons for some simulated reality software samples, so this is where I started.

My first experience of what the SR Devkit can do came in the form of viewing a 3D trainer. It appeared to be floating right in front of my eyes, about half of it appearing to stick right out of the display and the rest sinking further in, as though it were floating through a window. I was quite impressed by the sense of depth, something I think was helped by the large canvas on which the content could be presented.

The second demo was a very similar concept, but this time a high-quality 3D render of a person’s face – think Unreal’s MetaHuman. It was a bit surreal to be looking into its inanimate eyes, but that weirdness was also a testament to how realistic it both looked and felt.

It wasn’t until my brain finally caught up and registered that these amazing 3D effects were being achieved without glasses, or any other external device that would usually be required, that I realised just how impressive this bit of kit was. Not only was the depth effect strong, but the device had achieved it without me even realising.

I quickly moved on to the other demos to see what else it could do, next choosing a simple zombie catapult game which turned out to be quite a lot of fun. The premise was simple: fire the catapult at the incoming zombies to stay alive as long as possible. The catapult is similar to what you’d expect if you’ve ever played Angry Birds, and you fire it by grabbing it with your hand and pulling it back.

This makes use of the built-in Leap Motion I mentioned earlier, and when it worked, it was remarkable. The calibration between the 3D effect and the positioning of the Leap Motion was spot on, so reaching out and moving the catapult felt like nothing I’ve experienced before; it was seriously cool.

However, as I’ve found with the Leap Motion before, its use can feel a bit hit-and-miss at times. If your hand strays out of the inverted-pyramid-shaped tracking area, the sync can be slow, and oftentimes clunky, to get back – and when engrossed and staring at the display, that happened more often than I’d like.

That said, the 45-degree configuration of the sensor is probably one of the best uses of the Leap Motion I’ve seen, and I’m sure it will only improve when Leap eventually release an updated version of their hardware.

Overall, the experience of the demo content was very impressive. The kit seems to achieve its stunning 3D results by using what appears to be an evolution of what I’ll refer to as 3D TV tech, combined with the eye-tracking sensors positioned just above the display panel.

As you move your head around, the sensors pick up this movement and adjust the virtual camera position in the 3D space accordingly. The sensors largely do a good job of this; however, much like the Leap Motion, the tracking area appears to be limited, and if you accidentally move your face out of the predetermined area it can break the illusion. Whereas losing hand tracking usually just means briefly losing control, a broken 3D effect can be quite jarring.
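Conceptually, the head-coupled rendering described above can be sketched quite simply. The code below is a hypothetical, heavily simplified model – the names, tracking volume, and scale are my own assumptions, not Dimenco’s SDK – showing a tracked head position mapped to a virtual camera offset, with the last good offset held when the head leaves the tracking volume (the “lost tracking” case I ran into):

```python
# Hypothetical sketch: map a tracked head position (metres, relative to the
# display centre) to a virtual camera offset, holding the last good value
# when the head leaves the tracking volume. Not Dimenco's actual API.

TRACKING_VOLUME = {            # assumed axis-aligned tracking box
    "x": (-0.35, 0.35),
    "y": (-0.25, 0.25),
    "z": (0.30, 0.90),         # distance from the display
}

def in_volume(pos):
    """True if the head position lies inside the assumed tracking box."""
    return all(lo <= pos[axis] <= hi
               for axis, (lo, hi) in TRACKING_VOLUME.items())

class HeadCoupledCamera:
    def __init__(self, scale=1.0):
        self.scale = scale                       # world units per metre of head movement
        self.offset = {"x": 0.0, "y": 0.0, "z": 0.0}

    def update(self, head_pos):
        """Move the virtual camera with the head; freeze it on tracking loss."""
        if in_volume(head_pos):
            self.offset = {axis: head_pos[axis] * self.scale
                           for axis in ("x", "y", "z")}
        return self.offset                       # last good offset if tracking was lost

cam = HeadCoupledCamera()
cam.update({"x": 0.1, "y": 0.0, "z": 0.5})       # head inside the volume: camera follows
cam.update({"x": 1.0, "y": 0.0, "z": 0.5})       # head outside: offset is held
```

Freezing on the last good offset rather than snapping back to centre is one plausible reason regaining the sweet spot feels abrupt – the illusion only resumes once your head re-enters the volume.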

It wasn’t a big deal to lose tracking briefly when viewing a 3D model like the trainer or the face, as it was relatively quick to find the sweet spot again; but in some of the larger demos, where the kit was showcasing entire environments, it was actually quite painful on my eyes. This is, however, a first-iteration devkit, and much like the Leap I can see this improving over time. The dual sensors on either side of the display certainly offer more potential than the small Leap Motion, and can more easily keep you in sight.

This issue did highlight one other limitation of the hardware which, whilst not a dealbreaker, could certainly narrow the scope of where we implement this tech going forward: you can only track, and therefore aim, the content at one person at a time. I discovered this fairly early when trying to show my girlfriend the unnerving 3D face from the demo I mentioned earlier. As soon as the eye sensors detected her and redirected the content to her, I lost the depth effect and it suddenly looked very blurry to me. Despite looking crisp to her, for me it was as though I was watching a 3D film at the cinema and had forgotten to put the glasses on!

………………….

Now, trying the demos was a lot of fun, but the main reason for getting the devkit was to see what we could create for it. Among other routes, the SR Devkit allows for content creation using the Unity engine, a program we at Brandwidth use for the majority of our 3D work.

The devkit comes with some examples, as well as an easy-to-use Unity plugin. This meant that in no time at all I was able to start placing my own 3D models into a scene to view them on the display.

Based on my experience from trying the demos, I decided to put as much emphasis as I could on my central target object, avoiding any unnecessary background clutter. After a few quick attempts at adjusting the object position I managed to find that sweet spot of floating just between the real world and the virtual one, the effect that had excited me so much on first impression.

Using a few trivial bits of extra code, it wasn’t a challenge to add a bit of motion to the object, and the results were pretty cool. As mentioned, we do a lot of work in Unity, so it was quick to swap in a few different assets from our other demos to see what worked best. The automotive content was particularly impactful, with the doors and wheels animated or transitioning in. While a traditional 3D display may only support a left-eye and a right-eye view from a single static position on a horizontal plane, the technology here can change the depth effect as you move along all three physical axes.
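The “trivial bits of extra code” amounted to little more than advancing a rotation a small amount each frame – in Unity this would be a one-line `Update()` on a MonoBehaviour. Sketched here in plain Python for illustration (the rate and names are mine, not Unity’s API):

```python
# Sketch of frame-based "turntable" motion, the kind of trivial extra code
# described above. In Unity this logic would live in a MonoBehaviour's
# Update(), scaled by Time.deltaTime; here it's plain Python.

DEGREES_PER_SECOND = 30.0      # assumed spin rate

def rotate(angle, delta_time):
    """Advance the model's rotation by a fixed rate, wrapping at 360 degrees."""
    return (angle + DEGREES_PER_SECOND * delta_time) % 360.0

angle = 0.0
for _ in range(60):            # simulate one second at 60 fps
    angle = rotate(angle, 1 / 60)
```

Scaling by the per-frame delta time keeps the motion speed consistent regardless of frame rate, which matters on a display pushing this many pixels.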

A non-interactive 3D model is a good start, but not a particularly deep experience in itself. However, giving users control over these experiences is also fairly easy to implement, and there is plenty of choice in how to do it. The Leap Motion route is fairly straightforward to code for, and there is a good array of examples to get you started; for users who prefer a more traditional UX, you can add mouse and keyboard controls. It’s easy enough to include both in your experience, and it’s something I would encourage where the experience allows.
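One simple way to offer both input routes is to hide them behind a common interface and fall back to the mouse whenever a hand isn’t being tracked. The sketch below uses made-up class and method names – it is not the Leap Motion (Ultraleap) or Unity API – but shows the shape of the pattern:

```python
# Rough sketch of supporting both hand tracking and mouse input behind one
# interface. All names here are illustrative only, not the actual Leap
# Motion (Ultraleap) or Unity APIs.

class HandTracking:
    def __init__(self):
        self.hand = None                    # set by a tracking callback when a hand appears

    def available(self):
        return self.hand is not None        # is a hand currently tracked?

    def pointer_position(self):
        return self.hand

class MouseInput:
    def __init__(self):
        self.cursor = (0.0, 0.0)            # updated from mouse events

    def pointer_position(self):
        return self.cursor

def active_pointer(hands, mouse):
    """Prefer the tracked hand; fall back to the mouse when tracking drops."""
    return hands.pointer_position() if hands.available() else mouse.pointer_position()

hands, mouse = HandTracking(), MouseInput()
mouse.cursor = (0.5, 0.5)
p1 = active_pointer(hands, mouse)           # no hand tracked, so the mouse drives
hands.hand = (0.1, 0.2)
p2 = active_pointer(hands, mouse)           # hand tracked, so the hand drives
```

Because the fallback happens per query, the experience degrades gracefully when a hand drifts out of the Leap Motion’s tracking pyramid rather than leaving the user stuck.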

My time playing with and learning about the Dimenco SR Devkit was very enjoyable. While I’ve highlighted some potential limitations of the current tech, in the right circumstances the output is stunning. It’s also worth mentioning that the future roadmap looks extremely positive, and it sounds like the new hardware offerings will alleviate most of my concerns.

Even before these new iterations come along, I can’t wait to take what I’ve learnt back to some clients to see how this may fit into some of the work we’ve been discussing. There are definitely some goals for which this might provide part of the solution, and I’d love to get some more hands-on time developing for Simulated Reality!