27 December 2017

Centralized reusable audio feedback mechanisms for Mixed Reality apps

Intro – feedback is key

Although speech recognition in Mixed Reality apps is very good, sometimes even the best recognition fails, or you slightly mispronounce something. Or the command you just said is recognized, but not applicable in the current state. Silence, nothing happens, and you wonder - did the app just not understand me, is the response slow, or what? The result is always undesirable – users wait for something to happen and nothing does. They start to repeat the command, and halfway through the app executes the first command after all. Or even worse – they start shouting, which makes for a quite embarrassing situation (both for user and bystanders). Believe me, I’ve been there. So – it’s super important to inform your Mixed Reality app’s user right away that a voice command has been understood and is being processed. And if you can’t process it, inform the user of that as well.

What kind of feedback?

Well, that’s basically up to you. I usually choose a simple audio feedback sound – if you have been following my blog or downloading my apps you are by now very familiar with the ‘pringggg’ sound I use in every app, be it an app in the Windows Store or one of my many sample apps on GitHub. If someone uses a voice command that’s not appropriate in the current context or state of the app, I tend to give some spoken feedback, telling the user that although the app has understood the command, it can’t be executed now – and, if possible, for what reason. Or prompt for some additional action. For both mechanisms I use a kind of centralized mechanism built on my Messenger behaviour, which has already played a role in multiple samples.

Project setup overview

The hierarchy of the project is as displayed below, and all it does is show the user interface on the right:


If you say “Test command”, you will hear the “pringggg” sound I already described, and if you press the button, you will hear the spoken feedback “Thank you for pressing this button”. Now this is rather trivial, but it only serves to show the principle. Notice, by the way, that the button comes from the Mixed Reality Toolkit examples – I described before how to extract those samples and use them in your app.

The Audio Feedback Manager and Spoken Feedback Manager look like this:


The Audio Feedback Manager contains an Audio Source that just holds the confirmation sound, and a little script “Confirm Sound Ringer” by yours truly, which will be explained below. This sound is intentionally not spatialized, as it’s a global confirmation sound. If it were spatialized, it would also be localized, and the user would be able to walk away from confirmation sounds or spoken feedback, which is not what we want.

The Spoken Feedback Manager contains an empty Audio Source (also not spatialized), a Text To Speech script from the Mixed Reality Toolkit, and the “Spoken Feedback Manager” script, also by me.

ConfirmSoundRinger

using HoloToolkitExtensions.Messaging;
using UnityEngine;

namespace HoloToolkitExtensions.Audio
{
    public class ConfirmSoundRinger : MonoBehaviour
    {
        void Start()
        {
            Messenger.Instance.AddListener<ConfirmSoundMessage>(ProcessMessage);
        }

        private void ProcessMessage(ConfirmSoundMessage arg1)
        {
            PlayConfirmationSound();
        }

        private AudioSource _audioSource;
        private void PlayConfirmationSound()
        {
            if (_audioSource == null)
            {
                _audioSource = GetComponent<AudioSource>();
            }
            if (_audioSource != null)
            {
                _audioSource.Play();
            }
        }
    }
}

Not quite rocket science. If a message of type ConfirmSoundMessage arrives, try to find an Audio Source. If found, play the sound. ConfirmSoundMessage is just an empty class with no properties or methods whatsoever – it’s a bare signal class.
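
For completeness, that means the entire message class presumably looks like this:

public class ConfirmSoundMessage
{
    // Intentionally empty - the type itself is the signal
}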

SpokenFeedbackManager

Marginally more complex, but not a lot:

using HoloToolkit.Unity;
using HoloToolkitExtensions.Messaging;
using System.Collections.Generic;
using UnityEngine;

namespace HoloToolkitExtensions.Audio
{
    public class SpokenFeedbackManager : MonoBehaviour
    {
        private Queue<string> _messages = new Queue<string>();
        private void Start()
        {
            Messenger.Instance.AddListener<SpokenFeedbackMessage>(AddTextToQueue);
            _ttsManager = GetComponent<TextToSpeech>();
        }

        private void AddTextToQueue(SpokenFeedbackMessage msg)
        {
            _messages.Enqueue(msg.Message);
        }

        private TextToSpeech _ttsManager;

        private void Update()
        {
            SpeakText();
        }

        private void SpeakText()
        {
            if (_ttsManager != null && _messages.Count > 0)
            {
                if(!(_ttsManager.SpeechTextInQueue() || _ttsManager.IsSpeaking()))
                {
                    _ttsManager.StartSpeaking(_messages.Dequeue());
                }
            }
        }
    }
}

If a SpokenFeedbackMessage comes in, it’s added to the queue. In the Update method, SpeakText is called, which first checks if the TextToSpeech is available and if there are any messages to process – and if so, it pops the next message off the queue and actually speaks it. The queue has two functions. First, the message may come from a background thread, and by having SpeakText called from Update, processing is automatically transferred to the main loop. Second, it prevents messages from being ‘overwritten’ before they are even spoken.

The trade-off of course is that you might stack up messages if the user quickly repeats an action, resulting in the user getting a lot of talk while the action is already over.

On using Count > 0 instead of LINQ’s Any() – apparently you are to refrain from using LINQ extensively in Unity apps, as it is deemed inefficient. It hurts my eyes to see it written this way, but when in Rome…
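
To illustrate (a sketch, not from the sample code): calling Any() on a Queue&lt;string&gt; goes through IEnumerable&lt;T&gt;, which boxes the queue's struct enumerator and thus allocates on every call – something you don't want in a method that runs every frame:

using System.Collections.Generic;
using System.Linq;

public class QueueCheckExample
{
    private readonly Queue<string> _messages = new Queue<string>();

    public bool HasMessagesLinq()
    {
        // Allocates an enumerator object on every call
        return _messages.Any();
    }

    public bool HasMessagesPlain()
    {
        // A simple property read - no allocation
        return _messages.Count > 0;
    }
}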

Wiring it up

A script SpeechCommandExecuter sits in Managers, next to a Speech Input Source and a Speech Input Handler; it is called by the Speech Input Handler when you say “Test Command”. This is not quite rocket science either, to put it mildly:

using HoloToolkitExtensions.Messaging;
using UnityEngine;

public class SpeechCommandExecuter : MonoBehaviour
{
    public void ExecuteTestCommand()
    {
        Messenger.Instance.Broadcast(new ConfirmSoundMessage());
    }
}

Neither is the ButtonClick script, which is attached to the ButtonPush:

using HoloToolkit.Unity.InputModule;
using HoloToolkitExtensions.Audio;
using HoloToolkitExtensions.Messaging;
using UnityEngine;

public class ButtonClick : MonoBehaviour, IInputClickHandler
{
    public void OnInputClicked(InputClickedEventData eventData)
    {
        Messenger.Instance.Broadcast(
            new SpokenFeedbackMessage { Message = "Thank you for pressing this button"});
    }
}
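
The SpokenFeedbackMessage class is not shown in the snippets here, but judging from its usage it is presumably little more than a payload class like this:

public class SpokenFeedbackMessage
{
    public string Message { get; set; }
}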

The point of doing it like this

Anywhere you now have to give confirmation or feedback, you just send a message – you don’t have to worry about setting up an Audio Source and a Text To Speech and wiring them up correctly. Two reusable components take care of that. Typically, you would not send the confirmation directly from the pushed button or the speech command – you would first validate whether the command can be processed in the component that holds the logic, and then give confirmation or feedback from there.
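
A hypothetical example of that pattern – the delete command and the _selectedObject field are made up for illustration:

using HoloToolkitExtensions.Messaging;
using UnityEngine;

public class ModelManager : MonoBehaviour
{
    private GameObject _selectedObject; // hypothetical app state

    public void ExecuteDeleteCommand()
    {
        if (_selectedObject != null)
        {
            // The command is valid in the current state: confirm, then execute
            Messenger.Instance.Broadcast(new ConfirmSoundMessage());
            Destroy(_selectedObject);
        }
        else
        {
            // The command is understood but not applicable right now: say why
            Messenger.Instance.Broadcast(new SpokenFeedbackMessage
            {
                Message = "There is nothing selected to delete"
            });
        }
    }
}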

Conclusion

I hope to have convinced you of the importance of feedback, and I showed you a simple and reusable way of implementing that. You can find the sample code, as always, on GitHub.

26 December 2017

Short tip: using the UI components from the Mixed Reality Toolkit examples

The Mixed Reality Toolkit is the foundation for nearly all apps built for HoloLens and Windows Mixed Reality immersive headsets. It has a lot of indispensable components that make building these apps not so much a breeze, but at least a lot less tedious. It’s a bit sparse as far as actual UI components are concerned. The team is now in the process of merging the awesome UI stuff from the Mixed Reality Design Labs project into it. This process is far from finished; so far there are only some simple buttons in it.

But if you look into this part of the code in the Mixed Reality Toolkit on GitHub, you will notice there are three folders:


The last one contains the actual Mixed Reality Toolkit, the second the unit tests, which are not very interesting for us right now, but the first contains a whole lot of examples. And in those examples, a whole lot of nice UI controls: a host of buttons, a slider, a check box, toggles, toolbars, a very cool loader animation, helpers to draw lines and curves, and probably a lot I have not noticed yet. There are also a lot of demo scenes to show them off. But the Examples folder contains 114 MB of assets, almost doubling the size of the already hefty MRTK itself.

Extracting only the UI elements is pretty simple:

  • Clone the project from GitHub
  • Copy the whole HoloToolkit-Examples folder into your project’s Assets folder, next to the HoloToolkit
  • Delete everything but these two folders:
    • HoloToolkit-Examples\UX
    • HoloToolkit-Examples\Prototyping
  • [optional] remove the “Scenes” subfolder from both the UX and the Prototyping folder. But maybe you should have a look around first.

So you will end up with the following asset folder structure:


And then you can use cool stuff like this, as shown in the InteractiveButtonComponents scene:


Or these holographic buttons, as you can find in the ObjectCollectionExample scene (amongst a lot of other stuff):


And these samples of how to draw lines in the ‘air’, in the LineExamples scene:


So far I have only used the icon buttons, but those are pretty cool in themselves, because they quite resemble the buttons in the Holographic shell, so your app’s user should immediately recognize what they are for.

No separate code this time, as all the code is in the MRTK repo itself. I do encourage you to root around in it; there are quite some cool surprises in there.

06 December 2017

Getting your logos and splash screens right for Mixed Reality apps

Intro

Way too often I still see Mixed Reality apps with default (white) splash screens, app windows, or – even worse – the default Unity icon. When Mixed Reality was HoloLens-only and brand new, you kind of got away with that, as the WOW factor of the actual app eclipsed almost everything else, and the iconography did not matter much as there was only the Holographic Start menu. Also, although judging from my download numbers a fair number of HoloLenses are in use worldwide, that number is pretty limited compared to, say, PCs.

These days of leeway are over. With the advent of the Fall Creators Update, Mixed Reality apps can run on PCs with the right hardware – and instead of a $3000+ device, a $400 headset will do. And as soon as I made an app available for immersive headsets, my download numbers exploded. Many more eyeballs, so much more visibility; and in Windows, the flexible start menu and its different tile sizes make omissions in the iconography glaringly visible. I also notice ‘ordinary’ users are a lot less forgiving. So, time to step up your game.

Note: I am currently using Unity 2017.2.0p2 MRTP5. Things may change. They tend to do that very much in this very new and exciting field :)

So what do you need?

At the very minimum, an icon of size 620x620 and a 1240x600 splash screen. 620x620 is the 310x310 icon at 200% scale. I tend to use 200% icons as those are the default sizes created by the Unity editor. I give the starting 620x620 file the default name “Square310x310Logo.scale-200.png” and go from there.

In my Unity Assets folder I create a folder “UWPassets” and I drag the “Square310x310Logo.scale-200.png” in there. I then proceed to make the following files from the first one:

  • Wide310x150Logo.scale-200.png (620x300 pixels)
  • Square150x150Logo.scale-200.png (300x300 pixels)
  • Square71x71Logo.scale-200.png (142x142 pixels)
  • Square44x44Logo.scale-200.png (88x88 pixels)
  • StoreLogo.scale-100.png (50x50 pixels)

Do pay attention to the last one – that’s 100% scale, so it’s the actual size. To the right you see my actual icon.

These are just the standard sizes for UWP apps; if you are coming in from a Unity perspective, you may not be familiar with them.

And then, as I said, you need the splash screen. This needs to be 1240x600. I have created this ‘awesome’ splash screen:


I suggest you take something that says a bit more about your app, but this will do for an example.

The result in the Unity editor:


Putting icons in the right place in Unity

This is not rocket science: Hit File/Build Settings and then the button “Player Settings”. This will open the “PlayerSettings” pane on the right side of your editor. First, you open the pane “Icon” and then the section “Store logo”. From the UWPAssets folder, drag the “StoreLogo.scale-100” into the top box.


You can leave the rest empty. Then skip all the other sections and scroll all the way down to the section “Universal 10 tiles and logos”. Open “Square 44x44 Logo”:


Drag the “Square44x44Logo.scale-200.png” into the 4th box from the top, marked 200%. Then open the section “Square 71x71 Logo” and drag “Square71x71Logo.scale-200.png” in. Proceed similarly with the remaining three sizes until all the 200% boxes are filled.

Splash screen, splash screen and splash screen

Now dig this – there are actually three places to put splash screens in. Potentially you can have different splash images - I tend to use the same one everywhere. So scroll further down to the Splash Screen section. First up, on top, is the “Virtual Reality Splash Image”. You can drag your splash screen in there if you want.


Next up, a bit further down, is the Windows splash screen:


This is the most important one. This appears on your desktop when the app starts; it is also shown in a rectangle in immersive headsets during the app ‘transition scene’ (when your Cliff House is replaced by the app), and it’s displayed on your default app screen:



And finally, there’s the Windows Holographic Splash screen:


This one is actually important on a HoloLens. As HoloLens does not have a ‘transition scene’, it shows a floating splash screen for a few seconds. You can see it in an immersive headset too, but usually for a very short time – most of the time it flashes by in a split second before the actual app appears.

So what is the very first splash screen for? To be honest, I have no idea. Possibly it’s for other, non-Windows Mixed Reality based apps.

Just one tiny thing left

I tend to generate the app in a subfolder “App” in the root of the project. Typically, I place a .gitignore in there that looks like this:

*.*
*

True to its name, it ignores – everything. Because, well, it’s a generated app, right? But if you go into the generated app’s assets (App\SplashScreen\Assets) you will notice something odd:

Although we nicely created an icon for every size, there’s still this pesky “Square44x44Logo.targetsize-24_altform-unplated.png” icon with the default logo that – interestingly – is 24x24 pixels big. So where is this used? Apparently in the task bar and – as I have noticed – in the developer portal. So although your customers won’t see it in the Store (I think), you will; it will clutter up your app list in the developer portal, and that’s annoying. I have found no way to actually create this from Unity, so I take the easy approach: I create yet another icon, now 24x24, overwrite that “Square44x44Logo.targetsize-24_altform-unplated.png” in App\SplashScreen\Assets and add it to Git manually there. Unity won’t overwrite it, and if it gets deleted or overwritten, I can always revert. You just need to remember to check the icon before actually submitting.

Conclusion

So now you know how to properly assign the minimum iconography and splash screens. Let’s agree on no longer publishing stuff with default icons or empty splash screens to the Store, right? ;)

Project code – although it does not actually do anything – can be found here. But it’s useful reference material.

12 November 2017

Finding the floor - and displaying holograms at floor level in HoloLens apps

Intro

Both of my HoloLens apps in the Store ask the user to identify some kind of horizontal space to put holograms on – in one case Schiphol Airport, in the other case a map of some part of the world. Now you can of course use Spatial Understanding, which is awesome and offers very advanced capabilities, but it requires some initial activity by the user. Sometimes you just want to identify the floor or some other horizontal plane – if only to find out how tall the user is. This is the actual code I wrote for Walk the World, extracted for you. In the demo project, it displays a white plane at floor level.

Setting the stage

We start with creating an empty project. Then we proceed with importing the newest Mixed Reality Toolkit. If you have done that, you will notice the extra menu option "HoloToolkit" no longer appears; it now says "Mixed Reality Toolkit". It still has three settings, but one of them has profoundly changed: the setting "Apply Mixed Reality Scene Settings" basically adds everything you need to get started:


  • A "MixedRealityCameraParent" - the replacement for the HoloLensCamera that encapsulates a camera that will work both on HoloLens and immersive headsets.
  • A default cursor
  • An input manager that processes input from gestures, controllers or other devices depending on whether your app runs on a HoloLens or an immersive headset

Now I tend to organize stuff a little differently, so after I have set up the scene it looks like this:


I make an empty game object "HologramCollection" that will hold my holograms, and all the standard or non-graphic stuff I tend to chuck in a game object "Managers". Notice I also added SpatialMapping. If you click the other two options in the Mixed Reality Toolkit/Configure menu, our basic app setup is ready to go.

Some external stuff to get

Then we proceed to import LeanTween and Mobile Power Ups Vol Free 1 from the Unity Store. Both are free. Note – the latter is deprecated, but the one arrow asset we need from it is still usable. If you can't get it from the store anymore, just nick it from my code ;)


Recurring guest appearances

We need some more stuff I wrote about before:

  • The Messenger, although its internals have changed a little since my original article
  • The KeepInViewController class to keep an object in view – an improved version of the MoveByGaze class from my post about a floating info screen
  • The LookAtCamera class to keep an object oriented towards the camera

Setting up the initial game objects

The end result should be this:


HologramCollection has three objects in it:

  • A 3DTextPrefab "LookAtFloorText"
  • An empty game object "ArrowHolder"
  • A simple plane. This is the object we are going to project on the floor.

Inside the ArrowHolder we place two more objects:

  • A 3DTextPrefab "ConfirmText"
  • The "Arrows Green" prefab from the Mobile Power Ups Vol Free 1. That initially looks like on the right

So let's start at the top setting up our objects:

LookAtFloorText

This is fairly easy. We scale it to 0.005 in all directions, enter the text "Please look towards the floor" in the text mesh, and set a couple of other parameters:

  • Z position is 1, so it will spawn 1 meter in front of you.
  • Character size to 0.1
  • Anchor to middle center
  • Font size to 480
  • Color to #00FF41FF (or any other color you like)

After that, we drag two components on it

  • KeepInViewController
  • LookAtCamera

With the following settings:

  • Max Distance 2.0
  • Move time 0.8
  • Rotate Angle 180

This will keep the text at a maximum distance of 2 meters, or closer if you are looking at an object that is nearer; it will move in 0.8 seconds to a new position, and with an angle of 180 degrees it will always be readable to you.


ConfirmText

This is hardly worth its own sub-paragraph, but for the sake of completeness I show its configuration.

Notice:

  • Y = 0.27, Z is 0 here.
  • This has only the LookAtCamera script attached to it, with the same settings. There is no KeepInViewController here.

Arrows Green

This is the object nicked from Mobile Power Ups Vol Free 1.

  • Y = 0.1, I moved it a little upwards so it will always appear just above the floor
  • I rotated it over X so it will point to the floor.
  • I added a HorizontalAnimator to it with a spin time of 2.5 seconds over an absolute (vertical) axis so it will slowly spin around.

Plane

The actual object we are going to show on the floor. It's a bit big (default size = 10x10 meters), so we are going to scale it down a little:


And now for some code.

The general idea

Basically, there are two classes doing the work; the rest is only 'support crew':

  • FloorFinder actually looks for the floor, and controls a prompt object (LookAtFloorText) that prompts you - well, to look at the floor
  • FloorConfirmer displays an object that shows you where the floor was found, and then waits for a method to be called that either accepts or rejects the floor. In the sample app, this is done by speech commands.
  • They both communicate by a simple messaging system

The message class is of course very simple:

using UnityEngine;

public class PositionFoundMessage
{
    public Vector3 Location { get; }

    public PositionFoundMessage(Vector3 location)
    {
        Location = location;
    }

    public PositionFoundStatus Status { get; set; }
}

With an enum to go with it, indicating where in the process we are:

public enum PositionFoundStatus
{
    Unprocessed,
    Accepted,
    Rejected
}

On startup, the FloorConfirmer hides its confirmation object (the arrow with text), and the FloorFinder shows its prompt object. When the FloorFinder detects a floor, it sends the position in a PositionFoundMessage with status "Unprocessed". It also listens to PositionFoundMessages itself. If it receives one that has status "Unprocessed" (which is, in this sample, only sent by itself), it will disable itself and hide the prompt object (the text saying "Please look at the floor").

If the FloorConfirmer receives a PositionFoundMessage with status "Unprocessed", it will show its confirmation object at the location where the floor was detected. And then, as I wrote, it waits for its Accept or Reject method to be called. If Accept is called, it resends the PositionFoundMessage with status "Accepted" to anyone who might be interested - in this app, that's a simple class "ObjectDisplayer" that shows a game object that has been assigned to it at the correct height below the user's head. If the Reject method is called, FloorConfirmer resends the message as well - but with status "Rejected". Which will wake up the FloorFinder again.

Finding the actual floor

using HoloToolkit.Unity.InputModule;
using HoloToolkitExtensions.Messaging;
using HoloToolkitExtensions.Utilities;
using UnityEngine;

public class FloorFinder : MonoBehaviour
{
    public float MaxDistance = 3.0f;

    public float MinHeight = 1.0f;

    private Vector3? _foundPosition = null;

    public GameObject LabelText;

    private float _delayMoment;

    void Start()
    {
        _delayMoment = Time.time + 2;
        Messenger.Instance.AddListener<PositionFoundMessage>(ProcessMessage);
#if !UNITY_EDITOR
        Reset();
#else
        LabelText.SetActive(false);
#endif
    }

    void Update()
    {
        if (_foundPosition == null && Time.time > _delayMoment)
        {
            _foundPosition = LookingDirectionHelpers.GetPositionOnSpatialMap(MaxDistance,
                GazeManager.Instance.Stabilizer);
            if (_foundPosition != null)
            {
                if (GazeManager.Instance.Stabilizer.StablePosition.y -
                    _foundPosition.Value.y > MinHeight)
                {
                    Messenger.Instance.Broadcast(
                        new PositionFoundMessage(_foundPosition.Value));
                    PlayConfirmationSound();
                }
                else
                {
                    _foundPosition = null;
                }
            }
        }
    }

    public void Reset()
    {
        _delayMoment = Time.time + 2;
        _foundPosition = null;
        if (LabelText != null)
        {
            LabelText.SetActive(true);
        }
    }

    private void ProcessMessage(PositionFoundMessage message)
    {
        if (message.Status == PositionFoundStatus.Rejected)
        {
            Reset();
        }
        else
        {
            LabelText.SetActive(false);
        }
    }

    private void PlayConfirmationSound()
    {
        Messenger.Instance.Broadcast(new ConfirmSoundMessage());
    }
}

This has three public properties - a prompt object (this becomes the text "Please look towards the floor"), a maximum distance to try to find the floor, and the minimum height the floor should be below the user's head. These properties are set as displayed on the right:

The Update method does all the work - if a position on the spatial map is found that's at least MinHeight below the user's head, then we might have found the floor, and we send out a message (with default status Unprocessed). The method below Update, ProcessMessage, actually gets that message too and hides the prompt text.

The helper method "GetPositionOnSpatialMap" in LookingDirectionHelpers simply tries to project a point on the spatial map at maximum distance along the viewing direction of the user. It's like drawing a line projecting from the users head ;)

public static Vector3? GetPositionOnSpatialMap(float maxDistance = 2,
                                               BaseRayStabilizer stabilizer = null)
{
    RaycastHit hitInfo;

    var headRay = stabilizer != null
        ? stabilizer.StableRay
        : new Ray(Camera.main.transform.position, Camera.main.transform.forward);

    if (SpatialMappingManager.Instance != null &&
        Physics.Raycast(headRay, out hitInfo, maxDistance, 
        SpatialMappingManager.Instance.LayerMask))
    {
        return hitInfo.point;
    }

    return null;
}

Is this the floor we want?

using HoloToolkitExtensions.Messaging;
using UnityEngine;

public class FloorConfirmer : MonoBehaviour
{
    private PositionFoundMessage _lastReceivedMessage;

    public GameObject ConfirmObject;

    // Use this for initialization
    void Start()
    {
        Messenger.Instance.AddListener<PositionFoundMessage>(ProcessMessage);
        Reset();
#if UNITY_EDITOR
        _lastReceivedMessage =  new PositionFoundMessage(new Vector3(0, -1.6f, 0));
        ResendMessage(true);
#endif
    }

    public void Reset()
    {
        if(ConfirmObject != null) ConfirmObject.SetActive(false);
        _lastReceivedMessage = null;
    }

    public void Accept()
    {
        ResendMessage(true);
    }

    public void Reject()
    {
        ResendMessage(false);
    }

    private void ResendMessage(bool accepted)
    {
        if (_lastReceivedMessage != null)
        {
            _lastReceivedMessage.Status = accepted ? 
                 PositionFoundStatus.Accepted : PositionFoundStatus.Rejected;
            Messenger.Instance.Broadcast(_lastReceivedMessage);
            Reset();
            if( !accepted) PlayConfirmationSound();
        }
    }

    private void ProcessMessage(PositionFoundMessage message)
    {
        _lastReceivedMessage = message;
        if (message.Status != PositionFoundStatus.Unprocessed)
        {
            Reset();
        }
        else
        {
            ConfirmObject.SetActive(true);
            ConfirmObject.transform.position = 
                message.Location + Vector3.up * 0.05f;
        }
    }


    private void PlayConfirmationSound()
    {
        Messenger.Instance.Broadcast(new ConfirmSoundMessage());
    }
}

A rather simple class - it disables its confirm object at startup. If it gets a PositionFoundMessage, two things might happen:

  • If it's an Unprocessed message, it will activate its confirm object (the arrow) and place it at the location provided inside the message (well, 5 cm above that).
  • For any other PositionFoundMessage, it will deactivate itself and hide the confirm object

If the Accept method is called from outside, it will resend the message with status Accepted for any interested listener and deactivate itself. If the Reject method is called, it will resend the message with status Rejected - effectively deactivating itself too, but waking up the floor finder again.

And thus these two objects, the FloorFinder and the FloorConfirmer, can work seamlessly together while having no knowledge of each other whatsoever.

The final basket

For anything to happen after a PositionFoundMessage with status Accepted is sent, there also needs to be something that actually receives it and acts upon it. The ObjectDisplayer places the game object it's attached to at the same vertical position as the point it received - that is, 5 cm above it. It's not advisable to do this at the exact vertical floor position, as stuff might disappear under the floor. I have found horizontal planes are never smooth or, indeed, actually horizontal.

using HoloToolkitExtensions.Messaging;
using UnityEngine;

public class ObjectDisplayer : MonoBehaviour
{
    void Start()
    {
        Messenger.Instance.AddListener<PositionFoundMessage>(ShowObject);

#if !UNITY_EDITOR
        gameObject.SetActive(false);
#endif
    }

    private void ShowObject(PositionFoundMessage m)
    {
        if (m.Status == PositionFoundStatus.Accepted)
        {
            transform.position = new Vector3(transform.position.x, m.Location.y,
                transform.parent.transform.position.z) + Vector3.up * 0.05f;
            if (!gameObject.activeSelf)
            {
                gameObject.SetActive(true);
            }
        }
    }
}

This script is dragged onto the plane that will appear on the floor.

Wiring it all together

FloorFinder and FloorConfirmer sit together in the Managers object, but there's more stuff in there to tie all the knots:

  • The Messenger, for if there are messages to be sent, there should also be something to send them around
  • A Speech Input Source and a Speech Input Handler. Notice the latter calls FloorConfirmer's Accept method on "Yes", and the Reject method on "No".

Adding some sound


If you download the source code and run it, you might notice my trademark "pringggg" sound when the app does something. You might also have noticed various scripts sending ConfirmSoundMessage messages. In the Managers object there's another game object called "ConfirmSoundManager". It has an Audio Source and a ConfirmSoundRinger which, as you might expect, is not too complicated:

using HoloToolkitExtensions.Messaging;
using UnityEngine;

public class ConfirmSoundRinger : MonoBehaviour
{
    void Start()
    {
        Messenger.Instance.AddListener<ConfirmSoundMessage>(ProcessMessage);
    }

    private void ProcessMessage(ConfirmSoundMessage arg1)
    {
        PlayConfirmationSound();
    }

    private AudioSource _audioSource;
    private void PlayConfirmationSound()
    {
        if (_audioSource == null)
        {
            _audioSource = GetComponent<AudioSource>();
        }
        if (_audioSource != null)
        {
            _audioSource.Play();
        }
    }
}

Conclusion

And that's it. Simply stare at a place below your head (by default at least 1 meter), say "Yes", and the white plane will appear exactly on the ground. Or, as I explained, a little above it. Replace the plane with your object of choice and you are good to go, without using Spatial Understanding or complex code.

As usual, demo code can be found on GitHub.

04 November 2017

Build for both from one source – HoloLens and Immersive apps

Intro

The time has come. As I predicted in my previous blog post, it’s now possible to build a HoloLens app and an Immersive app (targeting the new immersive headsets) from one source, thanks to the awesome work done by the folks working on the Mixed Reality Toolkit. No need to keep two branches anymore. There is still some fiddling to do, but that is merely following a script when you create and submit the final apps. And a script – that is exactly what I am going to describe.

Stuff you will need

Build the HoloLens app

  • Open the project in Unity 2017.1.x
  • You may get the following popup if the project was last opened in Unity 2017.2.x:


  • Just click “Continue”
  • You may get a popup like this:


  • Click “Cancel”
  • Open the Build Settings (File/Build settings)
  • Select SDK 15063


  • I always create Unity C# projects, but that checkbox is not mandatory
  • Click “Player Settings”
  • Expand Panel “Other settings”


  • Make sure Windows Holographic is available. I always remove the “WindowsMR (missing from build)” entry if it says so. You will only see this if you have previously built an Immersive app using Unity 2017.2.x
  • Build the Visual Studio solution by clicking the Build button on the Build settings Window
  • Open the resulting solution
  • Open the Package.appxmanifest in an XML editor (not the default editor – you will have to make manual changes).
  • Find the text TargetDeviceFamily Name="Windows.Universal" and change that to TargetDeviceFamily Name="Windows.Holographic" (see the snippet after this list)
  • While you are at it, check if MinVersion in that line is set to 10240 and MaxVersionTested is set to 10586
  • Build the store packages as usual. Don’t forget to set the configuration to Master and build only x86 packages, as those are the only ones required (and supported) by HoloLens.
  • Run the WACK to see if nothing odd has happened.
  • Upload the packages to the store. The store should automatically select Holographic as the only target. Don’t submit yet – there’s more to add to it.
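
For reference, the Dependencies element in the manifest then ends up looking like this (using the version numbers from the steps above):

<Dependencies>
<TargetDeviceFamily Name="Windows.Holographic" MinVersion="10.0.10240.0" 
                    MaxVersionTested="10.0.10586.0" />
</Dependencies>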

Build the Immersive App

  • Open the Unity project with Unity 2017.2.0f3-MRTP3
  • You might once again get a popup complaining about the non-matching editor. Just click “Continue” again
  • Likewise, press “Cancel” again if the editor complains about not being able to open library files
  • Open the Build Settings (File/Build settings)
  • Select SDK 16299


  • Click “Player Settings”
  • Expand the bottom panel, XR Settings
  • Verify Windows Mixed Reality is selected.


  • Build the Visual Studio solution by clicking the Build button on the Build settings Window
  • Open the resulting solution
  • Open the Package.appxmanifest in an XML editor again
  • Find the text TargetDeviceFamily Name="Windows.Universal" and change that to TargetDeviceFamily Name="Windows.Desktop" (see the snippet after this list)
  • While you are at it, check if MinVersion and MaxVersionTested in that line are both set to 16299
  • For all projects in the solution, change the Min version to 16299. We don’t want this app to land on anything older than the Fall Creators Update, since only the FCU supports Windows Mixed Reality


  • Build the store packages (configuration Master again).
    • Don’t forget to set the configuration to Master but this time build x86 and x64 packages, as those are the platforms supported for desktop PCs (although in reality, I think most if not all Mixed Reality capable PCs will be x64)
    • Make sure you don’t overwrite the HoloLens app you created earlier – choose a different folder or copy the generated packages.
    • Make sure – and this is important – the version numbers of both apps are different. The store won’t accept two packages with the same number. As you can see, I created the HoloLens app with version 3.2.8.0 and the Immersive app with 3.2.9.0


  • Upload the package to the same submission.
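
And here the Dependencies element in the manifest ends up like this:

<Dependencies>
<TargetDeviceFamily Name="Windows.Desktop" MinVersion="10.0.16299.0" 
                    MaxVersionTested="10.0.16299.0" />
</Dependencies>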

If you have done everything right, it should kind of look like this, as I showed in my previous post:


This is an actual screenshot from the version I submitted successfully to the store last week, which has just been rolled out (I got it on my HoloLens and my MR PC yesterday, and just got the mail that processing has been completed). Make sure to check out all the other checkmarks you need to check – see once again my previous post – to prevent disappointing your users.

Some little tricks I used

The Mixed Reality Toolkit has now reached a maturity level where you don’t have to do much in your app to support both scenarios. The most dramatic change is using a MixedRealityCamera instead of the good old HoloLensCamera. The new camera has separate quality settings for HoloLens and Immersive apps. For HoloLens this is set to “Fastest” by default; the “Opaque Display Settings” (those are for Immersive apps) are set to “Fantastic”. I tend to crank up the HoloLens setting a notch or two (in this case, one).


This results in considerably better graphics, but be very careful with that – you might hose your HoloLens app’s performance. So test thoroughly before messing with this setting.

Some little piece of code I use, stolen from – er, inspired by – Mike Taulty, to check if I am on a HoloLens or not:

public static class OpaqueDetector
{
    public static bool IsOpaque
    {
        get
        {
#if !UNITY_EDITOR
            if (Windows.Foundation.Metadata.ApiInformation.IsTypePresent(
                "Windows.Graphics.Holographic.HolographicDisplay"))
            {
               return Windows.Graphics.Holographic.HolographicDisplay.GetDefault()?.IsOpaque == true;
            }
#endif
            return false;
        }
    }
}

Turns out I could call HolographicDisplay in the newest SDK, but not in the old one. Using Mike’s trick allowed it to work. Which allows me to do simple things such as:

SomeObject.SetActive(OpaqueDetector.IsOpaque)

This, for instance, I use to disable the MotionControllers prefab inside the MixedRealityCameraParent, because I don’t want all that stuff to be initialized on HoloLens.


I also use it to skip the first phase of the HoloLens app – where the user has to detect the floor.

Conclusion

If you are writing a HoloLens app now, especially if you are thinking general market, it’s almost a no-brainer to support Immersive headsets as well. Although HoloLens seems to do pretty well worldwide judging from my download numbers, adding an Immersive app has added a serious spike to those download numbers, and opened my app to a considerably wider audience.

Disclaimer: as with everything in this super fast moving space, this is how it works now. I think the plan is for Unity to release a unified editor that can generate both apps in the near future. But it’s already so easy now that you would be crazy not to go forward.

One final tip: be sure to regularly test on both types of devices if you are serious about expanding to Immersive. A bit of code that works fine on one type may not work so hot on another – or sometimes not even compile. You still have to watch your step, but it’s magnitudes easier now than it was in the era running up to the Fall Creators Update release.

Enjoy! I hope you build something awesome!

14 October 2017

Lessons learned from adapting Walk the World from pure HoloLens to Windows Mixed Reality

Heads-up (pun intended)

This is not my typical code-with-sample story – this is a war story from the front line of Windows Mixed Reality development. How did I get here, what did I learn, what mistakes did I make, what scars do I have to show, and how did I win in the end?

The end

On the evening (CET) of Tuesday, October 10, 2017, Kevin Gallo - VP of Windows Developer Platform - announced in London the release of the SDK for the Windows 10 Fall Creators Update and the opening of the Windows Store for apps targeting that release - including Mixed Reality apps. 

Mere hours after that - Thursday had just arrived in the Netherlands - an updated version of Walk the World with added support for Windows Mixed Reality passed certification, and became available for download in the Store on Friday the 13th around 8:30pm CET. I was able to download, install and verify it was working as I expected. Four days before the actual official rollout of the FCU, including the Mixed Reality portal, I was in the Store. Against all odds, I not only managed to make my app available, but also got it available as a launch title.

Achievement unlocked ;)

What happened before

On June 28th, 2017, I was invited to Unity Unite Europe 2017 by Microsoftie Desiree Lockwood, whom I met numerous times in the course of becoming an MVP. Not having to fly 9 hours to meet an old friend but only having to take a short hop on a train, I gladly accepted. On a whim, I decided to bring my HoloLens with Walk the World for HoloLens loaded on it. We had lunch and I demoed the app, showing Mount Rainier about 4 meters high, in a side building. That apparently made quite an impression. Talk quickly moved to the FCU, Mixed Reality, and how much work it would be to make my app available for MR as well. In a very uncharacteristic moment of hubris I said "you get me a headset, I will get you this app". I got guidance on how to pitch my app, I did follow the instructions, and July 27th the headset arrived.

Suddenly it was time to make sure I lived up to my big words.

Challenge 1: hardware

My venerable old development PC, dating back to 2011, had a video card that in no way in Hades would be able to drive a headset. I could get a second-hand video card, and the PC, running the Creators Update AKA RS2, said it was ready to rock. So I happily enabled a dual boot config, added the Fall Creators Update Insiders preview, and then ran into my first snag.

On the Creators Update, the Mixed Reality portal is nothing more than a preview; the preview ran nicely on my old PC, but the upcoming production version apparently would not. Maybe I should have read this part of the Mixed Reality development documentation better. Not only did the GPU not cut it by far, but the CPU was way too old as well. So with a headset on the way, I was in trouble.


Fortunately, one of my colleagues is an avid gamer. She and her husband took a look at the specs, and built an amazing PC for me in a matter of days. Its specs are:

  • CPU: AMD Ryzen 7 1700, 3.0 GHz (3,7 GHz Turbo Boost) socket AM4 processor
  • Graphics card: Gigabyte GeForce GTX 1070 G1 Gaming 8GB
  • Motherboard: ASUS PRIME B350-PLUS, socket AM4
  • Memory: Corsair 16 GB DDR4-3000
  • Storage: Crucial MX300 1TB M.2
  • Power supply: Seasonic Focus Plus Gold 650W

This is built into a Fractal Design Core 2500 Tower with an extra Fractal Design Silent Series R3 120mm case fan. My involvement in the actual creation of this monster was limited to supplying maximum physical dimensions and entering payment details. Software is my shtick, not hardware. But I can tell you this device runs Windows Mixed Reality like a charm, and very stable, too. Thanks Alexandra and Miles!

Lesson 1: RTFM, and then wait till the FM is indeed final before making assumptions.

Lesson 2: don’t skimp on hardware especially when you are aiming for development.

Challenge 2: tools in flux

Developing for Windows Mixed Reality in early August 2017 was a bit of a challenge. Five factors were in play:

  • The Fall Creators Update Insiders preview
  • The Mixed Reality Portal
  • Visual Studio 2017.x (a few updates came out during the timeframe)
  • Unity 2017 (numerous versions)
  • The HoloToolkit (halfway rechristened the Mixed Reality Toolkit) – or actually, the development branch for Mixed Reality.

Only when all five of these stars aligned would things actually work together. Only three of the stars were in Microsoft’s control – Unity is of course made by Unity, and the Mixed Reality Toolkit is an open source project only partially driven by Microsoft. Four of them were very much in flux. A new version comes out for one of these stars, and the whole constellation starts to wobble. Fun things I had to deal with were, amongst others:

  • For quite some time, Unity could not generate Unity C# projects, but only ‘player’ projects. Which meant debugging was nearly impossible. But it also made it a fun second-guessing-the-compiler game, kind of like in the very old days before live debugging (yes kids, that’s how long I have been developing software). Effectively, this meant I had to leave the "Unity C# Projects" checkbox unchecked in the build settings, because it created something the compiler did not want - let alone it being deployable.
  • An update in Visual Studio 2017 made it impossible to run Unity generated projects unless you manually edited project.lock.json or downgraded Visual Studio (which, in the end, I did).
  • Apps ran only once for a while; then you had to reset the MR portal. Or they only showed a black screen. The next time, they ran flawlessly. This was caused by a video card driver crash. This was, in a manner of speaking, a 6th star in the constellation, one that fortunately quickly disappeared.
  • For a while, I could not start apps from Visual Studio. I could only start them from the start menu. And only from the desktop start menu. Not from the MR start menu.
  • The Mixed Reality version and the HoloLens version of the app got quite far out of sync at one point.

Lesson 3: the bleeding edge is where you suffer pain. But it is also where you get the biggest gain. And this is where the community’s biggest strengths come to light.

A tale of two tool sets

I wanted to move forward with Mixed Reality, but at the same time I wanted to maintain the integrity of the HoloLens version. So although the sources I wrote myself remained virtually the same, at one point the versions of Unity, the Mixed Reality Toolkit and even Visual Studio I needed were different. For HoloLens development I used:

  • Visual Studio 2017 15.3.5
  • Unity 2017.1.1f1
  • The Mixed Reality Toolkit master branch.

For Mixed Reality development I used:

  • Visual Studio 2017 15.4.0 preview 5
  • Unity 2017.2.0f1
  • The Mixed Reality Toolkit Dev_Unity_2017.2.0 branch

Why a Visual Studio preview? Well, that particular preview contained the 16299 SDK (as fellow MVP Oren Novotny pointed out in a tweet), and although I did not know for sure 16299 would indeed be the FCU, I decided to go for it. Late afternoon (CET) of Sunday, October 8, I built the package, pressed it through the WACK, and submitted it. And as I wrote before, it sneaked through shortly after the Store was declared open, becoming an unplanned Mixed Reality release title. Unplanned by Microsoft, that is. It was definitely planned by me. ;)

In the mean time, things are still changing – see this screenshot from the HoloDeveloper Slack group, which I highly recommend joining, especially the immersive_hmd_info and mrtoolkit_holotoolkit channels, as these give a lot of up-to-date information on the five-star-constellation changes and wobbles:


Lesson 4: keep close track of your tool versions

Lesson 5: join the HoloDeveloper Slack group (and this means something, coming from a self-proclaimed NOT-fan of Slack ;) )

A tale of two source trees

As I already mentioned, I needed to use two versions of the Mixed Reality Toolkit. These are distributed in the form of Unity packages, which means they insert themselves into the source of your app, as source. It’s not like you reference an assembly or a NuGet package. This had a kind of nasty consequence – if I wanted to move forward and keep my HoloLens app intact for the moment, I had to make a separate branch for Mixed Reality development, which is exactly what I did. So although the sources I wrote for my app are virtually the same, there was a different toolkit in my sources. Wow, did Microsoft mess up this one, right?

No. Not at all. Think with me.

  • I have a master branch that is based upon the master branch of the Mixed Reality Toolkit – this contains my HoloLens app
  • I have an MR branch based upon the Dev_Unity_2017.2.0 branch – this is the Mixed Reality variant. In this branch sits all the intelligence that makes the app work on a HoloLens and an immersive headset.
  • At one point the stars will align to a point where I can use one version of everything (most notably, Unity, which keeps on being a wild card in this constellation) to generate an app that will work on all devices. Presumably the Mixed Reality Dev_Unity_2017.2.0 branch will become the master branch. Then I will not merge to my master branch – that will be deleted. My MR branch will be based upon the latest stuff and will become the source of everything.

Changes in code and Unity objects

Preprocessing directives

In the phase where I could not create Unity C# projects - hence no debuggable projects - it seemed to me that UWP code within #if UNITY_UWP compiler directives did not get executed in the non-debuggable projects that were the only thing I could generate. Peeking in the HoloToolkit - I beg your pardon - Mixed Reality Toolkit, I saw that all UNITY_UWP compiler directives were gone, and several others were used. I tried WINDOWS_UWP and lo and behold - it worked. Wanting to keep backwards compatibility, I changed all the #if UNITY_UWP directives to #if UNITY_UWP || WINDOWS_UWP. I am not really sure it's still necessary - looking in the Visual Studio solution build configuration now, I see both conditionals defined. I decided to leave it there.
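
So a typical guarded block now looks like this (DoSomeUwpOnlyThing is a made-up placeholder):

#if UNITY_UWP || WINDOWS_UWP
    // UWP-only code, compiled into the player build but not in the Unity editor
    DoSomeUwpOnlyThing();
#endif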

Camera

Next to the tried and trusted HoloLensCamera, there's now the MixedRealityCamera. This also includes support for controllers and stuff. What you need to do is to disable (or remove) the HoloLensCamera and add a MixedRealityCameraParent:


This includes the actual camera, the in-app controller display (just like in the Cliff House, and it looks really cool) as well as a default floor - a kind of bluish square that appears on ground level. I think its apparent size is about 7x7 meters, but I did not check. As Walk the World has its own 'floor' - a 6.5x6.5 meter map - I did not need that, so I disabled that portion.

Runtime headset checking - for defining the floor

I am not sure about this one - but when running a HoloLens app, position (0,0,0) is the place where the HoloLens is at the start of the app. That is why my Walk the World for HoloLens starts up with a prompt for you to identify the floor. That way, I can determine your height and decide how far below your head I need to place the map to make it appear on the floor. Simply a matter of sending a raycast, having it intersect with the Spatial Mapping at least 1 meter below the user's head, and going from there. I will blog about this soon. In fact, I had already started doing so, but then this came around.

First of all, we don't have Spatial Mapping in an immersive headset. But experimenting, I found out that (0,0,0) is not the user's head position, but apparently the floor directly beneath the headset on startup. This makes life a whole lot easier. I just check

if(Windows.Graphics.Holographic.HolographicDisplay.GetDefault().IsOpaque)

then skip the whole floor-finding experience, create the initial map at (0,0,0), and I am done.
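
Combined with the compiler directive trick described earlier, the startup logic boils down to something like this - ShowMapAt and StartFloorFindingExperience are made-up placeholder methods:

private void PositionInitialMap()
{
    var isOpaque = false;
#if UNITY_UWP || WINDOWS_UWP
    isOpaque = Windows.Graphics.Holographic.HolographicDisplay.GetDefault().IsOpaque;
#endif
    if (isOpaque)
    {
        // Immersive headset: (0,0,0) is already at floor level
        ShowMapAt(Vector3.zero);
    }
    else
    {
        // HoloLens: (0,0,0) is the user's head, so find the floor first
        StartFloorFindingExperience();
    }
}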

Stupid tight loops

In my HoloLens app I got away with calling this on startup.

private bool CheckAllowSomeUrl()
{
    var checkLoader = new WWW("http://someurl");
    while(!checkLoader.isDone);
    return checkLoader.text == "true";
}

This worked, as it was in the class that was used to build the map. In the HoloLens app this was not used until the user had defined the floor, so it had plenty of time to do its thing. Now, this line was called almost immediately after app startup, the whole thing got stuck in a tight loop, and I only got a black screen.

In the mean time, I have upgraded the Unity version that builds the HoloLens app from 5.6.x to 2017.1.x, and this problem now occurs there as well. Yeah, I know it's stupid. I wrote this quite some time ago, it worked, and I forgot about it. Thou shalt use yield. Try to pinpoint this while you cannot debug.
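
For reference, a minimal sketch of the yield-based alternative - the callback parameter is my own invention, since a coroutine cannot simply return a value:

private IEnumerator CheckAllowSomeUrl(Action<bool> callback)
{
    var checkLoader = new WWW("http://someurl");
    // Yields control back to Unity every frame until the download is done,
    // instead of blocking the main thread in a tight loop
    yield return checkLoader;
    callback(checkLoader.text == "true");
}

Started with something like StartCoroutine(CheckAllowSomeUrl(allowed => ...)); this needs using System and using System.Collections at the top of the file.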

Skybox

A HoloLens app has a black Skybox, as it does not need to generate a virtual environment - its environment is reality. An immersive headset does not have that, so in order to prevent the user from having the feeling of floating in an empty dark space, you have to provide some context. Now Unity has a default Skybox, but according to a Microsoft friend who helped me out, using the default Skybox is Not A Good Thing and the hallmark of low quality apps. Since I had only ever made HoloLens apps, this had never occurred to me. With the aid of the HoloDeveloper Slack channel I selected this package of Skyboxes and picked the HaloSky, which gives a nice half-overcast sky.

Coming from HoloLens, having never had to bother with Skyboxes before, you can spend quite some time looking for how the hell you are supposed to set one. I assume it's all very logical for Unity buffs, but the fact is that you don't have to look in the Scene or the Camera - the most logical places to look, after all - but you have to select Window/Lighting/Settings from the main menu. That will give a popup where you can drag the Skybox material in.


You can find this in the documentation, on a page titled "How do I Make a Skybox?" - but since I did not want to make one, just use one, it took me a while to find it. I find this confusing wording rather typical of Unity documentation. The irony is that the page itself is called "HOWTO-UseSkybox.html".

Upgrading can be fun - but not always

At one point I had to upgrade from Unity 5.6.x to 2017.1.x and later 2017.2.x. I have no idea what exactly happened and how, but at some point some settings I had changed from their defaults in some of my components in the Unity editor got reverted to default. This was fortunately easy to track down with a diff using TortoiseGit. I also noticed my Store icon got reverted to its default value - no idea why or how, but still.

In the course of upgrading, you will also notice some namespaces have changed in Unity. For instance, everything that used to be in UnityEngine.VR.WSA is now in UnityEngine.XR.WSA. Similar things happened in the Mixed Reality Toolkit. For reasons I don't quite understand, the TextToSpeechManager can now only be called from the main thread. For extra fun, in a later release its name changed to TextToSpeech (sans "Manager") and the method name changed a little too.

Submitting for multiple device families

Having only ever submitted either all-device-type UWP apps or HoloLens apps that, well, only ran on HoloLens, I was a bit puzzled how to go about making various packages for separate device families. I wanted to have an x86-based package for HoloLens, and an x86/x64 package for Windows Mixed Reality. I actually built those on different machines and I also gave them different version numbers.


But whatever I tried, I could not get this to work. If I checked both the checkboxes for Holographic and Windows, the portal said it would offer both packages on both platforms, depending on capabilities. I don't know if that would have caused any problems, but I got a tip from my awesome friend Matteo Pagani that I should dig into the Package.appxmanifest manually.

In my original Package.appxmanifest it said:

<Dependencies>
<TargetDeviceFamily Name="Windows.Universal" MinVersion="10.0.10240.0" 
                    MaxVersionTested="10.0.15063.0" />
</Dependencies>

For my HoloLens app, I changed that into

<Dependencies>
<TargetDeviceFamily Name="Windows.Holographic" MinVersion="10.0.10240.0" 
                    MaxVersionTested="10.0.15063.0" />
</Dependencies>

For my Mixed Reality app, I changed that into

<Dependencies>
<TargetDeviceFamily Name="Windows.Desktop" MinVersion="10.0.16299.0" 
                    MaxVersionTested="10.0.16299.0" />
</Dependencies>

And then I got the results I wanted, and I was absolutely sure the right packages were offered to the right (and capable) devices only.

Setting some more submission options

From the same Microsoft friend who alerted me to my Skybox issues, I also got some hints on how to submit a proper Mixed Reality headset app. There were a lot of options I was never even aware of. Under "Properties", for instance, I set this:


as well as this under "System requirements" (left is minimum hardware, right is recommended hardware)


Actually, you should set a lot more settings concerning the minimal specs for the PC. Detailed instructions can be found here, including the ones I just discussed ;)

Conclusion

It was a rocky ride but a fun one too. I spent an insane amount of time wrestling with unfinished tools, but seeing my app work on the Mixed Reality headset for the very first time was an adrenaline high I will not forget easily. Even better was the fact I managed to sneak in the back door to get my app in the Store ready for the Fall Creators Update launch - that was a huge victory.

In the end, I did it all myself, but I could not have gotten there without the help of all the people I already mentioned, not to mention some heroes from the Slack Channel, particularly Lance McCarthy and Jesse McCulloch who were always there to get me unstuck.

In hindsight, Mixed Reality development is not harder than HoloLens development. In fact, I'd call it easier, because you are not constrained by device limits, deployment and testing go faster, and the Mixed Reality Toolkit has evolved to a point where things get insanely easy. Nearly all my woes were caused by my stubborn determination to be there right out of the gate, so I had to use tools that still had severe issues. Now stuff is fleshed out, there's not nearly as much pain. The fun thing is, when all is said and done, HoloLens apps and Mixed Reality apps are very much the same. Microsoft's vision of a platform for 3D apps is really becoming true. You can re-use your HoloLens knowledge for Mixed Reality - and vice versa. Which brings us to:

Lesson 6: if you thought HoloLens development was too expensive for you, get yourself a headset and a PC to go with it. It's insanely fun, and a completely new, exciting and nearly empty market is waiting for you!

Enjoy!