Xamarin Forms MediaElement now available on WPF

Continuing from my last post on macOS, there is now a MediaElement renderer for WPF in InTheHand.Forms 2.0.2019.913. The native WPF control is a lot more limited and is missing two big features – system-provided transport controls and support for HTTPS sources. The latter is an unfortunate limitation, though it looks as though it is being fixed for WPF on .NET Core.

A possible workaround for these limitations is to have the WPF renderer use the UWP MediaPlayerElement when running on Windows 10 1903 and above. This is currently available as a WPF control as part of the Windows Community Toolkit.
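As a rough sketch of what that could look like, the toolkit's wrapped control can be created from WPF code-behind. This assumes the Microsoft.Toolkit.Wpf.UI.Controls package and that its MediaPlayerElement exposes the string Source and AreTransportControlsEnabled members of the underlying UWP control – check the toolkit docs before relying on either:

using Microsoft.Toolkit.Wpf.UI.Controls;

// Hypothetical sketch: host the wrapped UWP MediaPlayerElement in a WPF window.
// Requires Windows 10 (XAML Islands) and the Windows Community Toolkit.
var player = new MediaPlayerElement
{
    AreTransportControlsEnabled = true, // the UWP control supplies transport controls
    Source = "https://example.com/video.mp4" // HTTPS sources work, unlike WPF's MediaElement
};
rootGrid.Children.Add(player); // rootGrid is an existing Grid in the window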

Xamarin Forms MediaElement now available for macOS

As the Pull Request to Xamarin Forms continues in its holding pattern I thought about what else I could do with it. I had neglected the original codebase for a while, but I've spent some more time with it recently providing some much-needed updates. The latest release, which was added to NuGet today, adds a whole new platform – macOS.

The functionality is almost exactly the same as that available on iOS, with the exception of the KeepScreenOn property, as I haven't found equivalent macOS APIs to implement it. The sample app code has been updated to include a macOS version demonstrating the basic functionality, including toggling different Aspect settings.
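For reference, using the control from shared code looks roughly like this – a minimal sketch; the URL is a placeholder and the Source assignment assumes the package converts from a string, so check the sample app for the exact API:

using InTheHand.Forms;
using Xamarin.Forms;

// Minimal sketch: a MediaElement hosted in a ContentPage.
var media = new MediaElement
{
    Source = "https://example.com/video.mp4", // placeholder URL
    Aspect = Aspect.AspectFit // toggle to AspectFill or Fill to compare
};
Content = media; // assigned within the page's constructor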

Other recent updates include moving the dependencies to a more recent Xamarin Forms version and implementing fast renderers for iOS and Android.

I’ve begun back-porting the WPF renderer too and hope to add this soon. This will increase the platform coverage to Android, iOS, UWP, macOS and WPF. I haven’t really worked with Xamarin Forms on Tizen but this is another possibility for the future.

https://www.nuget.org/packages/InTheHand.Forms/

Xamarin macOS Binding Libraries

In creating an IOBluetooth binding for Xamarin Mac I learned about Objective Sharpie and binding libraries. There is little documentation on this for macOS, but the process is fairly similar to Xamarin iOS, for which there is a lot more source material. Objective Sharpie gives you binding definitions which you can use in a dll project to produce a binding library, which you can then call from any other Xamarin dll or app. This was fine up to a point, but some of the complex types involved cannot always be marshalled automatically. That left me with an API with a few missing bits, as I tried and failed to adjust the binding manually by trial and error.

The project sat around untouched for some time, but recently I've begun to revive it and hope to sort out these bits so it can be released as a complete, functioning API for Xamarin Mac. At first I thought I was going to have to create two libraries – one with the raw API calls and another with a clean API over the top – but I had missed something buried in the docs, and it turns out there is an easier solution.

When you have a binding library it will, by default, contain two files – ApiDefinition.cs, which contains all the API calls and has a Build action of ObjcBindingApiDefinition, and StructsAndEnums.cs, which contains (well, I'll let you guess from the name) and has a Build action of ObjcBindingCoreSource. When the classes are generated from the interface definitions in ApiDefinition.cs they are actually partial classes. This means you can extend them and have additional functionality built cleanly into the library. If you have a particularly messy API call you can mark it as internal and then surface it in a more friendly way from a partial class.

To do this, add another source file to the library project (I've called it Extra.cs because I saw that in a sample, but the name isn't important) and set its Build action to Compile. In it you'll need to create a partial class with the same name and namespace as the "interface" you want to extend from ApiDefinition.cs, and then add methods, properties etc.
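To make that concrete, here's a minimal hypothetical sketch – the DeviceScanner type and its selector are invented for illustration and are not part of IOBluetooth:

// ApiDefinition.cs - Build action: ObjcBindingApiDefinition
using Foundation;

[BaseType(typeof(NSObject))]
interface DeviceScanner
{
    // Hide the raw, awkward call from library consumers.
    [Internal]
    [Export("scanWithOptions:")]
    int RawScan(NSDictionary options);
}

// Extra.cs - Build action: Compile
// A partial class with the same name and namespace as the generated class.
public partial class DeviceScanner
{
    // Surface the internal call in a friendlier shape.
    public bool Scan() => RawScan(new NSDictionary()) == 0;
}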

The first time I added this my build failed. I subsequently found out that there is one additional step to tell the binding compiler to ignore this file. Open the project properties and, under Build, select the Objective-C Binding Build page. In the Additional btouch arguments box add -x:Extra.cs (or replace with your own filename). This stops the initial binding compilation from using the partial class, which then gets built normally in the subsequent managed code build. The project should now build and expose the combined functionality. I did find that IntelliSense often gets confused when editing the partial class, because there isn't another definition of a partial class at this point (remember in ApiDefinition.cs it's actually an interface). However it seems you can safely ignore this!

This in theory allows you to completely change the API surface you expose to Xamarin from whatever you started with. I don't want to go too crazy with IOBluetooth – my feeling is that it should match the native API with a few tweaks: C# naming conventions, namespaces rather than huge class names, and .NET-friendly types where appropriate. Objective Sharpie struggled with some of the enum/constant definitions, so these still require a bit of massaging, but it should remain obvious how everything maps to the native API.

If you have feedback on the API or would like to get involved in getting the library up to release standard please let me know. All the current code is on GitHub in the IOBluetooth and IOBluetoothUI folders.

Update to Xamarin Forms MediaElement

Things are progressing with the Xamarin Forms Pull Request, but it's a big change and I've had feedback on how quickly (or not) it's going. For this reason I've decided to post an update to InTheHand.Forms to port some of the advances and compile for Xamarin Forms 4.0. The new release is marked as a pre-release to avoid anyone automatically updating at this stage and hitting problems, because there are a few API changes designed to ease the transition to the future official version.

https://www.nuget.org/packages/InTheHand.Forms/2.0.2019.613-pre

It takes advantage of fast renderers on iOS and Android and has a cleaner communication method between the control and the renderers. I haven't changed the Source property, which is still based on UWP-style Uris for special file locations. I haven't added a WPF renderer, even though the Xamarin Forms version will have this (minus some missing platform functionality like HTTPS and overlaid media playback controls). There are some improvements to the API, including a Volume property. The original PCL library has been removed, along with the Windows 8.1 and Windows Phone platforms. I will probably continue to use this package as a way to test other renderers (e.g. Mac and Tizen) before embarking on another Pull Request to Xamarin Forms.

I think the plan with the Xamarin Forms pull request was to integrate in 4.1 but it hasn’t progressed into the current preview releases. I’ll keep you posted as I know more.

32feet.NET and Audio

There are a few different Bluetooth profiles which handle audio, but they all work in a very similar way. Two connections are opened between the client (usually a phone) and the server (some kind of audio device such as a speaker or car entertainment system).

The first of these is an Rfcomm channel which handles commands between the devices. Rfcomm is essentially a serial connection emulated over Bluetooth and these commands are often a mixture of AT commands from the world of modems and other commands for associated functionality (think phone book contacts, track names etc).

The second channel is a low level SCO (Synchronous Connection-Oriented) connection which is better suited to real-time audio data. Depending on the profile this may be used for one-way (audio) or two-way (hands-free etc) audio.

32feet.NET supports only Rfcomm out of the box. This means it is possible to establish a connection to a headset device and even do things like capture button presses and send rings, but it does not support opening an audio channel. Also, if you connect to a headset or similar device yourself rather than using the platform's built-in support, you'll block the device from using its native functionality. Mobile devices have drivers for headset/hands-free profiles and route them through the platform's normal audio APIs, so there is rarely a need to interfere with this.
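That said, for the command side the out-of-the-box Rfcomm support looks roughly like this – a minimal sketch using the classic 32feet.NET API; the address is a placeholder for a real paired device, and per the warning above this will contend with the OS's own headset support:

using InTheHand.Net;
using InTheHand.Net.Bluetooth;
using InTheHand.Net.Sockets;

// Minimal sketch: open the Rfcomm command channel to a hands-free device.
var client = new BluetoothClient();
var address = BluetoothAddress.Parse("00:11:22:33:44:55"); // placeholder address
client.Connect(address, BluetoothService.Handsfree); // well-known service UUID

// The stream carries AT-style commands, not audio - SCO is not supported.
using (var stream = client.GetStream())
{
    // read/write command traffic here
}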

If you want your app to play audio over Bluetooth, pair the device with the OS and just play audio as normal – the system will handle the rest for you.

Alexa Skill with Azure Functions – Messaging

In the previous Alexa post I talked about building a List skill to integrate with a third-party list provider. This gives you a mechanism to react to changes in Alexa’s lists and write them to your external provider, but what about implementing a two-way sync?

When you set up account linking for your skill, the user goes through an OAuth flow to authorise your app, and this returns a token and a refresh token to Amazon. The Alexa infrastructure stores these securely and handles the refresh process for you, which means only your skill function can access your third-party service on the user's behalf. So when your third-party provider fires a callback, you need a way of passing the change information into your skill to be processed. Luckily there is a messaging service to do exactly this.

As with the list functionality there is a library to handle the messaging requests – Alexa.NET.SkillMessaging. The code you use to send the message needs the client id and client secret of your skill – you can find these in the Alexa Developer Console on the web.

// Exchange your skill's client id and secret for a messaging access token.
var client = new AccessTokenClient(AccessTokenClient.ApiDomainBaseAddress);
var accessToken = await client.Send(clientId, clientSecret);

This access token can then be used to send messages to your skill. Each message consists of a payload, which is a Dictionary<string,string>, and a timeout. You create a SkillMessageClient and send the message to a specified user id. The Amazon user id is given to you when your skill is first enabled and the account is linked. The id is specific to the skill and cannot be used to personally identify a user.

// Build a payload and send it to a specific user's instance of the skill.
var payload = new Dictionary<string, string> { { "Key", "Some Value" } };
var messageClient = new Alexa.NET.SkillMessageClient(alexaEndpoint + "/v1/skillmessages/users/", accessToken.Token);
var messageToSend = new Alexa.NET.SkillMessaging.Message(payload, 3600); // timeout in seconds
var messageId = await messageClient.Send(messageToSend, userId);

An extra complication is that there are multiple API endpoints for the SkillMessageClient depending on the region. This means you'll have to store the endpoint along with the Alexa user id so you know which to use for a specific user. If the send is successful, a unique id for the message is returned. In your skill code you have to add support to recognise the incoming message and handle the action. In the case of a list change event from a third-party provider this would mean loading the specific changed item and then writing the values to the Alexa list.
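A simple way to capture both is to record them whenever a request arrives – a sketch assuming Alexa.NET's SkillRequest shape; storage and its SaveUserEndpoint method are hypothetical stand-ins for your own persistence layer:

// Record where messages for this user should be sent later.
var system = skillRequest.Context.System;
await storage.SaveUserEndpoint(system.User.UserId, system.ApiEndpoint); // hypothetical persistence call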

As with the list support we need to register the messaging library so that the skill request can be correctly deserialised into a MessageReceivedRequest.

// Register the converter so incoming messages deserialise to MessageReceivedRequest.
RequestConverter.RequestConverters.Add(new MessageReceivedRequestTypeConverter());

Then, when reading your incoming request, you can check the request type and add code to process the message. The MessageReceivedRequest contains a Message property with the dictionary of values sent from your other function. The user id is already included with all incoming requests in the Context.System.User.UserId property.
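Putting that together, the handling code looks roughly like this – a minimal sketch where skillRequest is the deserialised Alexa.NET SkillRequest and WriteChangeToAlexaList is a hypothetical method of your own:

// Check whether this request is an incoming skill message.
if (skillRequest.Request is MessageReceivedRequest messageRequest)
{
    var values = messageRequest.Message; // the Dictionary<string, string> payload
    var userId = skillRequest.Context.System.User.UserId;
    await WriteChangeToAlexaList(userId, values); // hypothetical: apply the change to the Alexa list
}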

Combining this with the list support already discussed you can see how to use the ListManagement API to write changes into the Alexa lists.
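For completeness, writing the change back might look something like this – a hedged sketch assuming the Alexa.NET.ListManagement package, with listId and the item value as placeholders; check the package for the exact types:

// Hypothetical sketch: add the changed item to the user's Alexa list.
var listClient = new ListManagementClient(skillRequest.Context.System.ApiAccessToken);
await listClient.CreateItem(listId, new CreateListItem { Value = "Some Value" });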