Xamarin macOS Binding Libraries

In creating an IOBluetooth binding for Xamarin Mac I learned about Objective Sharpie and binding libraries. There is little documentation on this for Mac, but it is fairly similar to Xamarin iOS, for which there is a lot more source material. The output from Objective Sharpie gives you binding definitions which you can use in a dll project to produce a binding library, which you can then call from any other Xamarin dll or app. This worked up to a point, but some of the complex types used cannot always be marshalled automatically. That left me with an API with a few missing pieces as I tried, and failed, to manually adjust the binding through trial and error.

The project sat around untouched for some time, but recently I’ve begun to revive it and hope to sort out these bits so it can be released as a complete, functioning API for Xamarin Mac. At first I thought I was going to have to create two libraries: one with the raw API calls and another with a clean API over the top. However, I had missed something buried in the docs and it turns out there is an easier solution.

When you have a binding library it will, by default, contain two files – ApiDefinition.cs, which contains all the API calls and has a Build action of ObjcBindingApiDefinition, and StructsAndEnums.cs, which contains (well, I’ll let you guess from the name) and has a Build action of ObjcBindingCoreSource. When the classes are generated from the interface definitions in ApiDefinition.cs they are actually partial classes. This means you can extend them and have additional functionality built cleanly into the library. If you have a particularly messy API call you can mark it as internal and then surface it in a more friendly way from a partial class. To do this, add another source file to the library project (I’ve called it Extra.cs because I saw that in a sample, but the name isn’t important) and set its Build action to Compile. Here you’ll need to create a partial class with the same name and namespace as the “interface” you want to extend from ApiDefinition.cs, and then add methods, properties etc.
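As a rough sketch (the interface, selector and member names below are invented for illustration rather than taken from the real IOBluetooth binding) the pattern looks something like this:

// ApiDefinition.cs - the binding "interface"; the generator turns this into a partial class
// (the usual binding usings such as Foundation and ObjCRuntime apply)
[BaseType(typeof(NSObject))]
interface DeviceBrowser
{
    // awkward native member kept internal so consumers of the library never see it
    [Internal]
    [Export("connectedDeviceNames")]
    NSArray _ConnectedDeviceNames { get; }
}

// Extra.cs - Build action: Compile; same name and namespace as the interface above
public partial class DeviceBrowser
{
    // friendlier .NET surface over the internal binding member
    public string[] ConnectedDeviceNames
    {
        get { return NSArray.StringArrayFromHandle(_ConnectedDeviceNames.Handle); }
    }
}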

The first time I added this my build failed. I subsequently found out that there is one additional step: telling the binding compiler to ignore this file. Open the project properties and, under Build, select the Objective-C Binding Build page. In the Additional btouch arguments box add -x:Extra.cs (or substitute your own filename). This stops the initial binding compilation from using the partial class, which then gets built normally in the subsequent managed code build. The project should now build and expose the combined functionality. I did find that intellisense often gets confused when editing the partial class, because there isn’t another definition of a partial class at this point (remember, in ApiDefinition.cs it’s actually an interface). However it seems you can safely ignore this!

This in theory allows you to completely change the API surface which you expose to Xamarin from whatever you started with. I don’t want to go too crazy with IOBluetooth – my feeling is that it should match the native API with a few tweaks for C# naming conventions, using namespaces rather than huge class names, and .NET friendly types where appropriate. Objective Sharpie struggled with some of the enum/constant definitions and so these still require a bit of massaging. It should be obvious how it maps to the native API.

If you have feedback on the API or would like to get involved in getting the library up to release standard please let me know. All the current code is on GitHub in the IOBluetooth and IOBluetoothUI folders.

Update to Xamarin Forms MediaElement

Things are progressing with the Xamarin Forms Pull Request, but it’s a big change and I’ve had feedback on how quickly (or not) it’s going. For this reason I’ve decided to post an update to InTheHand.Forms to port some of the advances and compile for Xamarin Forms 4.0. The new release is marked as a pre-release to avoid anyone automatically updating at this stage and hitting problems, because there are a few API changes to ease the transition to the future official version.

https://www.nuget.org/packages/InTheHand.Forms/2.0.2019.613-pre

It takes advantage of fast renderers on iOS and Android and has a cleaner communication method between the control and the renderers. I haven’t changed the Source property which is still based on UWP style Uris for special file locations. I haven’t added a WPF renderer even though the Xamarin Forms version will have this (minus some missing platform functionality like HTTPS and overlaid media playback controls). There are some improvements to the API including a Volume property. The original PCL library is removed along with Windows 8.1 and Windows Phone platforms. I will probably continue to use this package as a method to test other renderers (e.g. Mac and Tizen) before embarking on another Pull Request to Xamarin Forms.

I think the plan with the Xamarin Forms pull request was to integrate in 4.1 but it hasn’t progressed into the current preview releases. I’ll keep you posted as I know more.

32feet.NET and Audio

There are a few different Bluetooth profiles which handle audio, but they all work in a very similar way. There are two connections open between the client (usually a phone) and the server (some kind of audio device such as a speaker or car entertainment system).

The first of these is an Rfcomm channel which handles commands between the devices. Rfcomm is essentially a serial connection emulated over Bluetooth, and these commands are often a mixture of AT commands from the world of modems and other commands for associated functionality (think phone book contacts, track names etc).

The second channel is a low level SCO (Synchronous Connection-Oriented) connection which is better suited to real-time audio data. Depending on the profile this may be used for one-way (audio) or two-way (hands-free etc) audio.

Out of the box, 32feet.NET only supports Rfcomm. This means it is possible to establish a connection to a headset device and even do things like capture button presses and send rings, but it does not support opening an audio channel. Also, if you connect to a headset device or similar rather than use the platform’s built-in support, you’ll block the device from using its native functionality. Mobile devices have support and drivers for headset/hands-free profiles and these go through the normal audio APIs on the platform, so there is rarely a need to try to interfere with this.
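For completeness, this is roughly what a raw Rfcomm connection to a headset looks like with 32feet.NET. It’s a hedged sketch only, since (as above) you normally shouldn’t take the connection away from the OS, and the device address is just a placeholder:

// connect to the Headset profile's Rfcomm channel on a known device (address is a placeholder)
var client = new InTheHand.Net.Sockets.BluetoothClient();
var address = InTheHand.Net.BluetoothAddress.Parse("00:11:22:33:44:55");
client.Connect(address, InTheHand.Net.Bluetooth.BluetoothService.Headset);

// the command channel is just a stream of AT-style commands - no audio flows here
using (var stream = client.GetStream())
using (var writer = new System.IO.StreamWriter(stream) { NewLine = "\r\n", AutoFlush = true })
{
    writer.WriteLine("RING"); // for example, signal an incoming call to the headset
}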

If you want your app to play audio over Bluetooth, pair the device with the OS and just play audio; the system will handle the rest for you.

Alexa Skill with Azure Functions – Messaging

In the previous Alexa post I talked about building a List skill to integrate with a third-party list provider. This gives you a mechanism to react to changes in Alexa’s lists and write them to your external provider, but what about implementing a two-way sync?

When you set up account linking for your skill the user goes through an OAuth flow to authorise your app, and this returns a token and a refresh token to Amazon. The Alexa infrastructure stores these securely and handles the refresh process for you, which means only your skill function can continue to access your third-party service. So when your third-party provider offers a callback mechanism, you need a way of passing the change information into your skill to be processed. Luckily there is a messaging service to do exactly this.

As with the list functionality there is a library to handle the messaging requests – Alexa.NET.SkillMessaging. The code you use to send the message will need the client id and client secret of your skill; you can find these in the Alexa Developer Console on the web.

// request an access token for the skill messaging API using the skill's client id and secret
var client = new AccessTokenClient(AccessTokenClient.ApiDomainBaseAddress);
var accessToken = await client.Send(clientId, clientSecret);

This access token can then be used to send messages to your skill. Each message consists of a payload, which is a Dictionary<string,string>, and a timeout. You create a SkillMessageClient and send the message to a specified user id. The Amazon user id is given to you when your skill is first enabled and the account is linked. The id is specific to the skill and cannot be used to personally identify a user.

// payload of key/value pairs to deliver to the skill
var payload = new Dictionary<string, string> { { "Key", "Some Value" } };
// alexaEndpoint is the region-specific endpoint stored for this user (see below)
var messageClient = new Alexa.NET.SkillMessageClient(alexaEndpoint + "/v1/skillmessages/users/", accessToken.Token);
// the message expires if not delivered within the timeout (3600 here)
var messageToSend = new Alexa.NET.SkillMessaging.Message(payload, 3600);
var messageId = await messageClient.Send(messageToSend, userId);

An extra complication is that there are multiple API endpoints for the SkillMessageClient depending on the region. This means you’ll have to store the endpoint along with the Alexa user id so you know which to use for a specific user. If the send is successful a unique id for the message is returned. In your skill code you then have to add support to recognise the incoming message and handle the action. In the case of a list change event from a third-party provider this means loading the specific changed item and then writing the values to the Alexa list.
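A hedged way of capturing both of those values when the skill is enabled and the account is linked (the persistence call is a hypothetical placeholder):

// from the incoming SKILL_ENABLED / account linked request
var alexaUserId = skillRequest.Context.System.User.UserId;
var apiEndpoint = skillRequest.Context.System.ApiEndpoint; // region-specific base address
await SaveUserAsync(alexaUserId, apiEndpoint); // hypothetical storage helper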

As with the list support we need to register the messaging library so that the skill request can be correctly deserialised into a MessageReceivedRequest.

RequestConverter.RequestConverters.Add(new MessageReceivedRequestTypeConverter());

Then when reading your incoming request you can check the request type and add code to process the message. The MessageReceivedRequest contains a Message property with the dictionary of values sent from your other function. The user id is already included with all incoming requests in the Context.System.User.UserId property.
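Handling that in the function might look something like the following sketch, where the request body has already been read from the HTTP trigger and the list-update helper is hypothetical:

// deserialise the incoming request (requestBody is the raw JSON from the HTTP trigger)
var skillRequest = JsonConvert.DeserializeObject<SkillRequest>(requestBody);

if (skillRequest.Request is MessageReceivedRequest messageRequest)
{
    var userId = skillRequest.Context.System.User.UserId;

    foreach (var pair in messageRequest.Message)
    {
        // e.g. look up the changed item with the third-party provider and
        // write it to the Alexa list (hypothetical helper)
        await ApplyChangeToAlexaListAsync(userId, pair.Key, pair.Value);
    }
}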

Combining this with the list support already discussed you can see how to use the ListManagement API to write changes into the Alexa lists.

WebAuthenticationBroker and GitHub

WebAuthenticationBroker is a component of Windows 10 which facilitates OAuth authentication with services from a client app. It handles the presentation and navigation of the authentication pages and returns control to your app along with a returned token or an error code. It’s a UWP API and integrates neatly with modern apps; however, what may not be obvious is that it doesn’t use the Edge browser like a WebView control does, but instead uses the legacy Internet Explorer browser.

This is a problem because Internet Explorer is old and a bit creaky, and some sites don’t work with it or actively refuse to – one of these is GitHub. If you try to authenticate with GitHub using it you’ll see a big ugly banner asking you to use a modern browser, and you’ll be stuck. Hopefully this will change and the API will move to Edge (or even the Chromium-powered Edge which is on the horizon). In the meantime, however, you’ll need to roll your own solution. There are lots of ways you could do this, such as using the WebView control in your app and handling navigation events, but I thought I’d try to recreate the same API so that I had a solution I could swap in without major changes. The result of this work is called (excuse me, it was very late when I thought of it) Authful. The full code is online on GitHub here. I haven’t released it as a NuGet library, and won’t unless there is sufficient interest, as I thought people might just prefer the code so that they can customise the look and feel for their own apps. There is a sample project to show how to call the API (hint: it’s the same as the UWP class). It doesn’t support the advanced options, but it should work with most mainstream OAuth-based APIs.

https://github.com/inthehand/Authful
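For reference, the UWP-shaped call (which Authful mirrors) looks roughly like this. The GitHub authorize URL parameters and callback Uri are placeholders you’d replace with your own app registration values:

// start and callback Uris for the OAuth flow (placeholder values)
var startUri = new Uri("https://github.com/login/oauth/authorize?client_id=YOUR_CLIENT_ID&scope=repo");
var callbackUri = new Uri("https://myapp.example/callback");

var result = await WebAuthenticationBroker.AuthenticateAsync(
    WebAuthenticationOptions.None, startUri, callbackUri);

if (result.ResponseStatus == WebAuthenticationStatus.Success)
{
    // ResponseData holds the redirect Uri, including the authorization code
    var responseData = result.ResponseData;
}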

Alexa List Skills with Azure Functions

When I was building my Microsoft To-do Alexa skill I found Matteo’s series of blog posts on Alexa + Azure very useful. However I needed to go beyond the functionality described there and knew I’d need to delve deeper into the Alexa Skills Kit documentation. The first item I thought I’d blog about is the Alexa List Management API. I’m not going to open source the skill, but I thought I’d share some pointers which could be useful to other developers.

What is a List Skill?

Alexa has built-in lists for To-do items and the Shopping List. These are quite basic (there are no due dates or reminders, for example) but a third party can extend them to synchronise them with another back-end. When you create a list skill you aren’t providing Alexa with an API into your datastore, but rather taking responsibility for maintaining synchronisation between two copies of the list. Your list items could be updated either from Alexa via the web or app or, more commonly, via a user’s voice command. To support this, your skill subscribes to a number of list events which fire whenever an item is added, modified or deleted. Likewise, on your backend you’ll need to handle changes and communicate them back to your Alexa skill. I’ll discuss this messaging infrastructure in a future post.

In its simplest form a List skill doesn’t have to have any speech interface of its own. It just has to handle the standard list events and request read (and most likely write) permission on the Alexa household lists.

Building a List Skill

If your skill isn’t a regular custom skill you can’t edit the manifest through the Alexa Skills Kit dashboard, so you would normally use the Alexa Skills Kit command line tools. However, rather than suffer the misery of a command prompt, you can use trusty Visual Studio Code and the official Alexa Skills Kit Toolkit extension. This allows you to edit your skill metadata in Visual Studio Code’s editor and use the command palette to perform common operations like deploying the skill. The metadata for a skill is expressed in JSON and the editor has intellisense for the schema. A manifest for a list skill must contain an events object containing a list of standard event types:-

"events":{
      "endpoint":{
        "uri":"https://YourAzureFunctionAppEndpoint/api/FunctionName",
        "sslCertificateType": "Wildcard"
      },
      "subscriptions":[
       {
         "eventName": "SKILL_ENABLED"
       },
       {
         "eventName": "SKILL_DISABLED"
       },
       {
         "eventName": "SKILL_PERMISSION_ACCEPTED"
       },
       {
        "eventName": "SKILL_PERMISSION_CHANGED"
       },
       {
        "eventName": "SKILL_ACCOUNT_LINKED"
       },
       {
        "eventName": "ITEMS_CREATED"
       },
       {
        "eventName": "ITEMS_UPDATED"
       },
       {
        "eventName": "ITEMS_DELETED"
       },
       {
        "eventName": "LIST_CREATED"
       },
       {
        "eventName": "LIST_UPDATED"
       },
       {
        "eventName": "LIST_DELETED"
       }
      ]
    },
In order to be able to query the list items and write changes you must also request permissions. You’ll need to be careful, as the user can revoke these:-
    "permissions": [
      {
        "name": "alexa::household:lists:read"
      },
      {
        "name": "alexa::household:lists:write"
      }
    ]
The endpoint you defined for events will receive all the event types, so you’ll need to write code to read the event type and react accordingly. Now this is where we step into the unknown – the list events use request types which aren’t covered by the core Alexa.NET library. The good news is that there is already a range of companion NuGet packages for specific Alexa APIs, and Alexa.NET.ListManagement has what we need here.
For each of these libraries we have to add a line of code at the beginning of our function to tell the main Alexa.NET library how to deserialise the request. For list management this is:-
RequestConverter.RequestConverters.Add(new ListSkillEventRequestTypeConverter());
            
Then, from our deserialised SkillRequest, we can check the request type, which will be a specific type depending on the list event, e.g. ListSkillItemCreatedRequest. The body of this request will contain a list id, which may represent the To-Do list, the Shopping List or a custom list. It will also contain one or more list item ids for the newly created item(s).
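A hedged example of reacting to one of those events. The Body property names are my assumption, taken from the underlying list event JSON (listId/listItemIds), and the sync helper is hypothetical:

if (skillRequest.Request is ListSkillItemCreatedRequest itemCreated)
{
    // property names assumed from the list event JSON; check the library for the exact members
    var listId = itemCreated.Body.ListId;

    foreach (var itemId in itemCreated.Body.ListItemIds)
    {
        // read the new item and push it to the third-party back-end (hypothetical helper)
        await SyncNewItemAsync(listId, itemId);
    }
}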

Modifying List Items

The other part of the ListManagement library is the ListManagementClient. This provides access to read and write list items and wraps the REST API. The constructor takes an access token, which is passed to your skill in the skill request’s Context.System.ApiAccessToken property. With this (assuming you have been granted the required permissions) you can query all list metadata and create, modify and delete list items. However, these are operations you’ll mainly perform in response to changes in your linked back-end, so that you can keep Alexa’s copy of the list in sync with your own data.
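In code that’s roughly the following. Only the constructor usage is taken from the description above; the read call and its method name are my assumption, so check the library for the exact signatures:

// the access token arrives with every request while permissions are granted
var apiToken = skillRequest.Context.System.ApiAccessToken;
var listClient = new ListManagementClient(apiToken);

// e.g. enumerate the household lists to find the one whose items changed (method name assumed)
var listsMetadata = await listClient.GetListMetadata();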
In the next post I’ll look at how to implement messaging so that your own system can send a message to your skill to update items…