WebAuthenticationBroker and GitHub

WebAuthenticationBroker is a component of Windows 10 which facilitates OAuth authentication with services from a client app. It handles the presentation and navigation of the authentication pages and returns control to your app along with a returned token or an error code. It’s a UWP API and integrates neatly with modern apps. However, what may not be obvious is that it doesn’t use the Edge browser like a WebView control, but instead uses the legacy Internet Explorer browser.

This is a problem because Internet Explorer is old and a bit creaky, and some sites don’t work with it or actively refuse to – one of these is GitHub. If you try to authenticate with GitHub using it you’ll see a big ugly banner asking you to use a modern browser and you’ll be stuck. Hopefully this will change and the API will move to Edge (or even the Chromium-powered Edge which is on the horizon). In the meantime, though, you’ll need to roll your own solution. There are lots of ways you could do this, such as hosting a WebView control in your app and handling navigation events, but I thought I’d try to recreate the same API so that I had a solution I could swap in without major changes. The result of this work is called, and excuse me it was very late when I thought of it, Authful. The full code is on GitHub here. I haven’t released it as a NuGet library and won’t unless there is sufficient interest, as I thought people might just prefer the code so that they can customise the look and feel for their own apps. There is a sample project to show how to call the API (hint: it’s the same as the UWP class – see the sketch below). It doesn’t support the advanced options but it should work with most mainstream OAuth-based APIs.

https://github.com/inthehand/Authful
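
To illustrate, here’s a minimal sketch of the call pattern – the client id and callback URL are hypothetical placeholders for your own OAuth app registration, and apart from the namespace the shape is the same as the UWP WebAuthenticationBroker API:-

// Sketch only: YOUR_CLIENT_ID and the callback URL are placeholders
var requestUri = new Uri(
    "https://github.com/login/oauth/authorize?client_id=YOUR_CLIENT_ID" +
    "&redirect_uri=https%3A%2F%2Fexample.com%2Fcallback&scope=repo");
var callbackUri = new Uri("https://example.com/callback");

WebAuthenticationResult result = await WebAuthenticationBroker.AuthenticateAsync(
    WebAuthenticationOptions.None, requestUri, callbackUri);

if (result.ResponseStatus == WebAuthenticationStatus.Success)
{
    // ResponseData contains the callback URL, including the authorization code
    string responseData = result.ResponseData;
}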

Alexa List Skills with Azure Functions

When I was building my Microsoft To-Do Alexa skill I found Matteo’s series of blog posts on Alexa + Azure very useful. However, I needed to go beyond the functionality described there and knew I’d need to delve deeper into the Alexa Skills Kit documentation. The first item I thought I’d blog about is the Alexa List Management API. I’m not going to open source the skill, but I thought I’d share some pointers which could be useful to other developers.

What is a List Skill?

Alexa has built-in lists for To-do items and Shopping. These are quite basic (there are no due dates or reminders, for example) but a third party can extend them to synchronise with another back-end. When you create a list skill you aren’t providing Alexa with an API into your datastore, but rather taking responsibility for maintaining synchronisation between two copies of the list. Your list items could be updated from Alexa via the web or app or, more commonly, via a user’s voice command. To support this, your skill subscribes to a number of list events which fire whenever an item is added, modified or deleted. Likewise, on your back-end you’ll need to handle changes and communicate them back to your Alexa skill. I’ll discuss this messaging infrastructure in a future post.

In its simplest form a list skill doesn’t need any speech interface of its own. It just has to handle the standard list events and request read (and likely write) permission on the Alexa household lists.

Building a List Skill

If your skill isn’t a regular custom skill you can’t edit the manifest through the Alexa Skills Kit dashboard, so instead you use the Alexa Skills Kit command-line tools. Rather than suffer the misery of a command prompt, though, you can use trusty Visual Studio Code and the official Alexa Skills Kit Toolkit extension. This allows you to edit your skill metadata in Visual Studio Code’s editor and use the command palette to perform common operations like deploying the skill. The metadata for a skill is expressed in JSON and the editor has IntelliSense for the schema. A manifest for a list skill must contain an events object containing a list of standard event types:-

"events":{
      "endpoint":{
        "uri":"https://YourAzureFunctionAppEndpoint/api/FunctionName",
        "sslCertificateType": "Wildcard"
      },
      "subscriptions":[
       {
         "eventName": "SKILL_ENABLED"
       },
       {
         "eventName": "SKILL_DISABLED"
       },
       {
         "eventName": "SKILL_PERMISSION_ACCEPTED"
       },
       {
        "eventName": "SKILL_PERMISSION_CHANGED"
       },
       {
        "eventName": "SKILL_ACCOUNT_LINKED"
       },
       {
        "eventName": "ITEMS_CREATED"
       },
       {
        "eventName": "ITEMS_UPDATED"
       },
       {
        "eventName": "ITEMS_DELETED"
       },
       {
        "eventName": "LIST_CREATED"
       },
       {
        "eventName": "LIST_UPDATED"
       },
       {
        "eventName": "LIST_DELETED"
       }
      ]
    },

In order to query the list items and write changes you must also request permissions. Be careful here, as the user can revoke these at any time:-

    "permissions": [
      {
        "name": "alexa::household:lists:read"
      },
      {
        "name": "alexa::household:lists:write"
      }
    ]
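
With the manifest updated you can deploy from the Visual Studio Code command palette, or (assuming an initialised skill project) directly with the ASK CLI:-

ask deploy
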
The endpoint you defined for events will receive all of the event types, so you’ll need to write code to read the event type and react accordingly. Now this is where we step into the unknown – the list events require a different request type than is covered by the core Alexa.NET library. The good news is there is already a range of companion NuGet packages for specific Alexa APIs, and Alexa.NET.ListManagement has what we need here.

For each of these libraries we have to add a line of code at the beginning of our function to tell the main Alexa.NET library how to deserialise the request. For list management this is:-

RequestConverter.RequestConverters.Add(new ListSkillEventRequestTypeConverter());

Then from our deserialised SkillRequest we can check the request type, which will be a specific class for each list event, e.g. ListSkillItemCreatedRequest (see the sketch below). The body of this request contains a list id, which may represent the To-do list, the Shopping List or a custom list, along with one or more list item ids for the newly created item(s).
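
Putting that together, here’s a minimal sketch of an HTTP-triggered Azure Function handling a list event. The HandleItemsCreatedAsync helper is a hypothetical placeholder and the exact request body property names may differ, so check the Alexa.NET.ListManagement package for the precise shapes:-

using System.IO;
using System.Threading.Tasks;
using Alexa.NET.Request;
using Alexa.NET.ListManagement;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Newtonsoft.Json;

public static class ListEvents
{
    [FunctionName("ListEvents")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req)
    {
        // Tell Alexa.NET how to deserialise list event requests
        RequestConverter.RequestConverters.Add(new ListSkillEventRequestTypeConverter());

        string json = await new StreamReader(req.Body).ReadToEndAsync();
        var skillRequest = JsonConvert.DeserializeObject<SkillRequest>(json);

        switch (skillRequest.Request)
        {
            case ListSkillItemCreatedRequest created:
                // The request body carries the list id and the new item id(s)
                await HandleItemsCreatedAsync(created,
                    skillRequest.Context.System.ApiAccessToken);
                break;

            // ...handle ITEMS_UPDATED, ITEMS_DELETED and the LIST_* and
            // SKILL_* events in the same way
        }

        return new OkResult();
    }

    // Hypothetical helper: push the new item(s) into your own back-end
    static Task HandleItemsCreatedAsync(ListSkillItemCreatedRequest request,
        string accessToken) => Task.CompletedTask;
}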

Modifying List Items

The other part of the ListManagement library is the ListManagementClient. This provides read and write access to list items, wrapping the REST API. The constructor takes an access token, which is passed to your skill in the skill request’s Context.System.ApiAccessToken property. With this (assuming you’ve been granted the required permissions) you can query all list metadata and create, modify and delete list items. These are mainly operations you’ll perform in response to changes in your linked back-end, so you can keep Alexa’s copy of the list in sync with your own data.
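
For example, a sketch of reading and writing list data – note the method and request type names here are assumptions for illustration, so check the package for the exact signatures:-

// Construct the client from the token in the incoming skill request
var client = new ListManagementClient(skillRequest.Context.System.ApiAccessToken);

// Enumerate the user's lists (requires alexa::household:lists:read);
// the method name is an assumption
var metadata = await client.GetListsMetadata();

// Add an item to a list (requires alexa::household:lists:write);
// the method and request type names are assumptions
await client.CreateItem(listId, new CreateListItem { Value = "Buy milk" });
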
In the next post I’ll look at how to implement messaging so that your own system can send a message to your skill to update items…

Xamarin Forms Fast Renderers – Part 2 Android

Following on from Part 1, this post briefly discusses the Android approach to fast renderers. Again, there isn’t really any documentation for control builders, but there are examples within the Xamarin Forms source to work from. Like iOS, Xamarin Forms on Android uses an IVisualElementRenderer interface, very similar to the iOS equivalent. The differences come down to the two platforms’ differing approaches. For example, the NativeView and ViewController properties of iOS are represented by the View and ViewGroup properties on Android. ViewGroup can return null, but if the control uses a ViewGroup-derived class for laying out controls that can be returned.

There are some additional methods such as SetLabelFor(id) and UpdateLayout(). The first supports the accessibility system on Android and allows a descriptive label to be associated with another control. The latter calls a helper class, the VisualElementTracker, to help update the layout.

Beyond these things the concept is very much the same: you handle the same kinds of interaction with the Element, which is the platform-agnostic representation of the control and its properties.
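
For reference, the Android interface looks roughly like this – a sketch reconstructed from the members described above, so exact signatures may differ between Xamarin Forms versions:-

// Reconstructed sketch of the Android IVisualElementRenderer; member names
// follow the description above and may vary by Xamarin Forms version
public interface IVisualElementRenderer : IDisposable
{
    VisualElement Element { get; }
    Android.Views.View View { get; }
    ViewGroup ViewGroup { get; }          // may return null
    VisualElementTracker Tracker { get; }

    event EventHandler<VisualElementChangedEventArgs> ElementChanged;

    SizeRequest GetDesiredSize(int widthConstraint, int heightConstraint);
    void SetElement(VisualElement element);
    void SetLabelFor(int? id);            // accessibility labelling
    void UpdateLayout();                  // delegates to the VisualElementTracker
}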

Xamarin Forms Fast Renderers – Part 1 iOS

A Xamarin Forms renderer provides the device-specific logic to display a Xamarin Forms control using platform-native UI. Traditionally this was done using the ViewRenderer<TView, TNativeView> base class. What this actually creates in the UI hierarchy is two controls – the outer being a basic placeholder providing layout logic and the inner being the desired native control (e.g. a UITextField in the case of an Entry on iOS).

This introduces overhead into the UI and complicates the layout logic as the whole page is arranged. The concept of a fast renderer does away with the enclosing ViewRenderer and instead requires you to implement an interface with the standard behaviour required by the Xamarin Forms layout system.

When I began re-writing my MediaElement for inclusion in Xamarin Forms I needed to replace the iOS renderer with a fast renderer, but there was very little documentation on building one. I found looking through the source for other renderers helpful – the page renderers and WebView all use fast renderers in the current codebase.

On iOS this interface is IVisualElementRenderer and it exposes a number of properties, an event and a few methods.

Properties:-

  • Element – returns the Xamarin Forms element which this renderer represents
  • NativeView – returns the native UIView-based control
  • NativeViewController – returns the UIViewController which manages the View

Events:-

  • ElementChanged – raised when an Element is assigned to the renderer

Methods:-

  • GetDesiredSize – returns a SizeRequest from a set of constraints; the control can alter this to fit its required content, for example. An extension method for UIView provides GetSizeRequest, which will calculate the SizeRequest based on the constraints and optional minimum width/height.
  • SetElement – assigns the Element and causes the ElementChanged event to be raised. You’ll also hook up the PropertyChanged event here to react to changes in the Element and apply them to the NativeView.
  • SetElementSize – updates the layout to fit a specific size. Normally you call Layout.LayoutChildIntoBoundingRegion() to perform this.

A secondary interface IEffectControlProvider provides a single method to register an effect with the View.
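
Pulling those members together, here’s a minimal skeleton of an iOS fast renderer. It’s a sketch following the members described above – exact member names can vary between Xamarin Forms versions:-

public class MyControlRenderer : UIView, IVisualElementRenderer
{
    VisualElement _element;

    public VisualElement Element => _element;
    public UIView NativeView => this;
    public UIViewController NativeViewController => null;

    public event EventHandler<VisualElementChangedEventArgs> ElementChanged;

    public SizeRequest GetDesiredSize(double widthConstraint, double heightConstraint)
    {
        // The UIView extension method calculates a SizeRequest from the constraints
        return this.GetSizeRequest(widthConstraint, heightConstraint);
    }

    public void SetElement(VisualElement element)
    {
        var oldElement = _element;
        if (oldElement != null)
            oldElement.PropertyChanged -= OnElementPropertyChanged;

        _element = element;

        if (_element != null)
            _element.PropertyChanged += OnElementPropertyChanged;

        ElementChanged?.Invoke(this, new VisualElementChangedEventArgs(oldElement, element));
    }

    public void SetElementSize(Size size)
    {
        // Let the Forms layout system position the element within the given bounds
        Layout.LayoutChildIntoBoundingRegion(Element,
            new Rectangle(Element.X, Element.Y, size.Width, size.Height));
    }

    void OnElementPropertyChanged(object sender, PropertyChangedEventArgs e)
    {
        // React to Element property changes and apply them to the native view here
    }
}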

By looking at the existing in-box renderers I was able to understand how they are implemented and re-write the MediaElementRenderer to use this pattern. For reference, the full code of the iOS renderer is here on GitHub.

Not all the renderers in iOS use the new approach; I imagine it will be some time before all the existing renderers are re-written. Currently UWP and the other platforms still use the traditional approach. I’ll follow up with the Android equivalent soon.

Talking About Tasks

Back in 2010 Microsoft released Windows Phone 7. It was a huge change from the Pocket PC/Windows Mobile OS which had preceded it, and while it brought a modern UI and app-store infrastructure it missed a number of pieces of core functionality from the older phones. One of these was support for Tasks. I set about writing an app, which became “Tasks In The Hand”, and it proved very popular in those early days. Even when Microsoft later added basic Tasks functionality in Windows Phone 7.5 the app still had a healthy following because it supported views and features absent from the in-box app.

Skipping forward to today, Microsoft’s Tasks story is rather different. After purchasing Wunderlist they began writing a new app called Microsoft To-Do, which is available across multiple platforms – iOS and Android for mobile and Windows for desktop. Crucially though, under the hood it’s still based on Office 365 (or Outlook.com for personal Microsoft IDs) for storage, and so works just as well with the traditional Tasks view in Outlook on the desktop.

Back in 2010 we didn’t have voice assistants, but now we have Alexa, Google Home and Cortana. If, like me, you’ve got used to using Microsoft To-Do everywhere, you miss having integration with a voice assistant, and that is where I decided Tasks In The Hand needed to go next. Today my Alexa skill was released into the store for anyone to connect with their Echo or similar device.

[Screenshot: Tasks In The Hand in the Alexa Store]

The skill links with your Microsoft ID, which is associated with either an Office 365 account or an Outlook.com personal account. Once set up, you can add tasks to your Alexa To-do list or items to your Amazon Shopping List and they’ll synchronise with your Microsoft account. You can modify, complete or delete these items via Microsoft To-Do and those changes are synced back to Alexa. After you’ve linked accounts, any items you add to your default Tasks folder or Amazon Shopping List folder will also be synchronised with Alexa.

The skill was built using Azure Functions, and I found Matteo Pagani’s series of blog posts very useful for getting started with the Alexa Skills Kit. It uses Tim Heuer’s excellent Alexa.NET package to handle the interactions with Amazon.

The skill is completely free. I hope you find it as useful as I have, and please get in touch if you have any feedback.

Capture Android Screen Video from Visual Studio

While debugging your Xamarin Android app in Visual Studio you can capture a video of the device screen and copy it to your PC. To do this, open the ADB command prompt from the Xamarin Android toolbar:-

[Screenshot: the Adb Command Prompt button on the Xamarin Android toolbar]

At the command prompt navigate to a folder where you want the video to end up. Type the following command to start recording:-

adb shell screenrecord /sdcard/filename.mp4

The path you supply must be a valid path on the device with enough space to store the video. The video can be up to three minutes long. To stop recording, press Ctrl+C in the console window. Then, to copy the video to the current folder on your PC, type:-

adb pull /sdcard/filename.mp4

The path must match whatever you used in the first command. Once the pull has completed successfully you can open the file and do whatever you need with it. You can do basic trimming using the Windows Photos app, and there are plenty of other options for more complex editing…
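
Incidentally, screenrecord accepts a few options if you need a smaller file or a shorter clip – the values below are just examples:-

adb shell screenrecord --size 720x1280 --bit-rate 4000000 --time-limit 30 /sdcard/filename.mp4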