Monday 23 July 2018

Why doesn't printf / fprintf output anything...


In my port of Colossal Cave to Windows, I had to get rid of getline (a Linux function, pulled in via <editline/readline.h>, that allocates a string dynamically) by replacing it with fgets. I compiled it under MinGW with no other modifications. And, I played advent.exe for an hour. I was happy.

And then I kicked up my IDE and wrote some Node/JavaScript to spawn a child_process running my new executable. It should work...
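
The wiring looked roughly like this (a minimal sketch; the executable path and handlers are placeholders, not my exact code):

// Minimal sketch of the child_process wiring; 'advent.exe' and the handlers are illustrative.
var spawn = require('child_process').spawn;

var advent = spawn('advent.exe');

// Output handler: echo whatever the game prints.
advent.stdout.on('data', function (chunk) {
    process.stdout.write(chunk.toString());
});

advent.stderr.on('data', function (chunk) {
    process.stderr.write(chunk.toString());
});

// Answer the game's first question ("Would you like instructions?").
advent.stdin.write('no\n');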

It didn't. My output handler, instead of asking me whether I would like instructions, just sat there. Did I fail to launch the process? I added logging of the PID. It looked like it launched. I looked up the PID in taskmgr; I didn't see it. I looked up the PID in tasklist; there it was. But there was no output.

I searched the internet (using the wrong terms) and found nothing. But then I figured it out. I changed the executable from advent.exe to tasklist.exe. And, magic: Node dumped out some output... but truncated the last bit. That was the hint.

Programs writing to pipes behave differently from programs writing to console TTYs. Most of the time you won't notice, because a console terminal will by default flush stdout when it hits a newline (on Unix, at least). Windows (and, as it turns out, a few stdio implementations) will buffer stdout going to a pipe until fflush is explicitly called or the buffer limit is reached. This is normally good. I/O writes are typically expensive, and if you have a process in the background (or without a terminal) this is a good optimization.

But it is not good if you are expecting a dialog, where stdout should be flushed before the next read from stdin. A newline will not flush stdout on Windows (with the stdio implementation under MinGW/Cygwin). On Unix you can use stdbuf or unbuffer to make a process stop buffering its output; I saw no way to do this on Windows. (There might be one, I just didn't find it.)

The solution? Fortunately the getline removal is also my savior. If I flush stdout before calling fgets, advent.exe works the way I expect. But... this won't help me with every other Windows command. I won't always have the source of the command I'm executing, so I can't always add an fflush call or a setbuf(stdout, NULL). When I find a general answer, I'll post it...




Wednesday 18 July 2018

How to invoke Cortana channel actions in node.js



Ugh. It took me a while to figure this out, and I had to reverse engineer some code to do it. There were breadcrumbs, but of course, being a newb, it wasn't immediately obvious to me.

Botframework allows you to connect to channels, and your bot is likely invoked from one. That means you can send metadata back to that channel. In the previous version of the framework, you called a method on the Message called channelData to set the JSON response. But channelData is deprecated in favor of the new sourceEvent method. So forget the "Launching apps or websites from a Cortana skill" example (it's in C# and uses the old way)...

The next hint is in the documentation: sourceEvent takes a map. The old channelData just took an action as JSON, with type: "LaunchUri" and uri: "http://whatever.com/".

What is this map? Well, you can have your bot connected to one or more channels (or "*" for all of them). The source code implementing sourceEvent was the final piece. I found examples setting the facebook channel, or using directline... but what is the channel name for Cortana? Well, it is simply cortana.

Wrap the JSON up underneath the channel name and, magically, Cortana will launch the app.

msg.sourceEvent({
    cortana: {
        action: {
            type: "LaunchUri",
            uri: "https://www.octavianit.com"
        }
    }
});

But be careful as Cortana will close the channel after invoking the app (that the protocol maps to).  And you should consider that speakers (screenless devices) will not support this behavior and should have a UX alternative.
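
If you want to guard against the speaker case, Cortana is supposed to include a DeviceInfo entity on incoming messages that indicates display support. Here is a hedged sketch (the entity type and supportsDisplay field are from memory of the Cortana skill docs, so verify them); it assumes it runs inside a dialog handler where builder and session are available:

// Hedged sketch: branch on whether the Cortana device reports a display.
function deviceHasScreen(session) {
    var entities = session.message.entities || [];
    var deviceInfo = entities.filter(function (e) { return e.type === 'DeviceInfo'; })[0];
    return !!(deviceInfo && deviceInfo.supportsDisplay);
}

var msg = new builder.Message(session);
if (deviceHasScreen(session)) {
    msg.sourceEvent({ cortana: { action: { type: "LaunchUri", uri: "https://www.octavianit.com" } } });
    msg.speak('Opening the site now.');
} else {
    // Speaker-only device: fall back to a spoken response instead of launching anything.
    msg.speak('Sorry, I can only open that link on a device with a screen.');
}
session.send(msg);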



Tuesday 17 July 2018

Building your own Cortana Music Player

Ever since Amazon killed the cloud Music Storage subscription, I've been annoyed. I uploaded my 2000+ CD collection to the cloud to preserve it and allow me to access it from anywhere.

I've been looking for an alternative but didn't find anything as cost effective as Amazon's dead program ($25 for the year? a steal). Google had a Google Play Music plan that included YouTube Red for $15/mo, but that cost made me uncomfortable (that, and being on a monthly plan).

But I believe I found a solution!  OneDrive... you get 1TB of storage with an Office 365 account for $99/yr. That is value, and you get Office too.

But here is what I find more exciting.

I got my Invoke today and had to (like, really had to) see if I could build my own music player... Out of the box, Cortana will only let you hook up to streaming services; you can't play music from your PC or OneDrive (unless you Bluetooth it to the device).

How hard can it be to build a skill to play your music from OneDrive? As it turns out - not hard at all.

Amazon Music continually griefed me because it never kept my songs together in their albums when imported (they had been imported via iTunes). But when I synced my library to OneDrive, the directory structure stayed intact. And as it happens, the OneDrive REST API will let you retrieve your directories and walk the files... and you can use your MSA authentication to keep it all personal, or share those files...
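
Walking an album folder might look something like this (a hedged sketch against the Microsoft Graph endpoint for OneDrive; the '/Music' path and the way you obtain the access token are my assumptions, not code from this project):

// Hedged sketch: list the children of a OneDrive folder via Microsoft Graph.
// The '/Music' path is illustrative, and ACCESS_TOKEN must come from your MSA/OAuth flow.
var https = require('https');

var options = {
    hostname: 'graph.microsoft.com',
    path: '/v1.0/me/drive/root:/Music:/children',
    headers: { Authorization: 'Bearer ' + process.env.ACCESS_TOKEN }
};

https.get(options, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
        var items = JSON.parse(body).value || [];
        items.forEach(function (item) {
            // Folders are albums; files are tracks with a direct download URL.
            console.log(item.folder ? '[album] ' + item.name
                                    : item.name + ' -> ' + item['@microsoft.graph.downloadUrl']);
        });
    });
});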

So, how hard is it to get Cortana and botframework to play an MP3 you have stored on OneDrive?  This easy.

var audioCard = new builder.AudioCard(session)
    .media([
        { url: 'https://onedrive.live.com/download?cid=00E75C36F57E8A5B&resid=E75C36F57E8A5B%216254&authkey=AEAEHi1WUjheHj4' }
    ]);
var msg = new builder.Message(session)
    .addAttachment(audioCard)
    .text('Now playing Nephatiti by 808 State')
    .speak('<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">Playing Nephatiti by <say-as interpret-as="number_digit">808</say-as> State.</speak>');
session.send(msg).endConversation();

Edit: It is extremely important to end the conversation after sending an audio card on Windows because if Cortana has a dialog going, regardless of any input hint, the volume will be set low. Ending the conversation keeps the volume at 100% at the expense of disconnecting Cortana from the bot.  Also, Cortana will ignore every field in the audio card (like title).

What will be even more fun is using the language recognition services to solve another Alexa pet peeve... searching for my classic and Latin music titles!


Monday 16 July 2018

End to End Developer example video of basic Cortana skill






For those wanting a visual walk-through of the process of creating a Cortana skill with botframework!

Uses node.js.

Friday 13 July 2018

Get your Azure QnA Bot to Speak with Cortana

Building a QnA bot and hooking it up to Cortana is simple in node.js under BotFramework V3.

If you have an FAQ in Q:/A: format, you can import it via the Azure QnA Maker tools and auto-create a "knowledge base". No coding required. You can do this at https://www.qnamaker.ai.

The next step is creating your bot. Microsoft has standardized on their botframework to do this. https://dev.botframework.com is your gateway.

If you are like me and like the simplicity of node.js then pick the QnA template.

Go back to QnA Maker and view the code to extract the QnA keys and host.
Then go back to the Azure portal and update the corresponding entries in your bot's Application Settings blade.

The test web app will now be successfully linked to your QnA bot! But don't forget the last step...

Go to your Channels blade and set up Cortana. Then go to the Build blade and open the online code editor. In the app.js code, you will see that the template uses the standard QnA dialog builder, which does not speak the resulting answers back over the Cortana speech channel. Add an override like the sketch below.
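
Something along these lines (a hedged sketch based on the botbuilder-cognitiveservices QnAMakerDialog that the node template uses; the environment-variable names follow the template's conventions, so check yours against app.js):

// Hedged sketch: make the QnA answers speakable for Cortana.
var builder = require('botbuilder');
var cognitiveservices = require('botbuilder-cognitiveservices');

var recognizer = new cognitiveservices.QnAMakerRecognizer({
    knowledgeBaseId: process.env.QnAKnowledgebaseId,
    authKey: process.env.QnAAuthKey,
    endpointHostName: process.env.QnAEndpointHostName
});

var basicQnAMakerDialog = new cognitiveservices.QnAMakerDialog({
    recognizers: [recognizer],
    defaultMessage: 'No good match in FAQ.',
    qnaThreshold: 0.3
});

// Override so the answer is also spoken on the Cortana channel.
basicQnAMakerDialog.respondFromQnAMakerResult = function (session, qnaMakerResult) {
    var answer = qnaMakerResult.answers[0].answer;
    var msg = new builder.Message(session)
        .text(answer)
        .speak(answer)
        .inputHint(builder.InputHint.acceptingInput);
    session.send(msg);
};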


(See GitHub QnAMaker patch for a V4 node example.)

There you have it. Now "Hey Cortana, ask Bernie Question Bot Test what is a dwarf planet?"




For C#, it is slightly more complicated. You need a BasicQnAMakerDialog subclass of QnAMakerDialog that overrides the response method to send a message with speak attached. This is done as an inner class of RootDialog (from the V3 C# template that comes with Azure Web App Bots).

    // Dialog for QnAMaker GA service
    [Serializable]
    public class BasicQnAMakerDialog : QnAMakerDialog
    {
        // Go to https://qnamaker.ai and feed data, train & publish your QnA Knowledgebase.
        // Parameters to QnAMakerService are:
        // Required: qnaAuthKey, knowledgebaseId, endpointHostName
        // Optional: defaultMessage, scoreThreshold[Range 0.0 – 1.0]
        public BasicQnAMakerDialog() : base(new QnAMakerService(new QnAMakerAttribute(RootDialog.qnaAuthKey, RootDialog.qnaKBId, "No good match in FAQ.", 0.5, 1, RootDialog.endpointHostName)))
        { }

        // Override to also include the knowledgebase question with the answer on confident matches
        protected override async Task RespondFromQnAMakerResultAsync(IDialogContext context, IMessageActivity message, QnAMakerResults results)
        {
            if (results.Answers.Count > 0)
            {
                IMessageActivity response = context.MakeMessage();
                response.Text = "Here is the match from FAQ:  \r\n  Q: " + results.Answers[0].Questions[0] + "  \r\n A: " + results.Answers[0].Answer;
                response.Speak = response.Text;
                response.InputHint = "acceptingInput";
                await context.PostAsync(response);
            }
        }
    }

(See GitHub QnAMaker patch for a V4 C# example.)

Getting the most out of your free Azure subscription for development.

I discovered why my Azure free trial was eating cash. It seems that the free trial defaults to reasonably sized paid service tiers, not the free development tiers. So when you set up your Microsoft account (MSA) for development purposes, make sure you double- and triple-check that the plan you pick starts with an F (for free) and not an S.

When you use bot builder or QnA builder, you might need to go back after you create a project from a template and change the "App Service Plan" to something free. If you see your $ decreasing and you're not doing anything, you've configured something incorrectly.


Thursday 12 July 2018

Building Bots in Azure

I am starting to build Cortana bots. So far it's been interesting.

I had trouble building the Azure Functions bots from the template. Take the node.js example for the simple "echo" bot... On the first crack, the Azure "Test in Web Chat" didn't work; the errors implied there was a permissions issue.

I tried again, and had a deployment error on the bot function template.

I tried again, and on the third try it deployed. To my knowledge, I did nothing differently.

The advantage of a function bot over a web app bot is supposedly that you pay per invocation (that should be cheaper, right?). The issue with the example code for node.js on the function bot is that every potentially used library is embedded in index.js, with the two-line "echo" functionality buried in the middle!

WHY? Why? Well, JavaScript as a language doesn't have a '#include' statement. Client side, you put the includes on the document for your browser to take care of.

In node, we have require... which lets us load modules. But why is this not used in the function bot example? I figure it's a workaround. The downside is this: every time I edit the function in the portal for this example, I am touching a 200K-line file!
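
For comparison, here is roughly what the same echo bot could look like with the dependencies pulled in via require instead of inlined (a hedged sketch; the export shape for the Functions binding is my assumption based on the botbuilder-azure documentation, not the template's actual code):

// Hedged sketch: the echo bot with its libraries loaded from node_modules via require(),
// instead of being pasted into index.js.
var builder = require('botbuilder');
var botbuilder_azure = require('botbuilder-azure');

var connector = new botbuilder_azure.BotServiceConnector({
    appId: process.env.MicrosoftAppId,
    appPassword: process.env.MicrosoftAppPassword
});

var bot = new builder.UniversalBot(connector, function (session) {
    // The actual two-line "echo" functionality stays easy to find and edit.
    session.send('You said: ' + session.message.text);
});

// Assumed Azure Functions binding per the botbuilder-azure docs.
module.exports = { default: connector.listen() };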