Le Post Infeeny

Les articles des consultants et experts Infeeny

//Build 2017 – .NET Core / Standard / ASP.NET Core 2 / Cognitive Services

For this first day at Build, I attended several sessions revolving around .NET Core and the Cognitive Services running on Azure.

Three Runtimes, one standard… .NET Standard: All in Visual Studio 2017
Scott Hanselman, Scott Hunter

This session was more about a global overview of all the major frameworks available in Visual Studio than an in-depth talk on .NET Standard.

We were reminded that Microsoft recently published reference resources for .NET architecture at https://dot.net/architecture

To start, the Scotts explained what the available .NET platforms are (.NET Framework, .NET Core, Mono) and how .NET Standard makes it easier to share code across platforms.
One sentence that I found particularly relevant for describing .NET Standard is « .NET Standard is a contract that .NET platforms must comply with ».

No availability date was announced for .NET Standard 2.0 (currently in preview), but as said during the keynote, .NET Core 2.0 and ASP.NET Core 2.0 are also available today as previews.

Microsoft estimates that 70% of the NuGet packages out there are already .NET Standard 2.0 compliant, thanks to the much larger API surface covered by this new version of .NET Standard.
Those packages don’t need to be recompiled to be used on any platform!
The other 30% are mostly framework-specific libraries, like WPF or ASP.NET ones, so they will never be .NET Standard compliant.

When VS2017 was first introduced, it came with a Live Unit Testing feature that was only usable with .NET Framework projects.
It was announced that the upcoming VS2017 Update 3 (currently in preview) will bring this feature to .NET Core projects!
This update will also come with support for C# 7.1.

After that we were given a brief tour of Visual Studio for Mac and how it is getting closer and closer to the classic Visual Studio: same tooling, same project templates, and so on.

A conference named « .NET Conf » will take place in September 2017, for 3 days, and will cover all kinds of .NET topics.

Introducing ASP.NET Core 2.0
Daniel Roth, Maria Naggaga Nakanwagi, Scott Hanselman

This session was about the differences and new features introduced by ASP.NET Core 2.0 preview compared to ASP.NET Core 1.1.
I recommend that you watch the replay as there was a lot of technical goodness that I can’t really transcribe here.

If you want to develop ASP.NET Core 2.0 apps, you will need VS2017.3 currently in preview.
Microsoft says that ASP.NET Core 2.0 is 25% faster than 1.1.

The first thing we saw was the dotnet CLI and how quick and easy it is to bootstrap an ASP.NET Core app without ever opening Visual Studio.
It boils down to two commands: dotnet new and dotnet run.
The first one creates a sample ASP.NET Core project and the second one compiles and runs it.

« dotnet new » bootstraps an ASP.NET Core project based on the version of the dotnet tooling.
This version can be set through a « global.json » file placed either in your user folder (to define the dotnet tooling version globally) or per folder.
That way you can have dotnet 1.1 running inside one folder and dotnet 2.0 running inside another.

One great announcement is that Microsoft resolved several shortcomings of the « everything as a NuGet package » strategy.
.NET Core introduced a lot of small packages, especially for ASP.NET Core.
For example, you had one package for serving static files, one for MVC, one for configuration, and so on.
This resulted in a lot of packages to download on our machines and significantly larger packages to publish in production (every DLL that was used was uploaded along with our web app).

Those shortcomings are resolved by a single new NuGet package named « Microsoft.AspNetCore.All », which includes all the small packages used almost all the time, like MVC.
And to avoid downloading gigabytes of packages, « Microsoft.AspNetCore.All » already ships with version 2.0 of the dotnet tooling.
That means that when publishing our web app, we only need to deploy our own DLLs, resulting in a much smaller package.

ASP.NET Core 2.0 introduces a new startup pattern.
Much of the host bootstrapping (configuration sources, logging, Kestrel setup) is no longer written by hand in Startup.cs: it now goes through WebHost.CreateDefaultBuilder in the Program.cs file.
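As a rough sketch (not the exact code shown during the session, and the Startup class is assumed to be the usual one with ConfigureServices/Configure), the new Program.cs looks like this:

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    // CreateDefaultBuilder wires up Kestrel, configuration sources and logging,
    // which previously had to be configured explicitly in our own code.
    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build();
}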

ASP.NET Core 2.0 also introduces Razor Pages, a new take on Web Pages.
We can now create Razor pages (cshtml) that don’t need to be backed by a controller.
For that, just create a folder named « Pages » and create cshtml files inside it.

Those cshtml files need to have the « @page » tag defined inside them to be treated as Pages.

Then if you call the URL « http://localhost/mypage », it will resolve to /Pages/mypage.cshtml. No need for a controller that would only return a view.

More complex scenarios are enabled by defining a « PageModel », a class that backs the page declared with the « @page » directive.
You can then define handlers for the GET, PUT, POST, etc. verbs, and also read parameters passed in the URL.
For example, for the URL « http://localhost/mypage/5 », you can declare the route template @page "{id:int}".
The PageModel instance of the page will then have access to the id passed in the URL.
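As a rough sketch (with made-up names, not the code shown on stage), the page at Pages/mypage.cshtml would start with « @page "{id:int}" » and « @model MyPageModel », and its page model could look like this:

using Microsoft.AspNetCore.Mvc.RazorPages;

public class MyPageModel : PageModel
{
    public int Id { get; private set; }

    // Handles GET /mypage/5 : the id comes from the route template declared in @page
    public void OnGet(int id)
    {
        Id = id;
    }
}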

It is no longer necessary to add the Application Insights package when creating ASP.NET Core apps.
When deploying to Azure, Azure will recognize the type of application and offer to automatically inject Application Insights.
This also enables the new « snappoints » (snapshot debugging) feature for debugging live production apps directly from Visual Studio.

Using Microsoft Cognitive Services to bring the power of speech recognition to your apps
Ivo Santos, Khuram Shahid, Panos Periorellis

This session talked about the speech recognition services available in Cognitive Services, especially TTS/STT, Custom Speech Service and Speaker Recognition.

The Speech API is rather classic and enables Text-to-Speech and Speech-to-Text scenarios.
Using it is simple: you send audio (or text) to the service with your subscription key and get the transcription (or synthesized speech) back.

Custom Speech Service is interesting as it allows speech recognition to be finely customized to our business requirements.
An example that was given is a kiosk in an airport where customers ask for the next flight to a city, in a noisy environment (announcements, chattering, etc.).
Custom Speech Service offers a portal where we can train the speech recognition model with sample acoustic data, and where we can also upload vocabularies specific to our business.

Using Custom Speech Service in code is exactly the same as the Speech API: you just need to specify a URL, given by the portal, that represents your trained model.

Lastly, we were shown a demo of the new Speaker Recognition API, which recognizes who is speaking.
This is particularly promising as it is trained on the fly: it first differentiates the people talking, and is then capable of recognizing the same person throughout the conversation.
The demo was a bot running a trivia game that attributed points to the person answering just by recognizing who talked. But it didn’t really work.

//Build 2017 Day1

This year Build is back « home » in Seattle. Together with @timothelariviere, we will be covering the three days of the conference.

Unlike previous editions of //Build, this year the first day is dedicated to AI and Azure.

Keynote announcements:

  • A few figures:
    • 500 million devices running Windows 10!
    • 100 million active Office 365 users
    • 12 million organizations in Azure Active Directory
    • 90% of the top 500 companies use the Microsoft Cloud
  • Microsoft is moving from Mobile First, Cloud First to Intelligent Cloud / Intelligent Edge
  • Azure IoT Edge (runs on Windows/Linux and on small devices)
  • Several demos showcasing AI: Workplace Safety (real-time analysis of video streams to detect risks on a construction site), intelligent meetings, Cortana…
  • Azure:
    • Cloud Shell: a CLI available directly in the Azure web portal (and the app), nothing to install!
    • Azure mobile app available on iOS and Android (the UWP version is coming soon)
    • Debugging production with snapshots (great feature)
    • MySQL and PostgreSQL as a service (big announcement)
    • Azure Cosmos DB
    • Azure Stack (the Azure extension to get an « Azure on-premises »)
  • Visual Studio 2017 for Mac goes GA, with .NET Core 2.0 in preview!
  • Support for Azure Functions and Logic Apps in VS 2017
  • Cognitive Services: 4 new services: Bing Custom Search, Custom Vision Service, Custom Decision Service and Video Indexer
  • Bots: three new channels: Cortana Skills, Skype for Business (FINALLY!) and Bing.
  • AI translation for PowerPoint: a plugin for live translation of a PowerPoint presentation

And to wrap up, I’ll let you enjoy this video =>

//Build 2017 – AI Immersion Workshop

Ahead of the three days of Build, Microsoft organized a few full-day sessions.
I attended one of them: the « AI Immersion Workshop ».

As the title suggests, it was focused on AI running on Azure.

It started with a keynote presenting all the available services for AI and Machine Learning on Azure.
To begin, we were introduced to the Cognitive Services (face recognition, image description, and so on).
You can find more on this here: https://azure.microsoft.com/en-us/services/cognitive-services/
The source code of a sample application using Cognitive Services can be found here: https://github.com/Microsoft/Cognitive-Samples-IntelligentKiosk

After that, the keynote focused on machine learning and image recognition.
A sample app on GitHub (https://github.com/Azure/Embarrassingly-Parallel-Image-Classification) shows how to use machine learning to analyze aerial images and determine what kind of terrain it is.

There were also a few customer stories about machine learning: lung cancer detection and electric pole inspection by drones.

We were reminded that SQL Server 2017 has been available in preview for a few weeks now, and that it supports AI stored procedures written in R or Python.
This is especially interesting as it is a best practice in AI/machine learning to keep the computation logic close to the data.

After this keynote, there were a handful of workshops.
I chose to attend « Building intelligent SaaS applications »

In this workshop, we deployed a multitenant ASP.NET website that sells venue tickets, and we configured machine learning for it to answer questions like:

  • Which venues are performing the worst?
  • Will my future venue sell out if located in Seattle?
  • How much of a discount should I give for my venue to sell out?
  • Are there customers who go to multiple venues in a given area?

Everything we did in that workshop can be found on GitHub : https://github.com/Microsoft/WingtipSaaS

Infeeny at Microsoft Build 2017 in Seattle, to innovate better!

Microsoft Build 2017 annual conference in Seattle (United States)

Mehdi and Timothé, two Infeeny experts (from the Applications & Digital practice), are « our reporters » from May 10 to 12, 2017, live from Seattle, attending Microsoft Build 2017* to give us an early look at the new features announced during this annual conference, which takes place at the Washington State Convention Center in Seattle (United States).

To follow the « Build » conference online: build.microsoft.com

The first keynote takes place on Wednesday, May 10 at 5 PM, and the program is available at: https://mybuild.microsoft.com/sessions

*Microsoft Build is aimed at developers of software and applications running on the Windows operating system.

Power BI: new offer, new pricing!

Starting June 1, 2017, Power BI pricing is changing

There are now 3 offers:

  • Power BI Free
  • Power BI Pro
  • Power BI Premium

Read more

Compile and minify .less files into .min.css files

Goal

The goal of this article is to add a Gulp task to your builds defined in VSTS in order to compile and minify .less files.

Prerequisites

  • A VSTS project with an ASP.NET web application.
  • A build definition for the project’s ASP.NET application
  • Install Node.js:
    • On the development machine, to define the Gulp task
    • On the build server, to run the Gulp task

Creating the Gulp task

The Gulp task that compiles and minifies the .less files into .min.css files is created on a development machine, inside Visual Studio 2015.

Development machine

Prerequisites

In addition to Visual Studio 2015, you need to install:

  • Node.js: to manage and run the npm packages. Download it here
  • Task Runner Explorer: a Visual Studio extension used to test and manage Gulp tasks. Download it here

Configuring gulp for the web application

Two files must be added to the web application project in order to configure gulp:

  • gulpfile.js: contains the definitions of the Gulp tasks to run
  • package.json: lists the packages required to run the tasks defined in the previous file

Script of the « less » gulp task that compiles and minifies the .less files into .min.css

var gulp = require('gulp');
var less = require('gulp-less');
var path = require('path');
var watch = require("gulp-watch");
var rename = require('gulp-rename');
var minifyCSS = require('gulp-minify-css');
var plumber = require('gulp-plumber');

gulp.task('less', function () {
    return gulp.src('./Content/**/*.less')
        .pipe(plumber())
        .pipe(less({
            paths: [path.join(__dirname, 'less', 'includes')]
        }).on('error', function (err) {
            console.log(err);
        }))
        .pipe(minifyCSS().on('error', function (err) {
            console.log(err);
        }))
        .pipe(rename({ suffix: '.min' }))
        .pipe(gulp.dest('./content/'));
});
And below, the associated package.json configuration file:
{
  "name": "Mon projet",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "gulp": "^3.9.1",
    "gulp-less": "^3.3.0",
    "gulp-rename": "^1.2.2",
    "gulp-minify-css": "^1.2.4",
    "gulp-watch": "^4.3.11",
    "gulp-plumber": "^1.1.0"
  }
}

Testing the Gulp task

In Visual Studio:

  • Right-click the gulpfile.js file
  • In the « Solution Explorer », click « Task Runner Explorer »
  • In the « Task Runner Explorer » window, right-click the « install » task under package.json
  • Click « Run » to install the packages locally to the solution
  • In the « Task Runner Explorer » window, right-click the « less » task
  • Click « Run » to test the task

Adding the task to the VSTS build

Prerequisites on the build server

The build server must have everything required to compile the solution. Node.js must therefore be installed on the build server. Download it here

Updating the VSTS build definition

  • Log in to VSTS
  • Go to « Build & Release »/Builds
  • Edit the build definition to which you want to add the Gulp task
  • Add an « npm » task
  • Configure this task to install the npm packages required to run the previously created gulp task « less »:
    • Working folder: directory of the project containing the package.json file
    • npm command: install
    • arguments: -config
  • Add a « Gulp » task to run the « less » task
  • Configure this task to run the previously created gulp task « less »:
    • Gulp File Path: path of the project directory containing the gulpfile.js file
    • Gulp Task(s): name of the task to run
    • Arguments: arguments required by the task
    • Working Directory: path of the working directory; since we use the npm packages defined for the project, this is the project directory
  • Click « Save »

Now your VSTS build runs a Gulp task that compiles and minifies your .less files!

 

Getting started with Azure Search and the .NET SDK


In order to provide an alternative to ElasticSearch as a more convenient and straightforward solution, Microsoft introduced Azure Search, an Azure service based on ElasticSearch. Both solutions provide a dedicated environment for indexing and querying structured or semi-structured data. Azure Search, however, focuses on simplicity, at the expense of some of the features you may expect if you come from the more complex engines.

For starters, Azure Search is more rigid: it is contract-based, meaning you have to define the indexes and the structure of your documents (indexed data) before you can index anything. The document structure itself is simplified and aimed at simple use cases and you won’t have all the options ElasticSearch can offer. The most important limitation to keep in mind is that you cannot include complex types in your document structure.

You should really be aware of these limitations when considering Azure Search as a production tool. But if you’re going for a quick, low maintenance and scalable solution, it may very well fit your needs.

Now if you’re still there, let’s start by deploying it on Azure.

Deploying your Azure Search service

Provided you already have an Azure subscription running, setting Azure Search up couldn’t be easier. Just go to the official service page and hit « Try search azure now »… or just click this link.


The Azure Search service creation page. Tier prices are removed because they may vary

The service configuration is straightforward: just enter an identifier to be part of the URL of your search service (in the example, « sample-books »), select the usual Azure subscription, resource group and location, and choose the pricing option that best matches your needs. You can even select a free plan if you just want to try it out.

Once you hit the Create button and the service is deployed, you can access its dashboard page and start configuring it.


Our new Azure Search service’s Overview page

As you can see there, the service is available right away at the URL provided in the configuration, listed under URL on the overview page.

However, in order to make it operational, we have to create an index. If you’re not familiar with indexes, they are like collections of documents (data) sharing a similar structure, that are processed in order to optimize searches. Because documents live in an index, we are not going anywhere without one.

So let’s get this going by clicking the Add index button.


The Azure Search index creation page

This takes us to another config view where you have to input the name of the index, and the structure of the documents in the index. The structure works pretty much like any graphic database table creation form — you can add fields with a type, and a handful of options that let the engine know how your document should be processed, which in turn affects how the fields behave in search queries.

The mandatory « id » field that comes pre-created will be used as a unique identifier for our documents — if you are trying to index a document and the id has already been indexed, the existing document will be updated.

In our example, each document represents a book. So we set up a few fields that we want indexed for our books.
Here is a quick breakdown of the options for your fields:

  • Retrievable: determines whether the field will be included in the query responses, or if you want to hide it;
  • Filterable: determines the ability to filter on the field (e.g. take all documents with a pageCount value greater than 200);
  • Sortable: determines the ability to sort by the field;
  • Facetable: determines the ability to group by the field (e.g. group books by category);
  • Searchable: determines whether the value of the field is included in full text searches

Our example is set up so that the title, author and description are processed in full search, but not the category. This means that a full text search query for « Mystery » will not include books of the category Mystery in the results.
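As a side note, the same index definition could also be created from code rather than through the portal. Here is a rough sketch, assuming an admin SearchServiceClient like the one we build in the next part, and assuming we also want category to be Filterable and Facetable and pageCount to be Filterable and Sortable, matching the options discussed above:

using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

private static void CreateBooksIndex(SearchServiceClient serviceClient)
{
    // Same structure as the portal example: a key plus a few searchable,
    // filterable and sortable fields for our books.
    var definition = new Index()
    {
        Name = "mybookindex",
        Fields = new[]
        {
            new Field("id", DataType.String)                 { IsKey = true, IsRetrievable = true },
            new Field("title", DataType.String)              { IsSearchable = true, IsRetrievable = true },
            new Field("author", DataType.String)             { IsSearchable = true, IsRetrievable = true },
            new Field("description", DataType.String)        { IsSearchable = true, IsRetrievable = true },
            new Field("category", DataType.String)           { IsFilterable = true, IsFacetable = true, IsRetrievable = true },
            new Field("pageCount", DataType.Int32)           { IsFilterable = true, IsSortable = true, IsRetrievable = true },
            new Field("isAvailableOnline", DataType.Boolean) { IsFilterable = true, IsRetrievable = true }
        }
    };

    serviceClient.Indexes.Create(definition);
}

Keeping the definition in code like this also makes it easy to recreate the index in another environment.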

Once you are done with the creation, your index is ready, although still empty and sad… so let’s fix that!

Indexing data

The next thing to do is indexing actual documents. In our example, this means indexing books.

There are two ways to do this:

  • Adding a data source and an indexer, meaning that Azure Search is going to crawl your data source (Azure Storage, DocumentDB, etc) periodically to index new data;
  • Indexing documents through the REST API, either directly, or indirectly with an SDK.

And of course, nothing prevents you from doing both. But in our case, we are going to index documents programmatically, using C# and the .NET Azure Search SDK.

So let’s dig into the coding. As a side note, if you’re allergic to code, you can skip right to the start of the next part, where we play around with Azure Search’s query interface.

First, we’re going to create a console application and add the Azure Search SDK NuGet package.


Installing the Azure Search SDK through the NuGet Package Manager view on VS2015

Alternatively, you can run the following NuGet command:

> Install-Package Microsoft.Azure.Search

Next, we are going to need a Book POCO class with properties matching the indexed fields. Rather than breaking the naming conventions of C# and using camelCase for our properties, we are going to use the SerializePropertyNamesAsCamelCase attribute to tell the SDK how it is supposed to handle our properties when uploading or downloading documents.

So here is our Book.cs:

[SerializePropertyNamesAsCamelCase]
public class Book
{
    public string Id { get; set; }
    public string Title { get; set; }
    public string Author { get; set; }
    public string Description { get; set; }
    public string Category { get; set; }
    public int PageCount { get; set; }
    public bool IsAvailableOnline { get; set; }
}

Next, we will need to create a client that connects to our Azure Search service. Using a bit from the official documentation, we can write the following method:

private static SearchServiceClient CreateAdminSearchServiceClient()
{
    string searchServiceName = "sample-books";
    string adminApiKey = "Put your API admin key here";

    SearchServiceClient serviceClient = new SearchServiceClient(searchServiceName, new SearchCredentials(adminApiKey));
    return serviceClient;
}

Note that you can find your keys in the Azure Service page, under the « Keys » section. You have to use an admin key in order to create indexes or index documents.

Now let’s write a method that indexes a few sample books:

private static void UploadDocuments(ISearchIndexClient indexClient)
{
    var books = new Book[]
    {
        new Book()
        {
            Id = "SomethingUnique01",
            Title = "Pride and Prejudice",
            Author = "Jane Austen",
            Category = "Classic",
            Description = "Set in the English countryside in a county roughly thirty miles from London...",
            IsAvailableOnline = true,
            PageCount = 234
        },
        new Book()
        {
            Id = "SomethingUnique02",
            Title = "Alice's Adventures in Wonderland",
            Author = "Lewis Carroll",
            Category = "Classic",
            Description = "Alice was beginning to get very tired of sitting by her sister on the bank...",
            IsAvailableOnline = true,
            PageCount = 171
        },
        new Book()
        {
            Id = "SomethingUnique03",
            Title = "Frankenstein",
            Author = "Mary Wollstonecraft Shelley",
            Category = "Horror",
            Description = "You will rejoice to hear that no disaster has accompanied...",
            IsAvailableOnline = true,
            PageCount = 346
        }
    };

    // Make a batch with our array of books
    var batch = IndexBatch.MergeOrUpload(books);

    // Query the API to index the documents
    indexClient.Documents.Index(batch);
}

As you can see, the SDK lets us use our Book objects directly in its upload methods, performing the REST API call for us.

Note that for the purpose of simplicity, we’re not handling exceptions, but you should really do it in production code.

Also keep in mind that your documents will not be instantly indexed. You should expect a little delay between document upload and their availability in index queries. The delay depends on the service load, but in our case a few seconds should be enough.
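If you would rather have the program wait for that delay instead of guessing, one simple (and admittedly naive) approach is to poll the index document count until it reaches what you expect. A sketch:

private static void WaitForIndexing(ISearchIndexClient indexClient, long expectedCount)
{
    // Naive helper (sketch): poll the document count a few times, pausing
    // between attempts, and give up silently if the count never catches up.
    for (int attempt = 0; attempt < 10; attempt++)
    {
        if (indexClient.Documents.Count() >= expectedCount)
        {
            return;
        }
        System.Threading.Thread.Sleep(2000);
    }
}

Calling WaitForIndexing(indexClient, 3) right after UploadDocuments would cover our three sample books.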

So let’s set up our program to call these methods and index the books.

static void Main(string[] args)
{
    var serviceClient = CreateAdminSearchServiceClient();
    // Get the index client by name - use your index name here
    var indexClient = serviceClient.Indexes.GetClient("mybookindex");
    UploadDocuments(indexClient);
}

After running the program, if everything went fine, then after the aforementioned delay your data should be indexed.

You can check that your documents have been uploaded on the Azure dashboard page.


On the Azure overview page for our service, we can see that there are 3 documents indexed

Alright! When you’re done with the indexing, all that’s left to do is query!

Querying documents

So, our Azure Search service is up and running, with an operational index and some documents to go along. Let’s get to the core feature: querying documents.

For the purpose of illustration and to get a better understanding of what we will be doing next with the SDK, we are going to start with the Azure Search query interface called Search Explorer.

And sure enough, you can access it through the Search Explorer button on the overview dashboard.


The Azure Search Explorer default view

The Query string field roughly corresponds to the part after the « ? » in the URL when you query the REST API with a GET request.

In the Request URL field below, you can see the full URL that will be called to execute your query.

And finally, the Results field shows the raw JSON response from the service.

Now let’s try it out with some examples:


An example of a full text search

In this example, we are searching for the term « disaster ». This will cause Azure Search to perform a full text search on every field that is marked as Searchable in the index document structure. Because the book « Frankenstein » has the word « disaster » in its description field, and that field is marked as Searchable, it is returned.

If we replace our search term with « Horror », the service returns no results, even though the value of category is literally « Horror » in the case of Frankenstein. Again, this is because our category field isn’t Searchable.


An example of a search using filters

This second example retrieves all books with more than 200 pages. I won’t explain the whole syntax here because there would be too much to write and it is already explained in the search documentation. In essence, we are using the $filter parameter to limit results to the documents satisfying the condition « pageCount gt 200 », which means that the value of pageCount has to be greater than 200 for a document to pass the filter.

Now that we have some clues about how the search API works, we are going to have the SDK do half of the job for us. Let’s go back to our C# .NET project.

The first thing we want to start with when querying is a SearchServiceClient… and I know we already built one in part 2, but we are not going to use this one. When you are only querying, you’ll want to use a query API key instead of an admin key, for security reasons.

You can get those keys in the Keys section of the Azure Search service page, after clicking the Manage query keys link.
You are free to use the default one. In my case, I added a new key called « mykey » because I don’t like using an unnamed key and obviously « mykey » is much more descriptive.

So let’s write our new client creation method:

private static SearchServiceClient CreateQuerySearchServiceClient()
{
    string searchServiceName = "sample-books";
    string queryApiKey = "YOUR QUERY API KEY HERE";

    SearchServiceClient serviceClient = new SearchServiceClient(searchServiceName, new SearchCredentials(queryApiKey));
    return serviceClient;
}

Of course this is almost the same code as before and we should really refactor it, but I’m leaving that as an exercise to the reader. For the sake of simplicity, of course.

Once we got that, we are going to write the methods that query our books. Let’s just rewrite the tests we have done with the Search Explorer, using the SDK. We will write 2 separate methods, again for the sake of clarity:

private static Book[] GetDisasterBooks(ISearchIndexClient client)
{
    // Query with the search text "disaster"
    DocumentSearchResult<Book> response = client.Documents.Search<Book>("disaster");

    // Get the results as Book objects
    IList<SearchResult<Book>> searchResults = response.Results;
    return searchResults.Select(searchResult => searchResult.Document).ToArray();
}

private static Book[] GetBooksWithMoreThan200Pages(ISearchIndexClient client)
{
    // Filter on documents that have a value in the field 'pageCount' greater than (gt) 200
    SearchParameters parameters = new SearchParameters()
    {
        Filter = "pageCount gt 200"
    };

    // Query with the search text "*" (everything) and include our parameters
    DocumentSearchResult<Book> response = client.Documents.Search<Book>("*", parameters);

    // Get the results as Book objects
    IList<SearchResult<Book>> searchResults = response.Results;
    return searchResults.Select(searchResult => searchResult.Document).ToArray();
}

What we can see here is that all the URL parameters we could use are available on the optional SearchParameters object, except the search text itself, which is passed as a separate argument of the Search method.
And once again, the SDK is capable of using our Book class directly, retrieving Book objects by transparently deserializing the response from our Azure Search service.
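To give an idea of what else SearchParameters can express (filters combined with sorting, projections and paging), here is a small sketch along the same lines; it is not needed for the rest of the article, and it assumes the category field was made Filterable when the index was created:

private static Book[] GetSomeClassics(ISearchIndexClient client)
{
    // A richer query (sketch): the 10 longest classics with more than 150 pages,
    // sorted by page count, returning only the title and author fields.
    SearchParameters parameters = new SearchParameters()
    {
        Filter = "category eq 'Classic' and pageCount gt 150",
        OrderBy = new[] { "pageCount desc" },
        Select = new[] { "title", "author" },
        Top = 10
    };

    DocumentSearchResult<Book> response = client.Documents.Search<Book>("*", parameters);
    return response.Results.Select(searchResult => searchResult.Document).ToArray();
}

Fields left out by Select simply come back with their default values on the deserialized Book objects.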

Now let’s use these methods in our program:

static void Main(string[] args)
{
    var queryServiceClient = CreateQuerySearchServiceClient();
    var queryIndexClient = queryServiceClient.Indexes.GetClient("mybookindex");

    var disasterBooks = GetDisasterBooks(queryIndexClient);
    Console.WriteLine("GetDisasterBooks results: " + string.Join(" ; ", disasterBooks.Select(b => b.Title)));
    var moreThan200PagesBooks = GetBooksWithMoreThan200Pages(queryIndexClient);
    Console.WriteLine("GetBooksWithMoreThan200Pages results: " + string.Join(" ; ", moreThan200PagesBooks.Select(b => b.Title)));

    Console.ReadKey(false);
}

The client part is similar to what we did when we were indexing documents, and the rest is just getting the results from the query methods and displaying them with Console.WriteLine.

And running this program gets us this beautiful output:

GetDisasterBooks results: Frankenstein
GetBooksWithMoreThan200Pages results: Pride and Prejudice ; Frankenstein

Going deeper

We have seen how to deploy an Azure Search service, how to create and configure its indexes, and how to use the SDK for both indexing and querying documents. As mentioned in the previous parts, there is a bit more that you can do with Azure Search that goes beyond the scope of this article.

If you want to go further, here are some points we haven’t discussed, along with links providing documentation on the topic:

Thanks for reading and I hope this article at least made you want to read more classical literature.

[SSRS][PowerBI] Introduction to Power BI in SQL Server Report Server (SSRS)

Following numerous requests from users who do not want to see their data leave for the Microsoft cloud, Microsoft has made 2 solutions available on the Power BI side:

  • Setting up a gateway so that the data does not have to be exported to the cloud
  • Setting up an SSRS report server able to host Power BI reports built with Power BI Desktop

In this introduction we will look at the solution that has just been released in preview in the vNext of SQL Server

Read more

Setting up continuous deployment on-premises from VSTS

Goal

The goal of this article is to show you how to set up a continuous deployment pipeline to a local IIS server hosted on your own network, from source code stored in Visual Studio Team Services (VSTS).

Context

You store the source code of your various application projects in VSTS, but you would like to set up a continuous deployment pipeline to your own on-premises servers, which have no internet access, from the source code held in your VSTS environment.

The solution is to go through an intermediate build server that also performs the local deployment. This server therefore has access both to VSTS and to the target IIS server hosted on the internal network.

Build server

In our case we used a build server running Windows Server 2012 R2, but the development machine can also be used as the build server, provided the following prerequisite is met:

The development environment must be installed on the build server along with the build agent, so that the agent can compile the source code held in VSTS.

Installation and configuration

Development environment

It is preferable to install and configure the development environment before configuring the build agent, because the agent automatically detects the server environment when it connects to VSTS.

In our case we installed:

  • Visual Studio 2015 Enterprise, the minimum recommended version
  • Node.js, to compile the .less files with gulp
  • .NET Framework 4.6.2

Build agent

Security prerequisites

To install and configure the build agent, use an account that is both an administrator of the build server and an administrator in VSTS.

Authentication with a « Personal Access Token »
  • Log in to your VSTS with the administrator account you want to use
  • Open the account profile
  • Go to Security
  • Go to Personal Access Tokens
  • Click the button to add a new token
  • Check only the Agent Pools (Read, manage) scope, and make sure the others are left unchecked
  • Then click the button to create the token
  • Copy the generated token

Note: keep the generated token in a safe place, it will be needed to configure and remove the agent.

Installation

To install the agent on the build server:

  • Log in to the build server with an administrator account
  • From the server, log in to your VSTS with an administrator account.
  • Go to Settings\Agent Pools
  • Click « Download agent »

A window opens listing the commands to run to install the agent on the various operating systems.

Start the download and, once it has finished, run the PowerShell commands from the « Create Agent » block. This extracts the previously downloaded zip into the C:\agent directory.

Configuration

Creating the pool

When configuring the build agent, we will attach it to a pool created specifically for it:

  • Log in to your VSTS with an administrator account.
  • Go to Settings\Agent Pools
  • Click « New Pool… »
  • In the input window, enter the name of the new pool: « Interne »

Build agent

We can now move on to configuring the build agent:

  • In a PowerShell console running as administrator, go to the directory where the agent was installed
  • Run the agent configuration command

A PowerShell command-line configuration wizard starts:

Connecting to VSTS:

  • Enter the VSTS URL: https://{your-account}.visualstudio.com
  • Select the PAT authentication type (configured earlier with the account you are using)
  • Enter the previously generated token

Registering the agent in VSTS:

  • Enter the name of the previously created agent pool: Interne
  • Enter the agent name; by default the server name is suggested: BuildInterne
  • Enter the agent working folder: _workVSTS
  • Choose to run it as a service by answering « O »
  • Choose the default account

Result:

  • Log in to your VSTS
  • Go to Settings\Agent Queues

You should see the following result:

You can now use the « Interne » pool in the configuration of your builds.

Note: you can install several build agents on the same build server; they just need to use different installation directories.

Creating a VSTS build

  • Log in to VSTS
  • Go to « Build & Release »/Builds
  • Click New
  • Select the ASP.NET template and click « Apply »
  • Go to the « Triggers » tab to enable continuous integration
  • Specify the branch to watch
  • Go to the « Options » tab to select the internal agent queue created earlier
  • Go to the « Variables » tab in order to change:
    • BuildConfiguration, depending on the configuration you want (Release, Debug, or one you created in your solution). In our case, we created a build configuration named « Developpement »
  • Click « Save »

 

Automatic deployment

The goal is to automatically deploy to the target IIS server the latest package compiled by the build pool we just created with the « BuildInterne » agent.

Configuring the target server

In our case, we deploy automatically to an IIS web server running Windows Server 2012 R2.

On this server, WinRM (Remote Management) must be enabled.

WinRM

To enable WinRM on Windows Server 2012:

  • Log in to the target server
  • On the server, open Server Manager > Local Server

Target web site

In our example, we will set up automatic deployment to an existing web site that we create manually on the server, although this step could perfectly well be automated as part of the deployment process.

  • Log in to the target server
  • Open IIS Manager on the server
  • Add a new web site, for example « WebInterne », on a free port

Note: remember to open the chosen port on the target server

Creating a release

  • Log in to VSTS
  • Go to « Build & Release »/Releases
  • Click « Create release definition »
  • Select Empty and click Next
  • Select your project
  • Select the build definition you created earlier
  • Check « Continuous Deployment » so that the deployment runs after each new build.
  • Click Create

Adding tasks

Now that the release has been created empty, we need to add the tasks it will run.

  • Change the name of the release: Interne
  • Link the release to a deployment package:
    • Click « Link to an artifact source »
  • Select the deployment queue:
    • The one we created earlier: Interne
  • Click « Add tasks »:
    • Windows Machine File Copy
      • Add the « Windows Machine File Copy » task, to copy the package to deploy onto the target server.
      • Configure this copy task:
        • Select the compressed file in the previously linked package
    • WinRM – IIS Web Deployment
      • Add the « WinRM – IIS Web Deployment » task
        • If this task is not available, add the extension to your VSTS from the Visual Studio Marketplace here.
      • Configure the task:
        • WinRM:
          • Machines: network name of the target server
          • Admin Login/Password: login and password of an admin account on the target server
          • Protocol: the protocol used by WinRM on the target server
        • Deployment:
          • Web Deploy Package: name and copy path of the deployment package, as defined in the previous task
          • Website Name: name of the previously created web site: WebInterne
  • Go to the « Triggers » tab:
    • Check « Continuous Deployment »
    • Select the name of the build to watch and the branch for which the build was made

 

 

Activating Office 365 groups is not an option

I see a lot of different reactions when it comes to Office 365 groups. But there is one thing that unites close to every IT department I know: deactivate it, hide it! (and pray that users never become aware of its existence). Caricature? Maybe.

Reasons ?

  • We are not ready
  • We can’t control everything yet
  • We have difficulties matching every tool with a use case
  • We only want users to have the tools we decided they needed

Here is why those arguments are irrelevant in my opinion.


Everything is built upon groups

This is SaaS. Microsoft is deciding (with feedback from users of course) what the future is. And Microsoft decided the future will be built upon groups.

Lots of features have rolled out, and others are rolling out right now. And not only for First Release customers. The only way to go back to good old SharePoint sites: stop using Office 365.

We are building technical debt

Technical debt is something we fight a lot as developers. Every problem, every bug that we just hide will come back one day. And it will be far more difficult to fix. We all know what happens when we shut down some work because we have other priorities. We will always have other priorities.

One day we will have to activate it, either because there is no way left to stop it or because no tool works without it. And it is going to hurt. A lot.

This is the perfect moment to prepare

Features are rolling out, one by one. It is easy to stay tuned, news after news. We still have time. The perfect moment to try it. To get feedback from users. To check which tools we prefer for our usages. To build our adoption strategy. We may be surprised, as users may prefer groups to the big SharePoint template we spent months building.

All the time we spend creating SharePoint branding that will be obsolete in 6 months, all the energy we spend trying to hide O365 features: we could instead be monitoring usage, communicating, building adoption strategies, building tools to do what groups don’t do.

What’s wrong with giving tools to users ?

Everything that is rolling out now is just a set of extra tools users can have. Why so much hate? It’s as if IT departments want to hand people a list of scenarios with a tool attached to each one. We want to control everything. We are control freaks. There are reasons we build procedures, of course. But sometimes we should let go.

Maybe users know better than us what they need. Maybe they are used to massive tooling in their private life. Maybe they are already using Slack + Dropbox + Trello and have the same features, just without you monitoring it. Maybe they just need a green light and will be delighted to use those tools. Maybe not. The best way to know is to go for it and get feedback. Feedback, iterative and lean strategies are not only for startups!

What is the worst case scenario ?

Users creating a huge number of groups, with information poorly organized? Users creating groups they shouldn’t create? Users uploading top-secret documents to external groups?

We can have power users with a global vision of the team and responsible for organizing work and data. Those people are already organizing other content in their team.

We can classify groups to remind users about what content should be in each group. We will soon be able to apply policies with each classification.

Users are adults, responsible for their actions. We don’t block every action that could possibly be dangerous just in case someone does something very stupid. At least that is not yet the case in my country. We should stop doing exactly that in IT every day, treating users like brainless children.

If you really need to ensure some documents will never ever possibly get out of your company, there are true solutions.

So what now ?

We don’t have to use everything now. We don’t have to love every feature. We don’t have to stop everything we had and migrate to groups. But we have to get used to them. And I think we will finally love using them.