Channel: Analysis Services Team Blog

On Demand Webinar “What’s New for SQL Server 2016 Analysis Services” available now


This week I did a live webinar on what’s new in SQL Server 2016 Analysis Services:

Everything you want to know about SQL Server 2016 Analysis Services. In this session we look at how Analysis Services tabular models simplify solving complex business problems using over 50 new DAX functions, and how new relationship types can help solve many-to-many issues. At the same time, improved performance allows faster loading and analysis of data. Finally, updated tools increase BI developer productivity and ease of use in Visual Studio 2015.

You can now watch the webinar on demand here.


Bidirectional cross-filtering whitepaper


Bidirectional cross-filtering is a new feature for SQL Server 2016 Analysis Services and Power BI Desktop that allows modelers to determine how they want filters to flow for data using relationships between tables. In SQL Server 2014, the filter context of a table is based on the values in a related table. With bidirectional cross-filtering, the filter context is propagated to a second related table on the other side of a table relationship. This can help you solve the many-to-many problem without writing complicated DAX formulas.

We released a whitepaper that describes this feature in detail; you can find it here. The whitepaper covers how the feature works and how it solves problems like the traditional many-to-many scenario, but it also points out other interesting use cases, such as using it for a date table and how it makes dynamic security much easier than before.
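
To see why bidirectional cross-filtering matters for many-to-many models, consider a toy sketch in Python. The tables and names below are invented for illustration; this is a conceptual analogy of filter propagation through a bridge table, not how the VertiPaq engine evaluates filters:

```python
# Hypothetical many-to-many model: customers share accounts through a
# bridge table. To total balances per customer, the filter must flow
# from Customer through the bridge to Account -- the direction that a
# single-directional relationship cannot propagate.

customers = {"C1", "C2"}
bridge = {("C1", "A1"), ("C1", "A2"), ("C2", "A2")}  # many-to-many links
balances = {"A1": 100, "A2": 200, "A3": 50}          # one row per account

def accounts_for_customer(customer):
    # Filter context propagated across the bridge (the "second hop").
    return {a for (c, a) in bridge if c == customer}

def total_balance(customer):
    return sum(balances[a] for a in accounts_for_customer(customer))

print(total_balance("C1"))  # 300: A1 + A2
print(total_balance("C2"))  # 200: A2 is counted for C2 as well
```

Note that account A2 contributes to both customers, which is exactly the behavior the classic DAX many-to-many workarounds had to reconstruct by hand.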

Introducing Tabular Model Explorer for SQL Server Data Tools for Analysis Services Tabular Projects (SSDT Tabular)


If you download and install the August 2016 release of SQL Server Data Tools (SSDT) for Visual Studio 2015, you can find a new feature in SSAS Tabular projects, called Tabular Model Explorer, which lets you conveniently navigate through the various metadata objects in a model, such as data sources, tables, measures, and relationships. It is implemented as a separate tools window that you can display by opening the View menu in Visual Studio, pointing to Other Windows, and then clicking Tabular Model Explorer. The Tabular Model Explorer appears by default in the Solution Explorer area on a separate tab, as illustrated in the following screenshot.

Tabular Model Explorer

As you no doubt will notice, Tabular Model Explorer organizes the metadata objects in a tree structure that closely resembles the schema of a tabular 1200 model. Data Sources, Perspectives, Relationships, Roles, Tables, and Translations correspond to top-level schema objects. But there are also exceptions, specifically KPIs and Measures, which technically aren’t top-level objects but rather child objects of the various tables in the model. However, having consolidated top-level containers for all KPIs and Measures makes it easier to work with these objects, especially if your model includes a very large number of tables. Of course, the measures are also listed under their corresponding parent tables, so that you have a clear view of the actual parent-child relationships. And if you select a measure in the top-level Measures container, the same measure is also selected in the child collection under its table—and vice versa. Boldface font calls out the selected object, as the following side-by-side screenshots illustrate for selecting a measure at the top level (left) versus the table level (right).

measuresselected

As you would expect, the various nodes in Tabular Model Explorer are linked to appropriate menu options that until now were hiding under the Model, Table, and Column menus in Visual Studio. It is no doubt easier to edit a data source by right-clicking its object in Tabular Model Explorer and clicking Edit Data Source than by opening the Model menu, clicking Existing Connections, selecting the desired connection in the Existing Connections dialog box, and then clicking Edit. This is great, even though not all treeview nodes have a context menu yet. For example, the top-level KPIs and Measures containers don’t yet have a menu, and while the Perspectives container does, its child objects do not. We will add further options in subsequent releases, including completely new commands that now make perfect sense in the context of an individual metadata object.

The same can be said for the Properties window. If you select a table, column, or measure in Tabular Model Explorer, SSDT populates the Properties window accordingly, but if you select a data source, relationship, or partition, SSDT does not and leaves the Properties window empty, as shown in the next screenshot comparison. This is simply because SSDT never had to populate the Properties window for the latter types of metadata objects before. Subsequent SSDT releases will provide more consistency and enable even more convenient editing scenarios through the Properties window. We just did not want to wait another one or two months with the initial Tabular Model Explorer release.

Properties window

The initial version already goes beyond what was previously possible in SSDT Tabular. For example, assume you have a very large number of measures in a model. Navigating through these measures in the Measure Grid can be tedious, yet Tabular Model Explorer offers a convenient search feature. Just type a portion of the name into the Search box and Tabular Model Explorer narrows down the treeview to the matches. Then select the measure object and SSDT also selects the measure in the Measure Grid for editing. It’s a start toward saying goodbye to Measure Grid frustration!

TME search

But wait, there is more! The Tables node and the Columns and Measures nodes under each table support sorting. The default is Alpha Sort, which lists the objects alphabetically for easy navigation, but if you’d rather list the objects based on their actual order in the data model, just right-click the parent node and select Model Sort. In most cases, Alpha Sort is going to be more useful, but if you need Model Sort on other parent nodes as well, such as Hierarchies and Partitions, let us know and we’ll add it in a subsequent release.

Note also that Tabular Model Explorer is only available for the tabular 1200 compatibility level or later. Models at compatibility level 1100 or 1103 are not supported because Tabular Model Explorer is based on the new Tabular Object Model (TOM).

And that’s about it for a whirlwind introduction of Tabular Model Explorer in SSDT Tabular. We hope you find this new feature useful, especially if your models are complex and contain a very large number of tables, columns, partitions, measures, and other metadata objects. Give it a try, send us your feedback through Microsoft Connect, community forums, or as comments to this blog post, and let us know what other capabilities you would like us to add. Import/export of selected objects? Drag and drop support? And stay tuned for even more capabilities coming to an SSDT Tabular workstation near you in the next monthly releases!

 

 

Introducing Integrated Workspace Mode for SQL Server Data Tools for Analysis Services Tabular Projects (SSDT Tabular)


When you work with the model designer in SSDT Tabular, you are working with a temporary Analysis Services database that SSDT Tabular automatically loads on a workspace server. During the initial project creation, you must point SSDT Tabular to the desired workspace server, which must be a tabular Analysis Services instance that you can control with the full permissions of an SSAS administrator. In a typical configuration, you would deploy a workspace server on the local computer running SSDT Tabular. Yet having to run SQL Server Setup to deploy a full SSAS instance in Tabular mode just for development purposes is burdensome. Now there is a better way: Integrated Workspace Mode!
The following screenshot shows the Tabular Model Designer dialog box displayed when creating a new tabular project by using the SSDT September release. Note the Integrated Workspace option. If you select it, SSDT Tabular no longer requires an explicit workspace server. Instead, it relies on its own internal Analysis Services instance.
integratedworkspace

In integrated workspace mode, SSDT Tabular dynamically starts its internal SSAS instance in the background and loads the database so that you can add and view tables, columns, and data in the model designer. If you add additional tables, columns, relationships, and so forth, you are automatically modifying the workspace database as well. Integrated workspace mode does not change how SSDT Tabular works with a workspace server and database. What changes is where SSDT Tabular hosts the workspace database.

For existing tabular projects that currently use an explicit workspace server, you can switch to integrated workspace mode by setting the Integrated Workspace Mode parameter to True in the Properties window, which is displayed when you select the Model.bim file in Solution Explorer, as highlighted in the following screenshot. Note that the Integrated Workspace Mode option does not let you configure any other workspace settings. SSDT Tabular uses default values for Workspace Database, Workspace Retention, and Workspace Server.

connecting to workspace

 

The Workspace Database and Workspace Server settings let you discover the name of the temporary database and the TCP port of the internal SSAS instance where SSDT Tabular hosts this database. By using this information, you can connect to the workspace database with SSMS or any other tool—as long as SSDT Tabular has the database loaded. The Workspace Retention setting, on the other hand, specifies that SSDT Tabular keeps the workspace database on disk, but no longer in memory after a model project is closed. This ensures a good user experience while consuming less memory than if the model was kept in memory at all times. If you want to control these settings, set the Integrated Workspace Mode property to False and then provide an explicit workspace server. An explicit workspace server would also make sense if the data you are importing into a model exceeds the memory capacity of your SSDT workstation. You can continue to use your existing workspace server and this is still fully supported.
The integrated workspace server is basically equivalent to the Developer edition of SQL Server Analysis Services, so you can try out advanced features, such as DirectQuery mode, which are typically only available with the Enterprise edition. Also note that SSDT Tabular will ship the latest version of the Analysis Services engine with every monthly release, so you automatically get the latest updates and new capabilities. However, keep in mind that the ultimate deployment target in your production environment must support the capabilities you use in the model. For example, if your production server runs Standard edition, then you will not be able to deploy a model that uses Enterprise-only features. If you use integrated workspace mode, make sure you test the deployment on a reference server early on to ensure your model is compatible with your production servers.

Another aspect worth mentioning is that the integrated workspace server is a 64-bit Analysis Services instance, while SSDT Tabular runs in a 32-bit environment of Visual Studio. Hence, if you are connecting to special data sources, make sure you install both the 32-bit and 64-bit versions of the corresponding data providers on your workstation. The 64-bit provider is required for the 64-bit Analysis Services instance and the 32-bit version is required for the Table Import Wizard in SSDT Tabular.

And that’s it for a brief introduction of Integrated Workspace Mode. We hope you find this new capability useful. Give it a try, send us your feedback through Microsoft Connect, community forums, or as comments to this blog post, and let us know what other capabilities you would like us to add to SSDT Tabular. As always, stay tuned for even more improvements in upcoming monthly SSDT releases.

Introducing Azure Analysis Services

We are pleased to announce the availability of the Microsoft Azure Analysis Services preview.

With Azure Analysis Services, BI professionals can create BI semantic models based on data that resides in the cloud or on-premises, whether that’s SQL Server, Azure SQL Database, Azure SQL Data Warehouse, or other data sources, to provide business users with a simplified view over their data. Business users can then choose their preferred data visualization tool, such as Power BI, Excel, or others, to analyze their data.

To learn more, please read the blog post here.

Improving the Measure Grid in SSDT Tabular


As the name implies, the measure grid is an SSDT Tabular feature to define and manage measures, as illustrated in the following screenshot. It is available for each table when you work in Data View in Tabular Model Designer. You can toggle it on and off by using the Show Measure Grid option on the Table menu.

Old Measure Grid UI

The measure grid is not without shortcomings and receives a fair share of customer feedback. Among other things, drag-and-drop or copy-and-paste operations are currently not supported. It is also hard to locate the measure you want if your table has many measures because the grid does not sort the measures alphabetically and clips their names if the cell size is too small, which it usually is. You can increase the cell widths, but that also increases the widths of the table columns above, which is not great either. You can see the effect in the previous screenshot.

Tabular Model Explorer (TME), introduced with the August release of SSDT, helps to alleviate some of these shortcomings because TME displays all metadata objects in a sortable treeview, including measures and KPIs. We are also planning to add drag-and-drop as well as copy-and-paste operations in a future release. The measure grid, on the other hand, might not see the same improvements because we are considering replacing it in the midterm. In the meantime, however, we do want to address your valuable feedback. So the October release of SSDT Tabular includes some very targeted improvements to deliver a more user-friendly measure grid experience. Check out the screenshot below. As you can see, the grid now adjusts the cell height and width automatically to avoid clipping the measure names, making it easier to navigate through the measures without affecting the widths of the table columns above too much.

New Measure Grid UI

Of course, this is only a small improvement, but the big question is whether you’d like us to continue improving the measure grid or would rather have us replace it with a completely different and hopefully better alternative. Please don’t hesitate to let us know. We want to guide our investments in SSDT Tabular based on what will help you be the most productive and help you deliver great solutions to your customers. While it will take some time to deliver on all of the feedback and feature requests, we will make updates each month and work through the backlog in priority order based on your input. Your feedback is essential for making SSDT Tabular better, be it for new features, existing features, or entirely missing capabilities. So send us your suggestions on UserVoice or MSDN forums and influence the evolution of SSDT Tabular to the benefit of all our customers. Thank you in advance for taking the time to provide input, and stay tuned for the upcoming monthly releases of SSDT Tabular with even more exciting improvements!

Improving Analysis Services Performance and Scalability with SQL Server 2016 Service Pack 1


SQL Server 2016 Analysis Services delivered numerous performance improvements over previous releases, such as better modeling performance thanks to the new 1200 compatibility level for tabular databases, better processing performance for tables with multiple partitions thanks to parallel partition processing, and better query performance thanks to additional DAX functions that help to optimize the client/server communication. And with SQL Server 2016 Service Pack 1 (SP1), Analysis Services can deliver even more performance and scalability improvements through NUMA awareness and optimized memory allocation based on Intel Threading Building Blocks (Intel TBB), helping customers to lower Total Cost of Ownership (TCO) by supporting more users on fewer, more powerful enterprise servers.

SQL Server 2016 SP1 Analysis Services features improvements in these key areas:

  • NUMA awareness – For better NUMA support, the in-memory (VertiPaq) engine inside Analysis Services SP1 maintains a separate job queue on each NUMA node. This means that segment scan jobs run on the same node where the memory for the segment data is allocated. Note that NUMA awareness is enabled by default only on systems with at least four NUMA nodes. On two-node systems, the costs of accessing remotely allocated memory generally don’t warrant the overhead of managing NUMA specifics.
  • Memory allocation – Analysis Services SP1 uses an Intel TBB-based scalable allocator that provides separate memory pools for every core. As the number of cores increases, the system can scale almost linearly.
  • Heap fragmentation – The Intel TBB-based scalable allocator is also expected to help mitigate performance problems due to heap fragmentation that have been shown to occur with the Windows Heap. For more information, see the Intel TBB product brief at https://software.intel.com/sites/products/collateral/hpc/tbb/Intel_tbb4_product_brief.pdf.
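
The NUMA-awareness point above can be pictured with a small sketch. The Python below is a toy illustration of the scheduling idea only, with invented structures (it is not the actual engine code): each node owns its own job queue, and a segment scan is routed to the node that allocated the segment’s memory, so the scan reads local rather than remote memory:

```python
# Toy model of per-NUMA-node job queues. A segment-scan job is queued
# on the node that owns the segment's memory, so the scanning thread
# touches local memory. Purely illustrative; names are invented.
from collections import deque

NUM_NODES = 4  # NUMA awareness is enabled by default at 4+ nodes
queues = {node: deque() for node in range(NUM_NODES)}

def enqueue_scan(segment_id, owning_node):
    # Route the scan to the queue of the node where the segment
    # data was allocated (memory locality).
    queues[owning_node].append(("scan", segment_id))

enqueue_scan(segment_id=7, owning_node=2)
enqueue_scan(segment_id=8, owning_node=2)
enqueue_scan(segment_id=9, owning_node=0)

print(len(queues[2]))  # 2: both node-2 segments queued locally
```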

Microsoft internal performance and scalability testing shows significant gains in query throughput when running SQL Server 2016 SP1 Analysis Services on large multi-node enterprise servers in comparison to previous Analysis Services versions. Please note that results may vary depending on your specific data and workload characteristics.

Download SQL Server 2016 SP1 from the Microsoft Download Center at https://www.microsoft.com/en-us/download/details.aspx?id=54276 and see for yourself how you can scale with Analysis Services SP1. Also, be sure to stay tuned for more blog posts and white papers covering these exciting performance and scalability improvements in more detail.

How SQL Server 2016 and Power BI empower Mediterranean Shipping Company (MSC)


Here’s a great case study about Mediterranean Shipping Company optimizing their business processes by taking advantage of the SQL Server 2016 Database Engine, Analysis Services, and Power BI. If you are particularly interested in Analysis Services, you might enjoy reading the section about complying with new Safety of Life at Sea Convention (SOLAS) requirements, which meant that MSC must monitor, in real time, about 130,000 to 140,000 containers per week moving around the world across every ocean carrier, every shipper, and every terminal; Analysis Services Tabular provided the foundation! If you are interested in Power BI, read how MSC depends on the integration of Analysis Services with Power BI and note that the use of (quote) “Power BI is exploding at MSC.” And if you are interested in all the technologies working together to unlock tremendous performance gains and other benefits, just read the whole story top to bottom. Enjoy!


Introducing a Modern Get Data Experience for SQL Server vNext on Windows CTP 1.1 for Analysis Services


Starting with SQL Server vNext on Windows CTP 1.1, Analysis Services features a modern connectivity stack similar to the one that users already appreciate in Microsoft Excel and Power BI Desktop. You will be able to connect to an enormous list of data sources, ranging from various file types and on-premises databases through Azure sources and other online services all the way to Big Data systems. You can perform data transformations and mashups directly in a Tabular model. You can also add data connections and M queries to a Tabular model programmatically by using the Tabular Object Model (TOM) and the Tabular Model Scripting Language (TMSL). The modern Get Data experience adds exciting data access, transformation, and enrichment capabilities to Tabular models.
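
As a rough illustration of what a programmatic definition looks like, the sketch below assembles a TMSL-style JSON fragment for a structured data source and a table whose partition is an M expression. The property names here are simplified and illustrative; consult the TMSL reference for the authoritative schema:

```python
import json

# Illustrative TMSL-style fragment: a structured (modern) data source
# plus a table whose single partition is defined by an M expression
# referring to that data source object. Property names simplified.
data_source = {
    "type": "structured",
    "name": "AS_AdventureWorksDW",
    "connectionDetails": {
        "protocol": "tds",
        "address": {"server": "localhost", "database": "AdventureWorksDW"},
    },
}

table = {
    "name": "FactInternetSales",
    "partitions": [{
        "name": "FactInternetSales",
        "source": {
            "type": "m",
            "expression": ('let Source = AS_AdventureWorksDW, '
                           't = Source{[Schema="dbo",Item="FactInternetSales"]}[Data] '
                           'in t'),
        },
    }],
}

script = json.dumps({"dataSources": [data_source], "tables": [table]}, indent=2)
print(script)
```

Generating the JSON this way makes it easy to keep many M partitions in sync with a single centrally defined data source object.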

Taking a First Glance

In sync with the SQL Server vNext CTP 1.1 release, the December release of SSDT 17.0 RC2 for SQL Server vNext CTP 1.1 Analysis Services (SSDT Tabular) ships with a preview of the modern Get Data experience. You don’t necessarily need to deploy a CTP 1.1 instance of Analysis Services, because Integrated workspace mode in SSDT Tabular relies on and includes the same Analysis Services engine; you can use it to take a quick look at the new connectivity stack. To learn more about Integrated workspace mode, check out the blog article Introducing Integrated Workspace Mode for SQL Server Data Tools for Analysis Services Tabular Projects (SSDT Tabular).

Note that this SSDT Tabular release for CTP 1.1 is an early preview for evaluating the vNext capabilities of Analysis Services delivered with the 1400 compatibility level. It is not supported in production environments. Also, install only the Analysis Services component, not the Reporting Services and Integration Services components. Note also that upgrades from previous SSDT versions are not supported: either install on a newly installed computer or VM, or uninstall any previous versions first. Finally, work only with Tabular 1400 models using this preview version of SSDT. For Multidimensional as well as Tabular 1100, 1103, and 1200 models, use SSDT version 16.5.

After downloading and installing the December release of SSDT that supports SQL Server vNext CTP 1.1, create a new Analysis Services Tabular Project. In the Tabular Model Designer dialog, make sure you select the SQL Server vNext (1400) compatibility level. The modern Get Data experience is only available at compatibility level 1400. Tabular 1200 models continue to use the legacy connectivity stack available with SQL Server 2016 and previous releases.

tabularmodeldesigner

Figure 1   Creating a Tabular 1400 model to use the modern Get Data experience

Note: If you are using a previous version of Analysis Services as your workspace server or a previous version of SSDT Tabular in integrated workspace mode, you will not be able to create Tabular 1400 models or use the modern Get Data experience.

Once you’ve created a Tabular 1400 model, click the Model menu or right-click on Data Sources in Tabular Model Explorer and then click Import from Data Source. In Tabular Model Explorer, you can also click New Data Source. The difference between these two commands is that Import from Data Source leads you through both the definition of a data source and the import of data into one or multiple tables, while the New Data Source command only creates a new data source definition. In a subsequent step, you would right-click the resulting data source object and choose Import New Tables. Either way, the two commands display the same Get Data dialog box similar to the version you see in Power BI Desktop.

getdatadlg

Figure 2   Importing data into a Tabular 1400 model through the modern Get Data experience

Don’t be disappointed when you see a rather short list of data sources in the Get Data dialog box. CTP 1.1 is an early preview and exposes only a small set of tested options. Our plan for the SQL Server vNext release is to provide the same list of data sources that Power BI Desktop already supports, so the list will grow with subsequent CTPs.

The steps to create a data source are the same as in Power BI Desktop. However, an important difference is noticeable in the Query Editor window that appears when you import one or more tables from a data source. Apart from the fact that the Query Editor window features a toolbar consistent with the Visual Studio user interface instead of a collection of ribbons, you might notice that the Merge Queries and Append Queries commands are missing. These commands will become available in a subsequent CTP when SSDT implements full support for shared queries.

queryeditor

Figure 3   The Query Editor dialog box in SSDT Tabular when importing tables into a Tabular 1400 model

For now, each table you choose to import in the Navigator window translates into an individual query in the Query Editor window, which will result in a corresponding table in the 1400 model when you click Import in the Query Editor toolbar. Of course, you can define data transformation steps prior to importing the data, such as splitting columns, hiding columns, changing data types, and so on. Or, click the Advanced Editor button (right next to Import on the toolbar) to display the Advanced Editor window, which lets you modify the import query in an unconstrained way based on the M query language. You can resize and maximize the Query Editor and Advanced Editor windows if necessary. Just be careful with advanced query authoring because SSDT does not yet capture all possible query errors. For the CTP 1.1 preview, a better approach might be to create and test advanced queries in Power BI Desktop and then paste the results into the Advanced Editor window in SSDT Tabular.

advancedmashup

Figure 4   The Advanced Editor window is available to define advanced M queries

If you choose to copy queries from Power BI Desktop, note how the Source statement in Figure 4 refers to the AS_AdventureWorksDW data source object defined in the Tabular model. Instead of referring to the source directly by using a statement such as Source = Sql.Databases("<Name of SQL Server>"), M queries in Analysis Services can refer to a data source by using a statement such as Source = <Name of Data Source Object>. It’s relatively straightforward to adjust this line after pasting a Power BI Desktop query into the Advanced Editor window.

Referring to data source objects helps to centralize data source settings for multiple queries and simplifies deployments and maintenance if data source definitions must be updated later on. When updating a data source definition, all M queries that refer to it automatically use the new settings.

Of course, you can also edit the M query of a table after the initial import. Just display the table properties by clicking on Table Properties in the Table menu or in the context menu of Tabular Model Explorer after right-clicking the table. In the CTP 1.1 preview, the Edit Table Properties dialog box immediately shows you the advanced view of the M query, but you can click on the Design button to launch the Query Editor window and apply changes more conveniently (see Figure 5). Just be cautious not to rename or remove any columns in the M source query at this stage. In the CTP 1.1 preview, SSDT doesn’t yet handle the remapping of source columns to table columns gracefully in tabular models. If you need to change the names, order, or number of columns, delete the table and recreate it from scratch or edit the TMSL code in the Model.bim file directly.

tableproperties

Figure 5   Editing an existing table in a Tabular 1400 model via Table Properties

One very useful scenario for editing an M source query without changing column mappings revolves around the definition of multiple partitions for a table. For example, by using the Table.Range M function, you can define a subset of rows for any given partition. Table 1 and Figure 6 show a partitioning scheme for the FactInternetSales table that relies on this function. You could also define entirely different M queries. As long as a partition’s M query adheres to the column mappings of the table, you are free to perform any transformations and pull in data from any data source defined in the model. Partitioning is an exclusive Analysis Services feature. It is not available in Excel or Power BI Desktop.

Table 1   A simple partitioning scheme for the AdventureWorks FactInternetSales table based on the Table.Range function

Partition: FactInternetSalesP1

let
    Source = AS_AdventureWorksDW,
    dbo_FactInternetSales = Source{[Schema="dbo",Item="FactInternetSales"]}[Data],
    #"Kept Range of Rows" = Table.Range(dbo_FactInternetSales,0,20000)
in
    #"Kept Range of Rows"

Partition: FactInternetSalesP2

let
    Source = AS_AdventureWorksDW,
    dbo_FactInternetSales = Source{[Schema="dbo",Item="FactInternetSales"]}[Data],
    #"Kept Range of Rows" = Table.Range(dbo_FactInternetSales,20000,20000)
in
    #"Kept Range of Rows"

Partition: FactInternetSalesP3

let
    Source = AS_AdventureWorksDW,
    dbo_FactInternetSales = Source{[Schema="dbo",Item="FactInternetSales"]}[Data],
    #"Kept Range of Rows" = Table.Range(dbo_FactInternetSales,40000,20000)
in
    #"Kept Range of Rows"

Partition: FactInternetSalesP4

let
    Source = AS_AdventureWorksDW,
    dbo_FactInternetSales = Source{[Schema="dbo",Item="FactInternetSales"]}[Data],
    #"Kept Range of Rows" = Table.Range(dbo_FactInternetSales,60000,20000)
in
    #"Kept Range of Rows"

partitioning

Figure 6   A simple partitioning scheme based on the Table.Range function
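
Since the four partition expressions in Table 1 differ only in their Table.Range offsets, they can be generated mechanically. The following Python sketch templates them out with simple string formatting; it is illustrative only, not output produced by SSDT:

```python
# Generate the Table.Range-based partition expressions of Table 1.
# Each partition covers 20,000 rows at a successive offset.
TEMPLATE = '''let
    Source = AS_AdventureWorksDW,
    dbo_FactInternetSales = Source{{[Schema="dbo",Item="FactInternetSales"]}}[Data],
    #"Kept Range of Rows" = Table.Range(dbo_FactInternetSales,{offset},{count})
in
    #"Kept Range of Rows"'''

ROWS_PER_PARTITION = 20000

def partition_expressions(num_partitions):
    # Map partition name -> M expression with the right row window.
    return {
        f"FactInternetSalesP{i + 1}": TEMPLATE.format(
            offset=i * ROWS_PER_PARTITION, count=ROWS_PER_PARTITION
        )
        for i in range(num_partitions)
    }

for name, expr in partition_expressions(4).items():
    print(name)
    print(expr)
```

The generated strings could then be pasted into the partition definitions, or embedded in a TMSL script, instead of editing each partition by hand.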

Upgrading a Tabular Model to the 1400 Compatibility Level

The modern Get Data experience is one of the key features of the 1400 compatibility level. Others are support for ragged hierarchies and detail rows. As long as your workspace server is at the CTP 1.1 level, you can upgrade Tabular 1200 models to 1400 in SSDT by changing the Compatibility Level in the Properties window, as illustrated in Figure 7. Just remember to take a backup of your Tabular project prior to the upgrade because the compatibility level cannot be downgraded afterwards.

upgrade

Figure 7  Upgrading a Tabular 1200 model to the 1400 compatibility level. Downgrade is not supported.

If you are planning to upgrade a Tabular 1103 (or earlier) model to 1400, make sure you upgrade first to the 1200 compatibility level. In the CTP 1.1 preview, SSDT is not yet able to upgrade these older models to 1400 directly. Like all other known issues, we plan to address this in one of the next preview releases. Also, be sure to see the Known Issues in CTP 1.1 section later in this article.

Working with Legacy and Modern Data Sources

By default, SSDT creates modern data source definitions in Tabular 1400 models. On the other hand, if you upgrade a 1200 model, the existing data source definitions remain unchanged. For these existing data source definitions, known as provider data sources, SSDT currently continues to show the legacy user interface. However, the plan is to replace the legacy interface with the modern Get Data experience. Furthermore, importing new tables from an existing provider data source brings up the legacy user interface. Importing from a modern data source brings up the modern Get Data experience.

In the CTP 1.1 preview specifically, you can configure SSDT to enable the legacy user interface even for creating new data sources by setting a DWORD registry parameter called Enable Legacy Import to a value of 1, as in the following Registration Entries (.reg) file. This might be useful if you only want to try out certain Tabular 1400-specific features, such as detail rows, without switching to modern data source definitions. After setting the Enable Legacy Import parameter to 1, you can find additional commands in the data source context menu in Tabular Model Explorer. You can use these commands to create and manage provider data sources (see Figure 8). Setting this parameter to any value other than 1, or removing it altogether, disables these additional commands again.

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Microsoft SQL Server\14.0\Microsoft Analysis Services\Settings]

"Enable Legacy Import"=dword:00000001

legacyimport

Figure 8   Enabling legacy data import commands in the CTP 1.1 preview release of SSDT Tabular

Regardless of the user interface, Table 2 lists the various data connectivity related objects that can coexist in a Tabular 1400 model. Ideally, you can mix and match any data source type with any partition source type, but there are limitations in the CTP 1.1 preview. For example, it should be possible to create a partition with an M expression over an existing provider data source. This does not work yet. Equally, it should be possible to have a partition with a native query over a modern data source. This can be accomplished programmatically or in TMSL, but processing such a query partition fails in SSDT with an error stating the data source is of an unsupported type for determining the connection string. This is an issue in the December 2016 release of SSDT Tabular, but processing succeeds in SSMS (see the Working with a Tabular 1400 Model in SSMS section later in this article). For the CTP 1.1 preview, we recommend you use query partitions over legacy (provider) data sources and M partitions over modern (structured) data sources. In a later preview release, you will be able to mix and match these resources more freely so you don’t have to create redundant data source definitions for models that contain both query and M partitions.

Table 2   Data Source and corresponding partition types supported in Tabular 1400 models

Level         | Data Source Type       | Partition Type  | Source Query Type
1200 and 1400 | Provider Data Source   | Query Partition | Native Query, such as T-SQL
1400 only     | Structured Data Source | M Partition     | M Expression
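For reference, the serialized form of an M partition in the model metadata might look like the following sketch (the data source name AS_AdventureWorksDW and the table name are illustrative, following the examples later in this article):

```json
{
  "name": "FactInternetSales",
  "source": {
    "type": "m",
    "expression": "let Source = AS_AdventureWorksDW, dbo_FactInternetSales = Source{[Schema=\"dbo\",Item=\"FactInternetSales\"]}[Data] in dbo_FactInternetSales"
  }
}
```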

Working with a Tabular 1400 Model in SSMS

SQL Server Management Studio (SSMS) does not yet provide a user interface for the modern Get Data experience, but don’t let that stop you from managing your Tabular 1400 models. Although you cannot yet change the settings of a modern data source in the Connection Properties dialog box or conveniently manage partitions for a table, you can script out the desired objects and apply your changes in the TMSL code (be sure to also read the Working with TOM and TMSL section later in this article). Just right-click the desired object, such as a modern data source, click Script Connection as, and then choose any applicable option, such as Create or Replace To a New Query Editor Window, as shown in Figure 9.


Figure 9   Scripting out a modern data source

You can also script out tables and roles, process the database or individual tables, and perform any other management actions as you would for Tabular 1200 models.
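For example, a TMSL refresh command that fully processes a single table could look like the following sketch (the database and table names are placeholders; adjust them to your deployment):

```json
{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "AdventureWorksDW",
        "table": "FactInternetSales"
      }
    ]
  }
}
```

You can execute such a script in SSMS through an XMLA query window connected to the Analysis Services instance.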

Working with TOM and TMSL

In addition to the metadata objects you already know from Tabular 1200 models, 1400 models introduce three important new object types: StructuredDataSource, MPartitionSource, and NamedExpression. The StructuredDataSource type defines the properties that describe a modern data source. MPartitionSource takes an M expression as the source query and can be assigned to the Source property of a partition. And, NamedExpression is a class to define shared queries. SSDT does not yet support shared queries, but the AS engine and TOM already do. Creating and using shared queries programmatically is going to be the subject of a separate article.

Editing the Model.bim file

Whenever you cannot perform a desired action in the user interface of SSDT, consider switching to Code View and performing the action at the TMSL level. For example, SSDT does not yet support renaming of modern data sources. If you don’t find the default name assigned to a data source intuitive, such as Query1, switch to Code View, and then perform a Find and Replace operation. Keep in mind that expressions in M partition sources refer to modern data sources by name, so don’t forget to update these expressions together with the data source name. Figure 10 shows an example. Also, as always, make sure you first back up the Model.bim file before editing it manually.


Figure 10   Updating the data source reference in an M expression

After changing data source properties and affected M expressions, switch back to Designer View and process the affected tables to ensure the model is still in a consistent state. If you receive an error stating “The given credential is missing a required property. Data source kind: SQL. Authentication kind: UsernamePassword. Property name: Password. The exception was raised by the IDbConnection interface.”, you could switch back to Code View and provide the missing password, although it is usually easier to use the user interface via the Edit Permissions command on the data source object in Tabular Model Explorer. If you prefer the Code View, use the following TMSL code as a reference to provide the missing password for a modern (structured) data source.

{
    "type": "structured",
    "name": "AdventureWorks2014DWSDS",
    "connectionDetails": {
        "protocol": "tds",
        "address": {
            "server": "<Server Name>",
            "database": "AdventureWorksDW2014"
        },
        "authentication": null,
        "query": null
    },
    "credential": {
        "AuthenticationKind": "UsernamePassword",
        "kind": "SQL",
        "path": "<Server Name>",
        "Username": "<User>",
        "Password": "<Password>",
        "EncryptConnection": true
    }
}

Note: For security reasons, Analysis Services does not return sensitive information such as passwords when scripting out a Tabular model or tracing commands and responses in SQL Profiler. Even though you don’t see the password, the server may have it and can perform processing successfully. You only need to provide the password if an error message informs you that it is missing.

Working with Tabular 1400 models programmatically

If you want to work with modern data sources and M partitions programmatically, you need to use the CTP 1.1 version of Analysis Management Objects (AMO). The AMO libraries are part of the SQL Server Feature Pack, yet a Feature Pack for CTP 1.1 is not available. As a workaround for CTP 1.1, you can use the server version of the AMO libraries, Microsoft.AnalysisServices.Server.Core.dll, Microsoft.AnalysisServices.Server.Tabular.dll, and Microsoft.AnalysisServices.Server.Tabular.Json.dll. These libraries are included with SSDT. By default, these libraries are located in the C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\PrivateAssemblies\Business Intelligence Semantic Model\LocalServer folder. However, you cannot redistribute these libraries with your client application. For CTP 1.1, this means that your code can only run on a machine with SSDT installed, which should suffice for a first evaluation of the TOM objects for the modern Get Data experience.

Figure 11 shows a sample application that creates a Tabular 1400 model on a server running SQL Server vNext CTP 1.1 Analysis Services. It uses StructuredDataSource and MPartitionSource objects to add a modern data source and an M partition to the model. See the attachment to this article for the full sample code. The ConnectionDetails and Credential properties that you must set for the StructuredDataSource object are not yet documented, but you can glean examples for these strings from a Model.bim file that contains a modern data source. The MPartitionSource object on the other hand takes an M query in its Expression property. As explained earlier in this article, make sure the M query refers to a data source defined in the model by name.


Figure 11   Creating a Tabular 1400 model with a modern data source and a table based on an M partition source programmatically.

Known Issues in CTP 1.1

SQL Server vNext CTP 1.1 provides an early preview of the modern Get Data experience. It is not fully tested and not supported in production environments. The following are known issues in CTP 1.1 Analysis Services and the corresponding SSDT release:

  • The SSDT Tabular release for CTP 1.1 is an early preview for evaluating the vNext capabilities of Analysis Services. It is not supported in production environments and must be installed without the Reporting Services and Integration Services components. Upgrades from previous SSDT versions are not supported. Either install on a newly installed computer or VM or uninstall any previous versions first. Also, only work with Tabular 1400 models. For Multidimensional as well as Tabular 1100, 1103, and 1200 models, use SSDT version 16.5.
  • SSDT does not yet support all required operations on modern data sources and M partitions through the user interface. For example, you cannot yet rename data source objects or change the column mappings for a table, and it’s not yet possible to define shared mashups through the user interface. You must edit the Model.bim file manually.
  • SSMS can script out Tabular 1400 models and individual objects, but the user interface is not yet 1400 aware. For example, you cannot manage partitions if the model contains a structured data source, and you cannot change the settings of a modern data source through the Connection Properties dialog box. You must script out these objects and apply the changes at the TMSL level.
  • Creating a new tabular project in SSDT by using the option to Import from Server (Tabular) does not work. You get an error message stating the model is not recognized as compatible with SQL Server 2012 or higher. You can script out the database in SSMS and copy the TMSL code into the Model.bim file of an empty Tabular project created from scratch.
  • Erroneous M queries and changes to M queries that affect the column mapping of an existing table after the initial import may cause SSDT Tabular to become unresponsive. If you must change the column mapping, delete and recreate the table.
  • Tables with M partition sources don’t work over legacy (provider) data sources. You must use modern data sources for these tables.
  • Tables with query partition sources don’t fully work over modern data sources. SSDT cannot process these tables. You must process these tables in SSMS or programmatically.
  • Processing individual partitions does not succeed. Process the full model or the table.
  • Direct upgrades of Tabular 1103 or earlier models to the 1400 compatibility level do not finish successfully. You must first upgrade these models to the 1200 compatibility level and then perform the upgrade to 1400.
  • DirectQuery mode is not yet supported at the 1400 compatibility level. To preview the modern Get Data experience, you must import the data into the Tabular model.
  • Out-Of-Line Bindings are not yet supported. It’s not possible to override a structured data source or M partition source on a request basis in a Tabular 1400 model yet.
  • All modern data sources are considered private data sources to avoid disclosing sensitive or confidential information. A private data source is completely isolated from other data sources. The privacy settings for data sources cannot be changed in CTP 1.1.
  • Impersonation options such as ImpersonateWindowsUserAccount are not yet supported for modern data sources. You must specify credentials explicitly when defining the data source.
  • Localization is not supported. CTP 1.1 is available in English (US) only.

Give us Feedback

Your feedback is critical for delivering a high-quality product! Deploy SQL Server vNext CTP 1.1 and the December 2016 release of SSDT Tabular in a lab environment or on a virtual machine in Azure and let us know what you think. Report issues and send us your suggestions to SSASPrev here at Microsoft.com. Or use any other available communication channels such as UserVoice or MSDN forums. You can influence the evolution of the Analysis Services connectivity stack to the benefit of all our customers.

 

What’s new for SQL Server vNext on Windows CTP 1.1 for Analysis Services

The public CTP 1.1 of SQL Server vNext on Windows is available here! This public preview includes the following enhancements for Analysis Services tabular.

  • New infrastructure for data connectivity and ingestion into tabular models with support for TOM APIs and TMSL scripting. This enables:
    • Support for additional data sources, such as MySQL. Additional data sources are planned in upcoming CTPs.
    • Data transformation and data mashup capabilities.
  • Support for drill-down to detailed data from an aggregated report in BI tools such as Microsoft Excel. For example, when end users view total sales for a region and month, they can view the associated order details.
  • Enhanced support for ragged hierarchies such as organizational charts and chart of accounts.
  • Enhanced security for tabular models, including the ability to set permissions to help secure individual tables.
  • DAX enhancements to make DAX more accessible and powerful. These include the IN operator and table/row constructors.

New 1400 Compatibility Level

SQL Server vNext CTP 1.1 for Analysis Services introduces the 1400 compatibility level for tabular models. To benefit from the new features for models at the 1400 compatibility level, you’ll need to download and install the December release of SSDT for CTP 1.1. In SSDT, you can select the new 1400 compatibility level when creating new tabular model projects. Models at the 1400 compatibility level cannot be deployed to SQL Server 2016 or earlier, or downgraded to lower compatibility levels.

[Screenshot: selecting the 1400 compatibility level for a new tabular model project]

Note that this SSDT Tabular release for CTP 1.1 is an early preview for evaluating the vNext capabilities of Analysis Services delivered with the 1400 compatibility level. It is not supported in production environments. Install only the Analysis Services components, not the Reporting Services and Integration Services components. Either install on a newly installed computer or VM, or uninstall any previous versions first. Also, only work with Tabular 1400 models using this preview version of SSDT. For Multidimensional as well as Tabular 1100, 1103, and 1200 models, use SSDT version 16.5.
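For reference, the compatibility level is stored at the database level of the Model.bim metadata. A minimal sketch (the project name and model contents are illustrative) looks like this:

```json
{
  "name": "TabularProject1",
  "compatibilityLevel": 1400,
  "model": {
    "culture": "en-US",
    "tables": []
  }
}
```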

New Infrastructure for Data Connectivity

The CTP 1.1 release introduces a new infrastructure for data connectivity and ingestion into tabular models, with support for TOM APIs and TMSL scripting. It is based on similar functionality in Power BI Desktop and Microsoft Excel 2016. There is a lot of information on this topic, so we have created a separate blog post here.

Detail Rows

A much-requested feature for tabular models is the ability to define a custom row set contributing to a measure value. Multidimensional models already achieve this by using the default drillthrough action. This allows end-users to view information in more detail than the aggregated level.

For example, the following PivotTable shows Internet Total Sales by year from the Adventure Works sample tabular model. Users can right-click the cell for 2010 and then select the Show Details menu option to view the detail rows.

[Screenshot: Show Details menu option in an Excel PivotTable]

By default, the associated data in the Internet Sales table is displayed. This behavior is often not meaningful to users because the table may not have the necessary columns to show useful information such as customer name and order information.

Detail Rows Expression Property for Measures

CTP 1.1 introduces the Detail Rows Expression property for measures. It allows the modeler to customize the columns and rows returned to the end user.

[Screenshot: Detail Rows Expression property for a measure]

It is anticipated the SELECTCOLUMNS DAX function will be commonly used for the Detail Rows Expression. The following example defines the columns to be returned for rows in the Internet Sales table.

SELECTCOLUMNS(
    'Internet Sales',
    "Customer First Name", RELATED(Customer[First Name]),
    "Customer Last Name", RELATED(Customer[Last Name]),
    "Order Date", 'Internet Sales'[Order Date],
    "Internet Total Sales", [Internet Total Sales]
)

With the property defined and the model deployed, the custom row set is returned when the user selects Show Details. It automatically honors the filter context of the cell that was selected. In this example, only the rows for the year 2010 are displayed.

[Screenshot: detail rows returned by Show Details]

Default Detail Rows Expression Property for Tables

In addition to measures, tables also have a property to define a detail rows expression. The Default Detail Rows Expression property acts as the default for all measures within the table. Measures that do not have their own expression defined will inherit the expression from the table and show the row set defined for the table. This allows reuse of expressions, and new measures added to the table later will automatically inherit the expression.

[Screenshot: Default Detail Rows Expression property for a table]

DETAILROWS DAX Function

The DETAILROWS DAX function has been added in CTP 1.1. The following DAX query returns the row set defined by the detail rows expression for the measure (or for its table). If no expression is defined, the data for the Internet Sales table is returned because it is the table containing the measure.

EVALUATE DETAILROWS([Internet Total Sales])

MDX DRILLTHROUGH statements (without a RETURN clause) are also compatible with detail rows expressions defined in tabular models.
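For example, an MDX client could issue a statement along these lines and receive the row set defined by the detail rows expression (a sketch; the cube name [Model] and member names are illustrative):

```mdx
DRILLTHROUGH MAXROWS 1000
SELECT { [Measures].[Internet Total Sales] } ON COLUMNS
FROM [Model]
WHERE ( [Date].[Calendar Year].&[2010] )
```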

Ragged Hierarchies

As described in this article, Analysis Services tabular models can be used to model parent-child hierarchies. Hierarchies with a differing number of levels are referred to as ragged hierarchies. An example of a ragged hierarchy is an organizational chart. By default, ragged hierarchies are displayed with blanks for levels below the lowest child. This can look untidy to users, as shown by this organizational chart in Adventure Works:

[Screenshot: ragged hierarchy displayed with blank members]

CTP 1.1 introduces the Hide Members property to correct this. Simply set the Hide Members property on the hierarchy to Hide blank members.

[Screenshot: Hide Members property set to Hide blank members]

Note: It is necessary that the blank members in the model are represented by a DAX blank value, not an empty string.

With the property set and the model deployed, the more presentable version of the hierarchy is displayed.

[Screenshot: ragged hierarchy with blank members hidden]

Table-Level Security

Roles in tabular models already support a granular list of permissions and row-level filters to help protect sensitive data. Further information is available here.

CTP 1.1 builds on this by introducing table-level security. In addition to restricting access to the data itself, sensitive table names can be protected. This helps prevent a malicious user from discovering that such a table exists.

The current version requires that a whole table’s metadata, and therefore all its columns, is set to be protected. Additionally, table-level security must be set using the JSON-based metadata, Tabular Model Scripting Language (TMSL), or Tabular Object Model (TOM).

The following snippet of JSON-based metadata from the Model.bim file helps secure the Product table in the Adventure Works sample tabular model by setting the MetadataPermission property of the TablePermission class to None.

"roles": [
  {
    "name": "Users",
    "description": "All allowed users to query the model",
    "modelPermission": "read",
    "tablePermissions": [
      {
        "name": "Product",
        "metadataPermission": "none"
      }
    ]
  }
]
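The same permission could presumably also be set programmatically through TOM. A hedged sketch, assuming a TOM.Database object named tabularDB and a role named Users that already exist in the model:

```csharp
// Sketch: deny metadata access to the Product table for the Users role.
// Role and table names are taken from the TMSL example and are illustrative.
var role = tabularDB.Model.Roles["Users"];
role.TablePermissions.Add(new TOM.TablePermission()
{
    Table = tabularDB.Model.Tables["Product"],
    MetadataPermission = TOM.MetadataPermission.None
});
```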

DAX Enhancements

CTP 1.1 adds support for the IN operator in DAX expressions. The T-SQL IN operator is commonly used to specify multiple values in a WHERE clause, so this addition feels natural to SQL Server database developers.

Prior to CTP 1.1, it was common to specify multi-value filters using the logical OR operator (||) or the OR function. Consider the following measure definition.

Filtered Sales:=CALCULATE(
    [Internet Total Sales],
    'Product'[Color] = "Red"
 || 'Product'[Color] = "Blue"
 || 'Product'[Color] = "Black"
)

This is simplified using the IN operator.

Filtered Sales:=CALCULATE(
    [Internet Total Sales], 'Product'[Color] IN { "Red", "Blue", "Black" }
)

In this case, the IN operator refers to a single-column table with 3 rows; one for each of the specified colors. Note the table constructor syntax using curly braces.

The IN operator is functionally equivalent to the CONTAINSROW function.

Filtered Sales:=CALCULATE(
    [Internet Total Sales], CONTAINSROW({ "Red", "Blue", "Black" }, 'Product'[Color])
)

We hope you will agree the IN operator used with table constructors is a great enhancement to the DAX language. MDX veterans should be jumping out of their seats with excitement at this point. The curly-braces syntax should also feel natural to programmers of C-based languages like C#, and to Excel practitioners who use arrays. But wait, there’s more …

Consider the following measure to filter by combinations of product color and category.

Filtered Sales:=CALCULATE(
    [Internet Total Sales],
    FILTER( SUMMARIZE( ALL(Product), Product[Color], Product[Product Category Name] ),
        ( 'Product'[Color] = "Red"   && Product[Product Category Name] = "Accessories" )
     || ( 'Product'[Color] = "Blue"  && Product[Product Category Name] = "Bikes" )
     || ( 'Product'[Color] = "Black" && Product[Product Category Name] = "Clothing" )
    )
)

Wouldn’t it be great if we could use table constructors, coupled with row constructors, to simplify this? In CTP 1.1, we can! The above measure is equivalent to the one below.

Filtered Sales:=CALCULATE(
    [Internet Total Sales],
    FILTER( SUMMARIZE( ALL(Product), Product[Color], Product[Product Category Name] ),
        ('Product'[Color], Product[Product Category Name]) IN
        { ( "Red", "Accessories" ), ( "Blue", "Bikes" ), ( "Black", "Clothing" ) }
    )
)

Lastly, it is worth pointing out that table and row constructors are independent of the IN operator. They are simply DAX table expressions. Consider the following DAX query.

EVALUATE
UNION(
    ROW(
        "Value1", "Red Product Sales",
        "Value2", CALCULATE([Internet Total Sales], 'Product'[Color] = "Red")
    ),
    ROW(
        "Value1", "Blue Product Sales",
        "Value2", CALCULATE([Internet Total Sales], 'Product'[Color] = "Blue")
    ),
    ROW(
        "Value1", "Total",
        "Value2", CALCULATE([Internet Total Sales], 'Product'[Color] IN { "Red", "Blue" })
    )
)

In CTP 1.1, it can be more simply expressed like this:

EVALUATE
{
    ("Red Product Sales",  CALCULATE([Internet Total Sales], 'Product'[Color] = "Red")),
    ("Blue Product Sales", CALCULATE([Internet Total Sales], 'Product'[Color] = "Blue")),
    ("Total",              CALCULATE([Internet Total Sales], 'Product'[Color] IN { "Red", "Blue" }))
}

Download Now!

To get started, download SQL Server vNext on Windows CTP 1.1 from here. SSDT for CTP 1.1 is available here. Be sure to keep an eye on this blog to stay up to date on Analysis Services.

Evaluating Shared Expressions in Tabular 1400 Models

In our December blog post, Introducing a Modern Get Data Experience for SQL Server vNext on Windows CTP 1.1 for Analysis Services, we mentioned that SSDT Tabular does not yet support shared expressions, but the CTP 1.1 Analysis Services engine already does. So, how can you get started using this exciting new enhancement to Tabular models now? Let’s take a look.

With shared expressions, you can encapsulate complex or frequently used logic through parameters, functions, or queries. A classic example is a table with numerous partitions. Instead of duplicating a source query with minor modifications in the WHERE clause for each partition, the modern Get Data experience lets you define the query once as a shared expression and then use it in each partition. If you need to modify the source query later, you only need to change the shared expression, and all partitions that refer to it automatically pick up the changes.

In a forthcoming SSDT Tabular release, you’ll find an Expressions node in Tabular Model Explorer, which will contain all your shared expressions. However, if you want to evaluate this capability now, you’ll have to create your shared expressions programmatically. Here’s how:

  1. Create a Tabular 1400 model by using the December release of SSDT 17.0 RC2 for SQL Server vNext CTP 1.1 Analysis Services. Remember that this is an early preview. Install only the Analysis Services components, not the Reporting Services and Integration Services components. Don’t use this version in a production environment. Install fresh; don’t attempt to upgrade from previous SSDT versions. Only work with Tabular 1400 models using this preview version. For Multidimensional as well as Tabular 1100, 1103, and 1200 models, use SSDT version 16.5.
  2. Modify the Model.bim file from your Tabular 1400 project by using the Tabular Object Model (TOM). Apply your changes programmatically and then serialize the changes back into the Model.bim file.
  3. Process the model in the preview version of SSDT Tabular. Just keep in mind that SSDT Tabular doesn’t yet know how to deal with shared expressions, so don’t attempt to modify the source query of a table or partition that relies on a shared expression, as SSDT Tabular may become unresponsive.

Let’s go through these steps in greater detail by converting the source query of a presumably large table into a shared query, and then defining multiple partitions based on this shared query. As an optional step, afterwards you can modify the shared query and evaluate the effects of the changes across all partitions. For your reference, download the Shared Expression Code Sample.

Step 1) Create a Tabular 1400 model

If you want to follow the explanations on your own workstation, create a new Tabular 1400 model as explained in Introducing a Modern Get Data Experience for SQL Server vNext on Windows CTP 1.1 for Analysis Services. Connect to an instance of the AdventureWorksDW database, and import among others the FactInternetSales table. A simple source query suffices, as in the following screenshot.

[Screenshot: source query for the FactInternetSales table]

Step 2) Modify the Model.bim file by using TOM

As you’re going to modify the Model.bim file of a Tabular project outside of SSDT, make sure you close the Tabular project at this point. Then start Visual Studio, create a new Console Application project, and add references to the TOM libraries as explained under “Working with Tabular 1400 models programmatically” in Introducing a Modern Get Data Experience for SQL Server vNext on Windows CTP 1.1 for Analysis Services.

The first task is to deserialize the Model.bim file into an offline database object. The following code snippet gets this done (you might have to update the bimFilePath variable). Of course, you can have a more elaborate implementation using OpenFileDialog and error handling, but that’s not the focus of this article.

string bimFilePath = @"C:\Users\Administrator\Documents\Visual Studio 2015\Projects\TabularProject1\TabularProject1\Model.bim";
var tabularDB = TOM.JsonSerializer.DeserializeDatabase(File.ReadAllText(bimFilePath));

The next task is to add a shared expression to the model, as the following code snippet demonstrates. Again, this is a bare-bones minimum implementation. The code will fail if an expression named SharedQuery already exists. You could check for its existence by using if (tabularDB.Model.Expressions.Contains("SharedQuery")) and skip the creation if it does.

tabularDB.Model.Expressions.Add(new TOM.NamedExpression()
{
    Kind = TOM.ExpressionKind.M,
    Name = "SharedQuery",
    Description = "A shared query for the FactInternetSales Table",
    Expression = "let\n"
        + "    Source = AS_AdventureWorksDW,\n"
        + "    dbo_FactInternetSales = Source{[Schema=\"dbo\",Item=\"FactInternetSales\"]}[Data]\n"
        + "in\n"
        + "    dbo_FactInternetSales",
});

Perhaps the most involved task is to remove the existing partition from the target (FactInternetSales) table and create the desired number of new partitions based on the shared expression. The following code sample creates 10 partitions and uses the Table.Range function to split the shared expression into chunks of up to 10,000 rows. This is a simple way to slice the source data. Typically, you would partition based on the values from a date column or other criteria.

tabularDB.Model.Tables["FactInternetSales"].Partitions.Clear();
for (int i = 0; i < 10; i++)
{
    tabularDB.Model.Tables["FactInternetSales"].Partitions.Add(new TOM.Partition()
    {
        Name = string.Format("FactInternetSalesP{0}", i),
        Source = new TOM.MPartitionSource()
        {
            Expression = string.Format("Table.Range(SharedQuery,{0},{1})", i * 10000, 10000),
        }
    });
}
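As noted, you would typically partition on a date column rather than row ranges. A hypothetical variation might look like the following sketch (the OrderDateKey column and the year values are illustrative, not taken from the sample; AdventureWorksDW-style date keys use the yyyymmdd integer format, so integer division by 10000 yields the year):

```csharp
// Sketch only: one partition per calendar year over the shared query.
// Adjust the column name, key format, and year range for your own schema.
for (int year = 2010; year <= 2013; year++)
{
    tabularDB.Model.Tables["FactInternetSales"].Partitions.Add(new TOM.Partition()
    {
        Name = string.Format("FactInternetSales{0}", year),
        Source = new TOM.MPartitionSource()
        {
            Expression = string.Format(
                "Table.SelectRows(SharedQuery, each Number.IntegerDivide([OrderDateKey], 10000) = {0})", year),
        }
    });
}
```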

The final step is to serialize the resulting Tabular database object with all the modifications back into the Model.bim file, as the following line of code demonstrates.

File.WriteAllText(bimFilePath, TOM.JsonSerializer.SerializeDatabase(tabularDB));

Step 3) Process the modified model in SSDT Tabular

Having serialized the changes back into the Model.bim file, you can open the Tabular project again in SSDT. In Tabular Model Explorer, expand Tables, FactInternetSales, and Partitions, and verify that 10 partitions exist, as illustrated in the following screenshot. Verify that SSDT can process the table by opening the Model menu, pointing to Process, and then clicking Process Table.

[Screenshot: FactInternetSales partitions in Tabular Model Explorer and the Process Table command]

You can also verify the query expression for each partition in Partition Manager. Just remember, however, that you must click the Cancel button to close the Partition Manager window. Do not click OK; with the December 2016 preview release, SSDT could become unresponsive.

Wrapping Things Up

Congratulations! Your FactInternetSales table now effectively uses a centralized source query shared across all partitions. You can modify the source query without having to update each individual partition. For example, you might decide to remove the ‘SO’ part from the values in the SalesOrderNumber column to get the order number in numeric form. The following screenshot shows the modified source query in the Advanced Editor window.

[Screenshot: modified source query in the Advanced Editor window]

Of course, you cannot edit the shared query in SSDT yet. But you could import the FactInternetSales table a second time and then edit the source query on that table. When you achieve the desired result, copy the M script into your TOM application to modify the shared expression accordingly. The following lines of code correspond to the screenshot above.

tabularDB.Model.Expressions["SharedQuery"].Expression = "let\n"
    + "    Source = AS_AdventureWorksDW,\n"
    + "    dbo_FactInternetSales = Source{[Schema=\"dbo\",Item=\"FactInternetSales\"]}[Data],\n"
    + "    #\"Split Column by Position\" = Table.SplitColumn(dbo_FactInternetSales,\"SalesOrderNumber\",Splitter.SplitTextByPositions({0, 2}, false),{\"SalesOrderNumber.1\", \"SalesOrderNumber\"}),\n"
    + "    #\"Changed Type\" = Table.TransformColumnTypes(#\"Split Column by Position\",{{\"SalesOrderNumber.1\", type text}, {\"SalesOrderNumber\", Int64.Type}}),\n"
    + "    #\"Removed Columns\" = Table.RemoveColumns(#\"Changed Type\",{\"SalesOrderNumber.1\"})\n"
    + "in\n"
    + "    #\"Removed Columns\"";

One final note of caution: If you remove columns in your shared expression that already exist on the table, make sure you also remove these columns from the table’s Columns collection to bring the table back into a consistent state.
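A minimal sketch of that cleanup in the same TOM application could look like this (the column name ObsoleteColumn is purely illustrative, standing in for whichever column you dropped from the shared expression):

```csharp
// Sketch: remove a column from the table metadata after it was dropped
// from the shared expression, to keep the model in a consistent state.
var table = tabularDB.Model.Tables["FactInternetSales"];
var obsolete = table.Columns.Find("ObsoleteColumn");
if (obsolete != null)
{
    table.Columns.Remove(obsolete);
}
```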

That’s about it on shared expressions for now. Hopefully in the not-so-distant future, you’ll be able to create shared parameters, functions, and queries directly in SSDT Tabular. Stay tuned for more updates on the modern Get Data experience. And, as always, please send us your feedback via the SSASPrev email alias here at Microsoft.com or use any other available communication channels such as UserVoice or MSDN forums. You can influence the evolution of the Analysis Services connectivity stack to the benefit of all our customers.

ASTrace on the Analysis Services Git Repo

The ASTrace utility captures an Analysis Services trace and logs it into a SQL Server table, which can be queried later or read using SQL Server Profiler. ASTrace runs as a Windows service that connects to Analysis Services, creates a trace, and logs trace events into the table in the SQL Server Profiler format. The trace is created from a standard trace template that you can author using SQL Server Profiler.

ASTrace is available in the Analysis Services Git repo.

Thanks to Karan Gulati and Greg Galloway (Artis Consulting).

Whitepaper and Code Sample for Automated Partition Management


Analysis Services tabular models can store data in a highly-compressed, in-memory cache for optimized query performance. This provides fast user interactivity over large data sets.

Large data sets normally require table partitioning to accelerate and optimize the data-load process. Partitioning enables incremental loads, increases parallelization, and reduces memory consumption. The Tabular Object Model (TOM) serves as an API to create and manage partitions. TOM was released with SQL Server 2016 and is discussed here. Model Compatibility Level 1200 is required.

The Automated Partition Management for Analysis Services Tabular Models whitepaper is available here. It describes how to use the AsPartitionProcessing TOM code sample with minimal code changes.

The sample,

  • Is intended to be generic and configuration driven.
  • Works for both Azure Analysis Services and SQL Server Analysis Services tabular models.
  • Can be leveraged in many ways including from an SSIS script task, Azure Functions and others.
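For orientation, the kind of operation the sample automates can be sketched in a few lines of TOM code. The following is a minimal, hypothetical example, not taken from the whitepaper: the server, database, table, data source, and partition names are all assumptions, and error handling is omitted.

```csharp
using Microsoft.AnalysisServices.Tabular;

// Connect to the tabular instance and locate the target table.
var server = new Server();
server.Connect("localhost"); // or an Azure AS URL such as asazure://...
Database db = server.Databases.GetByName("AdventureWorks");
Table sales = db.Model.Tables.Find("FactInternetSales");

// Add a partition covering a single month (names and query are illustrative).
sales.Partitions.Add(new Partition
{
    Name = "FactInternetSales_201701",
    Source = new QueryPartitionSource
    {
        DataSource = db.Model.DataSources["SqlServer AdventureWorksDW"],
        Query = "SELECT * FROM dbo.FactInternetSales " +
                "WHERE OrderDateKey BETWEEN 20170101 AND 20170131"
    }
});

// Save the metadata change, then process only the new partition.
db.Model.SaveChanges();
sales.Partitions["FactInternetSales_201701"].RequestRefresh(RefreshType.Full);
db.Model.SaveChanges();
```

The actual sample adds configuration-driven rolling-window logic on top of this pattern, so partitions age in and out automatically as the window moves.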

Thanks to Marco Russo (SQLBI) and Bill Anton (Opifex Solutions) for their contributions to the whitepaper and code sample.

Released: SQL Server Data Tools 17.0 RC 2


SQL Server Data Tools 17.0 Release Candidate 2 (RC 2) has just been published. You can download and install from here: https://go.microsoft.com/fwlink/?linkid=837939.

If you’re evaluating new enhancements in Analysis Services Tabular 1400 models, be sure to download this latest version because it includes several important fixes; particularly with the modern Get Data experience.

Most noteworthy is the addition of a menu bar to the Query Editor, as shown in the following screenshot. The purpose of this menu is to provide quick and easy access to the same functions that Microsoft Excel and Power BI Desktop provide through the Query Editor ribbon.

Menu Navigation

Feedback received through email via the SSASPrev alias made it clear the Query Editor toolbar alone was not intuitive enough. See also the conversation in response to the Introducing a Modern Get Data Experience for SQL Server vNext on Windows CTP 1.1 for Analysis Services article. The ideal solution would be a ribbon in SSDT Tabular that mirrors the ribbon in Power BI Desktop. That way, there would be no friction switching back and forth between Power BI Desktop and SSDT Tabular. Unfortunately, however, the Visual Studio shell does not provide a ribbon infrastructure, requiring us to take a different approach.

While the Query Editor menu bar isn’t a ribbon, it can still be a very useful user interface element. In fact, you might find the menu arranges available commands in a clear and logical order and helps you conveniently discover performable actions. If you want to work with commands that act on a query, look at the Query menu. If you want to remove rows or keep a range of rows, the Rows menu has you covered. Want to add or remove columns? You get the idea.

Moreover, you can work with keyboard shortcuts! Want to keep the top 10 rows in a table? Press Alt+R, then K, enter 10 in the Keep Top Rows dialog box, and then press Enter. Want to remove a selected column? Press Alt+C, then R, and the job is done. Want to display the Advanced Editor? Alt+V, E. And simply press the Alt key to discover all the available shortcut combinations. In the following screenshot, you can see the sequence to parse the time values in a column would be Alt+T, M, T, and then P. This may not be the most convenient sequence, but it comes in handy if you find yourself performing a specific action very frequently.

Query Editor

Next on our list is to implement support for shared queries, functions, and parameters, and then to enable as many data sources as possible for close parity with Power BI Desktop. So, stay tuned for the forthcoming releases in subsequent months and keep sending us your suggestions and report any issues to SSASPrev here at Microsoft.com. Or, use any other available communication channels such as UserVoice or MSDN forums. You can influence the evolution of the Analysis Services connectivity stack to the benefit of all our customers.

Encoding Hints and SQL Server Analysis Services vNext CTP 1.3


The public CTP 1.3 of SQL Server vNext on Windows is available here! The corresponding versions of SQL Server Data Tools (SSDT) and SQL Server Management Studio (SSMS) will be released in the coming weeks. They include much-anticipated new features, so watch out for the upcoming announcements!

Encoding hints

CTP 1.3 introduces encoding hints, an advanced feature for optimizing processing (data refresh) of large in-memory tabular models. Please refer to the Performance Tuning of Tabular Models in SQL Server 2012 Analysis Services whitepaper to better understand encoding; the encoding process it describes still applies in CTP 1.3.

  • Value encoding provides better query performance for columns that are typically only used for aggregations.
  • Hash encoding is preferred for group-by columns (often dimension-table values) and foreign keys. String columns are always hash encoded.

Numeric columns can use either of these encoding methods. When Analysis Services starts processing a table, if either the table is empty (with or without partitions) or a full-table processing operation is being performed, sample values are taken for each numeric column to determine whether to apply value or hash encoding. By default, value encoding is chosen when the sample of distinct values in the column is large enough; otherwise, hash encoding usually provides better compression. It is possible for Analysis Services to change the encoding method after the column is partially processed, based on further information about the data distribution, and restart the encoding process. This of course increases processing time and is inefficient. The performance-tuning whitepaper discusses re-encoding in more detail and describes how to detect it using SQL Server Profiler.

Encoding hints in CTP 1.3 allow the modeler to specify a preference for the encoding method given prior knowledge from data profiling and/or in response to re-encoding trace events. Since aggregation over hash-encoded columns is slower than over value-encoded columns, value encoding may be specified as a hint for such columns. It is not guaranteed that the preference will be applied; hence it is a hint as opposed to a setting. To specify an encoding hint, set the EncodingHint property on the column. Possible values are “Default”, “Value” and “Hash”. At time of writing, the property is not yet exposed in SSDT, so must be set using the JSON-based metadata, Tabular Model Scripting Language (TMSL), or Tabular Object Model (TOM). The following snippet of JSON-based metadata from the Model.bim file specifies value encoding for the Sales Amount column.

  {
    "name": "Sales Amount",
    "dataType": "decimal",
    "sourceColumn": "SalesAmount",
    "formatString": "\\$#,0.00;(\\$#,0.00);\\$#,0.00",
    "sourceProviderType": "Currency",
    "encodingHint": "Value"
  }
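If you manage models programmatically, the same hint can also be set through TOM. The following is a minimal sketch; the server, database, table, and column names are hypothetical, and only the EncodingHint assignment is the point of interest.

```csharp
using Microsoft.AnalysisServices.Tabular;

// Connect to a hypothetical tabular instance and model.
var server = new Server();
server.Connect("localhost");
Database db = server.Databases.GetByName("AdventureWorks");

// Hint that the numeric Sales Amount column should be value encoded.
Column salesAmount = db.Model.Tables["Internet Sales"].Columns["Sales Amount"];
salesAmount.EncodingHint = EncodingHintType.Value;

// Persist the metadata change; the hint takes effect on the next processing run.
db.Model.SaveChanges();
```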

Extended events not working in CTP 1.3

SSAS extended events do not work in CTP 1.3. We plan to fix them for the next CTP.

Download now!

To get started, download SQL Server vNext on Windows CTP 1.3 from here. Be sure to keep an eye on this blog to stay up to date on Analysis Services.


Introducing SQL Server Data Tools for Analysis and Reporting Services for Visual Studio 2017


Visual Studio 2017 brings improvements to performance, navigation, IntelliSense, Azure development, mobile development, and boosts productivity through live unit testing and real-time architectural dependency validation. There are many good reasons for developers to use Visual Studio 2017. And thanks to the timely availability of SSDT AS and RS for Visual Studio 2017, BI Pros do not have to wait or run multiple versions of Visual Studio side by side.

The release version of Visual Studio 2017 is available for download at https://visualstudio.com

The full SQL Server Data Tools (SSDT) for Visual Studio 2017 stand-alone download is not yet available. This is still a work in progress and should be available in the near future. The good news is that the installation packages for the preview versions of SQL Server Analysis Services and SQL Server Reporting Services project types are already available as Visual Studio Deployment (VSIX) packages. VSIX packages provide a straightforward way to deploy extensions and they open new deployment options. For example, you no longer need to search for a separate download if you already have Visual Studio. From any edition of Visual Studio 2017 – including Community – just check out the Visual Studio Marketplace for convenient access to the AS and RS project types. Select the Tools > Extensions and Updates menu option, and search for “SSDT”. The two new BI VSIX packages should be displayed, as the following screenshot illustrates.

ssdt-2017

The Visual Studio Marketplace can also keep the AS and RS project types updated automatically, which comes in very handy if you want to stay on the latest and hottest with every new release. A little notification in Visual Studio reminds you when we make new updates available. You can configure the update settings through the Extensions and Updates dialog box in Visual Studio.

Support for Integration Services for Visual Studio 2017 is in progress, but is not yet available. For now, we recommend using SSDT for Visual Studio 2015 if you need to use all the BI project types together in a solution.

Of course, we are also going to continue to provide a unified SSDT setup experience, which, as mentioned, will be available in a forthcoming release. But don’t delay! Download and install the SSDT AS and RS packages through the Visual Studio Marketplace and let us know how you like your new Visual Studio 2017 development environment!

Please provide feedback to the Microsoft Engineering team: ProBIToolsFeedback at microsoft.com.

SSMS DAX Query Editor


We are excited to announce the SQL Server Management Studio DAX Query Editor! Have you ever authored a DAX query in SSMS using the MDX editor? With the new DAX Query Editor, you no longer need to. Download RC3 for vNext from the SSMS release candidate download page.

ssms-dax-animated

To try it out, click on the new DAX Query toolbar button, or right-click > New Query in Object Explorer.

dax-query-toolbar-button

IntelliSense works for DAX functions and model objects. Members listed for selection are type aware. For example, after an EVALUATE statement, the DAX Query Editor expects a table type, so lists DAX table-valued functions, and tables in the model.

intellisense-functions

Once a function is selected, parameter information is provided.

parameter-info

In the following example, measures are offered for selection instead of DAX functions and tables based on the position of the parameter.

type-aware-parameter-info

And of course, syntax highlighting works too.

syntax-highlighting
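Putting these pieces together, a query you might author in the editor could look like the following sketch. The 'Product' table and the [Internet Total Sales] measure are hypothetical names, not part of any particular model.

```dax
// Top 10 products by a hypothetical Internet Total Sales measure
EVALUATE
TOPN (
    10,
    SUMMARIZECOLUMNS (
        'Product'[Product Name],
        "Sales", [Internet Total Sales]
    ),
    [Sales], DESC
)
ORDER BY [Sales] DESC
```

After EVALUATE, IntelliSense offers table-valued functions such as TOPN and SUMMARIZECOLUMNS, and inside the measure parameter position it offers the model's measures, as described above.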

We hope you agree these features make DAX query authoring in SSMS more productive. For example, the type-aware IntelliSense makes it easier to find what you’re looking for.

This is the first release of the SSMS DAX Query Editor; it is not yet in GA status, and we are still adding enhancements. For example, DEFINE MEASURE syntax recognition and parenthesis-match highlighting are planned for the next release. If you have suggestions for enhancements, or general feedback, please use ProBIToolsFeedback at microsoft.com.

What makes a Data Source a Data Source?


It should be obvious, and it is — at least at the Tabular 1200 compatibility level: A data source definition in a Tabular model holds the connection information for Analysis Services to connect to a source of data such as a database, an OData feed, or a file. That’s straightforward. At the Tabular 1400 compatibility level, however, this is no longer so trivial, because a data source definition can include a native query and even a contextual M expression on top of the connection information, which opens interesting capabilities that didn’t exist previously and redefines to some degree the nature of a data source definition.

Let’s take a closer look at a data source definition in a Tabular 1400 model, such as the following definition for a SQL Server-based data source:
Data Source with default contextExpression
The two important properties are the query parameter in the connectionDetails, which can hold a native source query, and the contextExpression parameter, which can take an M expression. The default “…” simply stands for an expression that takes the data source definition as is without wrapping it into a further M context. You can find a more elaborate example at the end of this article. For now, just note that you won’t see the contextExpression in your data source definitions yet. A forthcoming release of SSAS and SSDT Tabular will enable this feature.

The query parameter, on the other hand, already exists in the metadata. It’s just that SSDT Tabular does not let you enter a source query through the user interface (UI) when defining a data source. This is intentional to maintain the familiar separation of connection information on data sources and source queries on table partitions. Equally, there are currently no plans to expose a contextExpression designer in the UI.

The following screenshot shows the Power BI Desktop UI in the background for a SQL Server data source with a textbox to enter a SQL query in comparison to SSDT Tabular in the foreground, which doesn’t offer this textbox.

Power BI Desktop UI vs SSDT UI

For most data modeling scenarios, a clear separation of connection information and source queries is advantageous. After all, multiple tables and partitions can refer to a single data source definition in SSDT. It doesn’t seem very useful to restrict a data source to a single result set by means of a source query, such as “SELECT * FROM dimCustomer”, defined through the data source’s query parameter. Instead, it would be more useful to specify the query when importing a table by using the Value.NativeQuery function, as the following screenshot illustrates.

Using Value.NativeQuery to specify a native source query for a table.

This way, the data source remains available for importing further tables from the same source. On the other hand, if you do need a data source with a very narrow scope, you can set the query parameter manually by using the Tabular Model Scripting Language (TMSL).
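For illustration, such a narrowly scoped data source might look like the following sketch in the JSON-based metadata. The server, database, and query are hypothetical, and the exact schema of structured data sources may vary between CTP releases.

```json
{
  "name": "SQL/localhost;AdventureWorksDW",
  "type": "structured",
  "connectionDetails": {
    "protocol": "tds",
    "address": {
      "server": "localhost",
      "database": "AdventureWorksDW"
    },
    "authentication": null,
    "query": "SELECT * FROM dbo.dimCustomer"
  },
  "credential": {
    "AuthenticationKind": "ServiceAccount"
  }
}
```

With the query parameter set, every table built on this data source is limited to the single result set the query returns.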

If it’s clearly not recommended to use the query parameter in a data source definition, then why did we come up with yet another such parameter called contextExpression? Well, this brings us back to the starting point: What makes a Data Source a Data Source?

broadornarrow

A data source can be defined along a varying degree of detail, as shown above. On one extreme, you could define a data source that is so narrow it returns a single character, such as by using the following source query: “SELECT TOP 1 Left(firstName, 1) FROM dimCustomer”. Not very useful, but still a source of data. On the other extreme, a data source could be so broad that the tables you import on top of it require redundant statements that could be avoided with a more precise data source definition. For example, by using Tabular Object Model (TOM) or TMSL, you could define a SQL Server data source that only specifies the server name but no database name. Any tables importing from this data source would now require an M expression that includes a line to navigate to the desired database first before importing a source table, such as “AdventureWorksDW = Source{[Name=”AdventureWorksDW”]}[Data]”. Perhaps even more extreme, some data sources can be defined so broadly that they don’t even include information about the data source type. For example, any file-based data source can be considered of type File, while in fact a better definition would be a Microsoft Access database, Microsoft Excel workbook, comma-separated values file, and so forth. This is where the contextExpression comes in. It adds context information to narrow down a very broad data source definition to make it more meaningful.
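To make the broad-definition case concrete, here is a sketch of the M expression a table would need on top of a data source that specifies only a server name. All names are hypothetical, including the data source reference.

```m
let
    // Broad data source: only the server is defined, so every table
    // expression must navigate to the desired database first.
    Source = #"SQL/myserver",
    AdventureWorksDW = Source{[Name="AdventureWorksDW"]}[Data],
    dbo_dimCustomer = AdventureWorksDW{[Schema="dbo",Item="dimCustomer"]}[Data]
in
    dbo_dimCustomer
```

Had the data source included the database name, the navigation step would be unnecessary, which is exactly the redundancy a more precise definition (or a contextExpression) avoids.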

The following abbreviated data source definition for an Access database shows the contextExpression in action. The connectionDetails merely define a File data source, which is too broad. What we want to define is an Access data source, so the contextExpression takes the File data source and wraps it into an Access.Database() function. As mentioned earlier, the placeholder expression “…” stands for the data source definition without the additional context.

contextExpression for Access.Database

By using a context expression, SSDT Tabular can define data sources that build on other data sources. Through TOM or TMSL, you can also edit the context expression to build more sophisticated definitions, yet this is generally not recommended. Also, unfortunately, TOM and TMSL do not provide an API for editing an M expression. This may come at some point in the future, but for now it’s not a priority.

And this is it for a quick glance at the upcoming contextExpression feature. As always, please send us your feedback and suggestions by using ProBIToolsFeedback or SSASPrev at Microsoft.com. Or use any other available communication channels such as UserVoice or MSDN forums. You can influence the evolution of the Analysis Services connectivity stack to the benefit of all our customers.

Online Analysis Services Course: Developing a Tabular Model


Check out the excellent, new online course by Peter Myers and Chris Randall for Microsoft Learning Experiences (LeX). Learn how to develop tabular data models with SQL Server 2016 Analysis Services. The complete course is available on edX at no cost to audit, or you can highlight your new knowledge and skills with a Verified Certificate for a small charge. Enrollment is available at edX.

Learn about Azure Analysis Services at the Microsoft Data Insights Summit 2017


We’re excited to participate in the Microsoft Data Insights Summit June 12 – 13, 2017 in Seattle, WA. This two-day event is designed to help you identify deeper insights, make better sense of your data, and take action to transform your business.

This year’s Microsoft Data Insights Summit will be filled with strong technical content, vibrant speakers, and an engaged community of experts. The event offers deep dive sessions, hands-on learning, industry insights, and direct access to experts. Join us to expand your skills, connect directly with Microsoft product development teams, and learn how to get the most from the Microsoft BI stack.

The Analysis Services program-management team is excited to deliver the following sessions.

Super Charge Power BI with Azure Analysis Services

Monday, June 12. 11:10 am – 12:00 pm.

Join this session for a deep dive into how you can scale up a Power BI model by migrating it to Azure Analysis Services. This new service enables larger models and allows fine-grained control of refresh behavior. We will cover migration, using the gateway for on-premises data, and new connectivity with Power Query and the M engine for Power BI compatibility and reuse. Other topics will include creating reports that tell stories, distributing in SPO or PtW, collaborative conversations across teams, data story galleries, custom visuals, Sway, and more.

Creating Enterprise Grade BI Models with Azure Analysis Services

Tuesday, June 13. 11:40 am – 12:30 pm.

Microsoft Azure Analysis Services and SQL Server Analysis Services enable you to build comprehensive, enterprise-scale analytic solutions that deliver actionable insights through familiar data visualization tools such as Microsoft Power BI and Microsoft Excel. Analysis Services enables consistent data across reports and users of Power BI. The demos will cover new features such as improved Power BI Desktop feature integration, Power Query connectivity, and techniques for modeling and data loading that enable the best reporting experiences. Various modeling enhancements will be included, such as Detail Rows, which allows users to easily see transactional records, and improved support for ragged hierarchies.

Check out the sessions page for the complete list of sessions. Don’t miss out—register today!
