-
Notifications
You must be signed in to change notification settings - Fork 0
/
Copy pathindex.json
277 lines (277 loc) · 384 KB
/
index.json
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
[{
"title": "Sample Archive",
"date": "",
"description": "",
"body": "",
"ref": "/archives/"
},{
"title": "Sample Archive",
"date": "",
"description": "",
"body": "",
"ref": "/search/"
},{
"title": "Terraform CIDR Functions",
"date": "",
"description": "",
"body": "Recently, when I was using Terraform to provision virtual network structure, I had to find a solution to create 4 predefined subnets within a given network range. Initially - as PoC - I used string manipulation to obtain the ranges, but I haven\u0026rsquo;t liked it. Then I found two built-in terraform functions, that are a great help when working with network aspect: cidrsubnet and cidrsubnets.\nCIDR notation 101\rWikipedia states, that\nClassless Inter-Domain Routing (CIDR /ˈsaɪdər, ˈsɪ-/) is a method for allocating IP addresses and for IP routing\nand\nCIDR notation is a compact representation of an IP address and its associated network mask.\nAn example would be 10.0.0.0/16. This is all nice, but I struggle to remember the CIDR notation stuff, so I always have a short cheatsheet for it:\n/16 == 65 536 addresses, ex. 10.0.0.0 - 10.0.255.255 /17 == 32 768 addresses /18 == 16 384 addresses ... /23 == 512 addresses /24 == 256 addresses, ex. 10.0.0.0 - 10.0.0.255 /25 == 128 addresses /26 == 64 addresses /27 == 32 addresses ... /32 == 1 address I work with Azure and I usually see or use /16, /23, /24, /26, and /27 networks and subnets. And when I divide a VNet into subnets, I have them of the same size.\nAn example: /24 address space (256 addresses) divided into four equal /26 ranges, 64 addresses each:\nvnet: 10.0.0.0/24 -\u0026gt; 256 addresses, 10.0.0.0 - 10.0.0.255 subnet0: 10.0.0.0/26 -\u0026gt; 64 addresses, 10.0.0.0 - 10.0.0.63 subnet1: 10.0.0.64/26 -\u0026gt; 64 addresses, 10.0.0.64 - 10.0.0.127 subnet2: 10.0.0.128/26 -\u0026gt; 64 addresses, 10.0.0.128 - 10.0.0.191 subnet3: 10.0.0.192/26 -\u0026gt; 64 addresses, 10.0.0.192 - 10.0.0.255 CIDR notation handling in Terraform\rTo avoid mundane string manipulation, and to ease working with CIDR notation Terraform has four functions: cidrhost, cidrnetmask, cidrsubnet, and cidrsubnets. For my purpose, I will use the last two:\ncidrsubnet(prefix, newbits, netnum) - calculates subnet at netnum position for a given prefix cidrsubnets(prefix, newbits...) - calculates subsequent subnets for a given prefix\nLet\u0026rsquo;s stick to the /24 -\u0026gt; 4 x /26 example above. To split the /24 VNet into four subnets I can write four cidrsubnet functions:\nD:\\temp\\_terraform\u0026gt; terraform console \u0026gt; cidrsubnet(\u0026#34;10.244.200.0/24\u0026#34;, 2, 0) \u0026#34;10.244.200.0/26\u0026#34; \u0026gt; cidrsubnet(\u0026#34;10.244.200.0/24\u0026#34;, 2, 1) \u0026#34;10.244.200.64/26\u0026#34; \u0026gt; cidrsubnet(\u0026#34;10.244.200.0/24\u0026#34;, 2, 2) \u0026#34;10.244.200.128/26\u0026#34; \u0026gt; cidrsubnet(\u0026#34;10.244.200.0/24\u0026#34;, 2, 3) \u0026#34;10.244.200.192/26\u0026#34; \u0026gt; cidrsubnet(\u0026#34;10.244.200.0/24\u0026#34;, 2, 4) ╷ │ Error: Error in function call │ │ on \u0026lt;console-input\u0026gt; line 1: │ (source code not available) │ │ Call to function \u0026#34;cidrsubnet\u0026#34; failed: prefix extension of 2 does not accommodate a subnet numbered 4. I\u0026rsquo;ll explain the above example step by step.\nThe definition of cidrsubnet is cidrsubnet(prefix, newbits, netnum). prefix is the VNet address space, specified in the CIDR format. In my case: 10.244.200.0/24. I want /26 subnets, so as newbits I pass the value 2, which means dear Terraform, please add 2 to the /24 space, to obtain a few /26 address ranges. In the background, Terraform splits the /24 range into four equal /26 subranges, and enumerates them starting from 0. I imagine this as Terraform keeping internally an array of four subnets, like in the example above. If I want the first range, I set netnum as 0, if I want the third, I set netnum as 2, etc. And since the /24 range is split equally into four /26 ranges, I can\u0026rsquo;t assign newnum == 4, as the range does not exist.\nI can either write four separate cidrsubnet() expressions to get four subnets, or I can ask Terraform to do it for me in one command using cidrsubnets(). Again as a reminder:\ncidrsubnets(prefix, newbits...)\nD:\\temp\\_terraform\u0026gt; terraform console \u0026gt; cidrsubnets(\u0026#34;10.244.200.0/24\u0026#34;, 2, 2, 2, 2) tolist([ \u0026#34;10.244.200.0/26\u0026#34;, \u0026#34;10.244.200.64/26\u0026#34;, \u0026#34;10.244.200.128/26\u0026#34;, \u0026#34;10.244.200.192/26\u0026#34;, ]) The above means: dear Terraform, take this /24 space, then split it for me into ranges; first take 64 addresses, then again take 64 addresses, then again take 64 addresses, then again take 64 addresses. Terraform knows I want 64 addresses, as he creates subsequent ranges by adding the newbits value to the given /24 space. This way, he obtains /26 range, that contains 64 addresses. Each newbits value adds a range to the previously created ones.\nIt\u0026rsquo;s still unclear, so let\u0026rsquo;s go again with an example:\ncidrsubnets(\u0026#34;10.244.200.0/24\u0026#34;, 2, 2, 2, 2) | | | | | /24 address space -+ | | | | /26 address space (/24 + 2), 64 addresses -+ | | | 10.244.200.0 .. 10.244.200.63 /26 address space (/24 + 2), 64 addresses ----+ | | 10.244.200.64 .. 10.244.200.127 /26 address space (/24 + 2), 64 addresses -------+ | 10.244.200.128 .. 10.244.200.191 /26 address space (/24 + 2), 64 addresses ----------+ 10.244.200.192 .. 10.244.200.255 Using these techniques, I can split my address range in multiple ways, and I do not have to stick to the one newbits value. Both examples below are equivalent:\nD:\\temp\\_terraform\u0026gt; terraform console \u0026gt; cidrsubnets(\u0026#34;10.244.200.0/24\u0026#34;, 2, 2, 3, 3, 2) tolist([ \u0026#34;10.244.200.0/26\u0026#34;, \u0026#34;10.244.200.64/26\u0026#34;, \u0026#34;10.244.200.128/27\u0026#34;, \u0026#34;10.244.200.160/27\u0026#34;, \u0026#34;10.244.200.192/26\u0026#34;, ]) \u0026gt; cidrsubnet(\u0026#34;10.244.200.0/24\u0026#34;, 2, 0) \u0026#34;10.244.200.0/26\u0026#34; \u0026gt; cidrsubnet(\u0026#34;10.244.200.0/24\u0026#34;, 2, 1) \u0026#34;10.244.200.64/26\u0026#34; \u0026gt; cidrsubnet(\u0026#34;10.244.200.0/24\u0026#34;, 3, 4) \u0026#34;10.244.200.128/27\u0026#34; \u0026gt; cidrsubnet(\u0026#34;10.244.200.0/24\u0026#34;, 3, 5) \u0026#34;10.244.200.160/27\u0026#34; \u0026gt; cidrsubnet(\u0026#34;10.244.200.0/24\u0026#34;, 2, 3) \u0026#34;10.244.200.192/26\u0026#34; One thing to remember when using cidrsubnet() - it splits the initial range into an equal number of subranges. So in the example above I want /26 and /27 subnets, so Terraform creates these arrays in the background:\ntolist([ \u0026#34;10.244.200.0/26\u0026#34;, \u0026#34;10.244.200.64/26\u0026#34;, \u0026#34;10.244.200.128/26\u0026#34;, \u0026#34;10.244.200.192/26\u0026#34;, ]) tolist([ \u0026#34;10.244.200.0/27\u0026#34;, \u0026#34;10.244.200.32/27\u0026#34;, \u0026#34;10.244.200.64/27\u0026#34;, \u0026#34;10.244.200.96/27\u0026#34;, \u0026#34;10.244.200.128/27\u0026#34;, \u0026#34;10.244.200.160/27\u0026#34;, \u0026#34;10.244.200.192/27\u0026#34;, \u0026#34;10.244.200.224/27\u0026#34;, ]) As the first /26 range encompasses the first two /27 ranges, and the second /26 range encompasses the third and fourth /27 ranges, I need to use netnum == 4 (fifth range) for the third subnet.\nFor completness the cidrsubnets(\u0026quot;10.244.200.0/24\u0026quot;, 2, 2, 3, 3, 2) diagram\ncidrsubnets(\u0026#34;10.244.200.0/24\u0026#34;, 2, 2, 3, 3, 2) | | | | | | /24 address space -+ | | | | | /26 address space (/24 + 2), 64 addresses -+ | | | | 10.244.200.0 .. 10.244.200.63 /26 address space (/24 + 2), 64 addresses ----+ | | | 10.244.200.64 .. 10.244.200.127 /27 address space (/24 + 3), 32 addresses -------+ | | 10.244.200.128 .. 10.244.200.159 /27 address space (/24 + 3), 32 addresses ----------+ | 10.244.200.160 .. 10.244.200.191 /26 address space (/24 + 2), 64 addresses -------------+ 10.244.200.192 .. 10.244.200.255 Assign ranges to subnets\rOnce I know how to use cidrsubnet() and cidrsubnets() functions, I can assing ranges to the subnets. Let\u0026rsquo;s say, I have this kind of network segmentation:\nsubnetA range 10.x.y.0 .. 10.x.y.63 subnetB range 10.x.y.64 .. 10.x.y.127 subnetC range 10.x.y.128 .. 10.x.y.191 subnetD range 10.x.y.192 .. 10.x.y.255 With this terraform code I can create subnets for a given virtual network:\nprovider \u0026#34;azurerm\u0026#34; { features {} } locals { vnet1_address_space = \u0026#34;10.244.200.0/24\u0026#34; vnet2_address_space = \u0026#34;10.244.204.0/24\u0026#34; subnets = { \u0026#34;subnetA\u0026#34; = 0 \u0026#34;subnetB\u0026#34; = 1 \u0026#34;subnetC\u0026#34; = 2 \u0026#34;subnetD\u0026#34; = 3 } # for cidrsubnets() approach subnet_ranges = cidrsubnets(local.vnet2_address_space, 2, 2, 2, 2) } resource \u0026#34;azurerm_resource_group\u0026#34; \u0026#34;rg\u0026#34; { name = \u0026#34;rg-shared-networking\u0026#34; location = \u0026#34;West Europe\u0026#34; } # using cidrsubnet() resource \u0026#34;azurerm_virtual_network\u0026#34; \u0026#34;vnet1\u0026#34; { name = \u0026#34;vnet-shared1\u0026#34; address_space = [local.vnet1_address_space] location = azurerm_resource_group.rg.location resource_group_name = azurerm_resource_group.rg.name } resource \u0026#34;azurerm_subnet\u0026#34; \u0026#34;snet1\u0026#34; { resource_group_name = azurerm_resource_group.rg.name virtual_network_name = azurerm_virtual_network.vnet1.name for_each = local.subnets name = \u0026#34;snet-${each.key}\u0026#34; address_prefixes = [cidrsubnet(local.vnet1_address_space, 2, each.value)] } # using cidrsubnets() resource \u0026#34;azurerm_virtual_network\u0026#34; \u0026#34;vnet2\u0026#34; { name = \u0026#34;vnet-shared2\u0026#34; address_space = [local.vnet2_address_space] location = azurerm_resource_group.rg.location resource_group_name = azurerm_resource_group.rg.name } resource \u0026#34;azurerm_subnet\u0026#34; \u0026#34;snet2\u0026#34; { resource_group_name = azurerm_resource_group.rg.name virtual_network_name = azurerm_virtual_network.vnet2.name for_each = local.subnets name = \u0026#34;snet-${each.key}\u0026#34; address_prefixes = [local.subnet_ranges[each.value]] } When run with terraform plan, the code above produces this output:\nTerraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols: + create Terraform will perform the following actions: # azurerm_resource_group.rg will be created + resource \u0026#34;azurerm_resource_group\u0026#34; \u0026#34;rg\u0026#34; { + id = (known after apply) + location = \u0026#34;westeurope\u0026#34; + name = \u0026#34;rg-shared-networking\u0026#34; } # azurerm_subnet.snet1[\u0026#34;subnetA\u0026#34;] will be created + resource \u0026#34;azurerm_subnet\u0026#34; \u0026#34;snet1\u0026#34; { + address_prefixes = [ + \u0026#34;10.244.200.0/26\u0026#34;, ] + enforce_private_link_endpoint_network_policies = (known after apply) + enforce_private_link_service_network_policies = (known after apply) + id = (known after apply) + name = \u0026#34;snet-subnetA\u0026#34; + private_endpoint_network_policies_enabled = (known after apply) + private_link_service_network_policies_enabled = (known after apply) + resource_group_name = \u0026#34;rg-shared-networking\u0026#34; + virtual_network_name = \u0026#34;vnet-shared1\u0026#34; } # azurerm_subnet.snet1[\u0026#34;subnetB\u0026#34;] will be created + resource \u0026#34;azurerm_subnet\u0026#34; \u0026#34;snet1\u0026#34; { + address_prefixes = [ + \u0026#34;10.244.200.64/26\u0026#34;, ] + enforce_private_link_endpoint_network_policies = (known after apply) + enforce_private_link_service_network_policies = (known after apply) + id = (known after apply) + name = \u0026#34;snet-subnetB\u0026#34; + private_endpoint_network_policies_enabled = (known after apply) + private_link_service_network_policies_enabled = (known after apply) + resource_group_name = \u0026#34;rg-shared-networking\u0026#34; + virtual_network_name = \u0026#34;vnet-shared1\u0026#34; } # azurerm_subnet.snet1[\u0026#34;subnetC\u0026#34;] will be created + resource \u0026#34;azurerm_subnet\u0026#34; \u0026#34;snet1\u0026#34; { + address_prefixes = [ + \u0026#34;10.244.200.128/26\u0026#34;, ] + enforce_private_link_endpoint_network_policies = (known after apply) + enforce_private_link_service_network_policies = (known after apply) + id = (known after apply) + name = \u0026#34;snet-subnetC\u0026#34; + private_endpoint_network_policies_enabled = (known after apply) + private_link_service_network_policies_enabled = (known after apply) + resource_group_name = \u0026#34;rg-shared-networking\u0026#34; + virtual_network_name = \u0026#34;vnet-shared1\u0026#34; } # azurerm_subnet.snet1[\u0026#34;subnetD\u0026#34;] will be created + resource \u0026#34;azurerm_subnet\u0026#34; \u0026#34;snet1\u0026#34; { + address_prefixes = [ + \u0026#34;10.244.200.192/26\u0026#34;, ] + enforce_private_link_endpoint_network_policies = (known after apply) + enforce_private_link_service_network_policies = (known after apply) + id = (known after apply) + name = \u0026#34;snet-subnetD\u0026#34; + private_endpoint_network_policies_enabled = (known after apply) + private_link_service_network_policies_enabled = (known after apply) + resource_group_name = \u0026#34;rg-shared-networking\u0026#34; + virtual_network_name = \u0026#34;vnet-shared1\u0026#34; } # azurerm_subnet.snet2[\u0026#34;subnetA\u0026#34;] will be created + resource \u0026#34;azurerm_subnet\u0026#34; \u0026#34;snet2\u0026#34; { + address_prefixes = [ + \u0026#34;10.244.204.0/26\u0026#34;, ] + enforce_private_link_endpoint_network_policies = (known after apply) + enforce_private_link_service_network_policies = (known after apply) + id = (known after apply) + name = \u0026#34;snet-subnetA\u0026#34; + private_endpoint_network_policies_enabled = (known after apply) + private_link_service_network_policies_enabled = (known after apply) + resource_group_name = \u0026#34;rg-shared-networking\u0026#34; + virtual_network_name = \u0026#34;vnet-shared2\u0026#34; } # azurerm_subnet.snet2[\u0026#34;subnetB\u0026#34;] will be created + resource \u0026#34;azurerm_subnet\u0026#34; \u0026#34;snet2\u0026#34; { + address_prefixes = [ + \u0026#34;10.244.204.64/26\u0026#34;, ] + enforce_private_link_endpoint_network_policies = (known after apply) + enforce_private_link_service_network_policies = (known after apply) + id = (known after apply) + name = \u0026#34;snet-subnetB\u0026#34; + private_endpoint_network_policies_enabled = (known after apply) + private_link_service_network_policies_enabled = (known after apply) + resource_group_name = \u0026#34;rg-shared-networking\u0026#34; + virtual_network_name = \u0026#34;vnet-shared2\u0026#34; } # azurerm_subnet.snet2[\u0026#34;subnetC\u0026#34;] will be created + resource \u0026#34;azurerm_subnet\u0026#34; \u0026#34;snet2\u0026#34; { + address_prefixes = [ + \u0026#34;10.244.204.128/26\u0026#34;, ] + enforce_private_link_endpoint_network_policies = (known after apply) + enforce_private_link_service_network_policies = (known after apply) + id = (known after apply) + name = \u0026#34;snet-subnetC\u0026#34; + private_endpoint_network_policies_enabled = (known after apply) + private_link_service_network_policies_enabled = (known after apply) + resource_group_name = \u0026#34;rg-shared-networking\u0026#34; + virtual_network_name = \u0026#34;vnet-shared2\u0026#34; } # azurerm_subnet.snet2[\u0026#34;subnetD\u0026#34;] will be created + resource \u0026#34;azurerm_subnet\u0026#34; \u0026#34;snet2\u0026#34; { + address_prefixes = [ + \u0026#34;10.244.204.192/26\u0026#34;, ] + enforce_private_link_endpoint_network_policies = (known after apply) + enforce_private_link_service_network_policies = (known after apply) + id = (known after apply) + name = \u0026#34;snet-subnetD\u0026#34; + private_endpoint_network_policies_enabled = (known after apply) + private_link_service_network_policies_enabled = (known after apply) + resource_group_name = \u0026#34;rg-shared-networking\u0026#34; + virtual_network_name = \u0026#34;vnet-shared2\u0026#34; } # azurerm_virtual_network.vnet1 will be created + resource \u0026#34;azurerm_virtual_network\u0026#34; \u0026#34;vnet1\u0026#34; { + address_space = [ + \u0026#34;10.244.200.0/24\u0026#34;, ] + dns_servers = (known after apply) + guid = (known after apply) + id = (known after apply) + location = \u0026#34;westeurope\u0026#34; + name = \u0026#34;vnet-shared1\u0026#34; + resource_group_name = \u0026#34;rg-shared-networking\u0026#34; + subnet = (known after apply) } # azurerm_virtual_network.vnet2 will be created + resource \u0026#34;azurerm_virtual_network\u0026#34; \u0026#34;vnet2\u0026#34; { + address_space = [ + \u0026#34;10.244.204.0/24\u0026#34;, ] + dns_servers = (known after apply) + guid = (known after apply) + id = (known after apply) + location = \u0026#34;westeurope\u0026#34; + name = \u0026#34;vnet-shared2\u0026#34; + resource_group_name = \u0026#34;rg-shared-networking\u0026#34; + subnet = (known after apply) } Plan: 11 to add, 0 to change, 0 to destroy. ",
"ref": "/2023/08/05/terraform-cidr-functions/"
},{
"title": "Build your SQL Server VM using CdkTf",
"date": "",
"description": "",
"body": "When I build infrastructure in Azure, I use terraform. I can\u0026rsquo;t force myself to use ARM templates, and I\u0026rsquo;m not yet convinced to use bicep. Terraform has flaws and problems, and HCL is not that intuitive initially, but after banging my head against the wall, we became friends.\nYet, some time ago, when I saw Pulumi on DevOpsLab - I knew I would have to try it. Using C# to create infrastructure was something I wanted to do a while ago with Farmer and F#. And while it still is something I want to try, HashiCorp announced CDKTF - Cloud Development Kit for TerraForm. Now I can still use terraform, but write code in C# instead of HCL!\nBut - as with love at first sight - the best cure is to look again. And while we are still mates with terraform, CDKTF looks like his annoying younger brother.\nOf roses and thorns\rTo try CDKTF, I set myself a goal: I want a virtual machine in Azure. With SQL Server on Windows, so I can work with SSIS. Sounds easy enough, but it takes time to prepare the code, as we need to use multiple Azure elements. The conclusions after building such VM:\nCDKTF lacks Azure documentation and samples for C# - the main focus is on AWS and TypeScript\nCDKTF prepares skeleton code that uses .NET 3.1\nevery Azure resource has its own namespace\nyou have to use names for identifiers and names inside the configuration (and variables, if required) - which looks redundant; an example:\nVirtualNetwork n = new VirtualNetwork(this, \u0026#34;vnet\u0026#34;, new VirtualNetworkConfig { Location = rg.Location, ResourceGroupName = rg.Name, AddressSpace = new []{\u0026#34;10.2.0.0/16\u0026#34;}, Name = \u0026#34;vm-ssis-vnet\u0026#34; }); terraform\u0026rsquo;s documentation is extensive but lacks additional examples\nthe disks setup for SQL Server VM does not have enough parameters for tempdb\nPreparing the environment\rCDKTF allows writing infrastructure setup using TypeScript, Python, C#, Java, and Go. I will use C# for this example, but no matter which language you choose, you need to install Node.js 16+, npm, and terraform CLI version 1.1+.\nIf you already have some Node.js and terraform CLI versions installed, you can check them using the following:\nnode --version terraform --version Upgrade software when needed (I use Windows, so I downloaded Node.js installer and terraform binary), and then - install CDKTF typing:\nnpm install --global cdktf-cli@latest # install cdktf cdktf --version # check that it\u0026#39;s installed Done. Now to\nPreparing the code\rI want to build VM with SQL Server, so I created a dedicated folder vm-ssis, and initiated the scaffolding for the project with cdktf init\nmkdir vm-ssis cd vm-ssis cdktf init --template=csharp --local After running the above, it forces me to provide the project name, description, and whether to send crash telemetry. --template is for specifying the target language to generate project structure, and --local means that terraform will store state locally.\nAs a result, there is a dotnet project containing:\nProgram.cs, MainStack.cs, MyTerraformStack.csproj, TestProgram.cs, .gitignore, cdktf.json and help files netcoreapp3.1 as the target framework version (while versions 6 and 7 are available) package references HashiCorp.Cdktf 0.13.0 Microsoft.NET.Test.Sdk 17.2.0 xunit 2.4.1 xunit.runner.visualstudio 2.4.5 Nice to see that by default there is a suggestion for writing tests, and one of the existing test frameworks is used instead of creating a separate one.\nStarting slowly\rI\u0026rsquo;m just learning how to use CDKTF and want a simple Virtual Machine. Hence I will touch on neither the modularisation topic nor splitting the code between separate files. I will also not describe the complete source code (available at https://github.com/BartekR/blog/tree/master/202210%20Cdktf) - I will focus on the most important (for me) aspects.\nThe file MainStack.cs has a placeholder that gives me a hint about where I should place the code:\nusing System; using Constructs; using HashiCorp.Cdktf; namespace MyCompany.MyApp { class MainStack : TerraformStack { public MainStack(Construct scope, string id) : base(scope, id) { // define resources here } } } I don\u0026rsquo;t care about the namespace in this case, so I keep it as it is. Before I define the resources, I need to inform my project that I want to build the infrastructure in Azure. For that, I use the following command, suggested by CDKTF setup: dotnet add package HashiCorp.Cdktf.Providers.Azurerm. As a result, it adds HashiCorp.Cdktf.Providers.Azurerm version 3.0.12 to the package references. Now I can use Azure resources definitions and AzureRm config:\npublic MainStack(Construct scope, string id) : base(scope, id) { // define resources here new AzurermProvider(this, \u0026#34;azurerm\u0026#34;, new AzurermProviderConfig { Features = new AzurermProviderFeatures() }); } Before I create the code for SQL Server VM, I need to familiarize myself with the process in Azure Portal. I prepared a new VM resource and reviewed the existing requirements.\nSome things to note for future reference:\nBasics I create VM without infrastructure redundancy (no VMSS, availability zones, and availability sets) Standard security type Image is Free SQL Server License: SQL 2019 Developer on Windows Server 2019 (open See all images to select it) size is Standard_D2s_v3 (~138 EUR/month) I allow 3389 (RDP) port for inbound traffic Disks Standard SSD is enough I want to delete discs when I remove the VM Networking I use Basic NIC network security group I want to delete public IP and NIC when I remove VM Management, Monitoring, and Advanced I keep all the defaults SQL Server settings I want only traffic within my Virtual Network (default) enable SQL authentication (because - why not?) default storage options (with default drives and disk types): SQL Data: 1024 GiB, 5000 IOPS, 200 MB/s SQL Log: 1024 GiB, 5000 IOPS, 200 MB/s SQL TempDb: Use local SSD drive\ndefault configuration for instance setting automated patching (default) Few words more about the discs setup. By default, separate drives for data, logs and tempdb are used on premium SSD storage:\nF:\\data - for databases (except system databases) G:\\log - for databases logs D:\\tempDb - for tempdb All the above means there is additional work to do to have the same setup for my VM.\nTaking a bit more speed\rAfter clicking the parameters in Azure Portal, I saved the ARM template in vm.create.json for reference and help when I\u0026rsquo;m stuck. I read mssql_virtua_machine documentation and configuration sample. They helped a lot, as I got more understanding of how the elements interact with each other.\nI will not say that writing the initial code was a breeze, but I got stuck only a few times and solved the issues with a trial-and-error approach.\nFirst, I need a resource group:\npublic MainStack(Construct scope, string id) : base(scope, id) { // define resources here // (...) ResourceGroup rg = new ResourceGroup(this, \u0026#34;rg\u0026#34;, new ResourceGroupConfig { Location = \u0026#34;West Europe\u0026#34;, Name = \u0026#34;VM-SSIS-RG\u0026#34; }); } The first annoyance: I need to define id for the element - \u0026ldquo;rg\u0026rdquo; (nothing special - it\u0026rsquo;s the default behaviour of terraform), but I also need to define the variable rg if I want to reference it later. I understand it. I just consider it an overhead.\nThen I decide to use Virtual Network. Pay attention - I reference the previously defined ResourceGroup rg variable to specify Location and ResourceGroupName parameters:\npublic MainStack(Construct scope, string id) : base(scope, id) { // define resources here // (...) VirtualNetwork n = new VirtualNetwork(this, \u0026#34;vnet\u0026#34;, new VirtualNetworkConfig { Location = rg.Location, ResourceGroupName = rg.Name, AddressSpace = new []{\u0026#34;10.2.0.0/16\u0026#34;}, Name = \u0026#34;vm-ssis-vnet\u0026#34; }); } Second annoyance: documentation is sparse, and if not the default samples, it would take me a lot more time to figure I should use an anonymous array.\nAbove I used references to ResourceGroup and VirtualNetwork, but to do that, I included the namespaces in the code:\nusing Constructs; using HashiCorp.Cdktf; using HashiCorp.Cdktf.Providers.Azurerm.MssqlVirtualMachine; using HashiCorp.Cdktf.Providers.Azurerm.NetworkInterface; using HashiCorp.Cdktf.Providers.Azurerm.NetworkSecurityGroup; using HashiCorp.Cdktf.Providers.Azurerm.NetworkSecurityRule; using HashiCorp.Cdktf.Providers.Azurerm.NetworkInterfaceSecurityGroupAssociation; using HashiCorp.Cdktf.Providers.Azurerm.Provider; using HashiCorp.Cdktf.Providers.Azurerm.PublicIp; using HashiCorp.Cdktf.Providers.Azurerm.ResourceGroup; using HashiCorp.Cdktf.Providers.Azurerm.Subnet; using HashiCorp.Cdktf.Providers.Azurerm.VirtualMachine; using HashiCorp.Cdktf.Providers.Azurerm.VirtualNetwork; Constructs and HashiCorp.CdkTf are included by default, but if I want my code to be clean - every resource has its own namespace.\nOnce I created a resource group, virtual network, subnet, IP, etc. I got the general pattern, and I liked it. As with classic terraform - you create a resource definition and then use it in subsequent elements.\nI have the code. What next?\rWhen adding the code, I frequently run dotnet build to be sure it works as expected. To run the code - the recommended way is to use cdktf CLI and run cdktf deploy \u0026lt;project-name\u0026gt;, where \u0026lt;project-name\u0026gt; is the one provided during project creation. If you don\u0026rsquo;t remember:\neither use cdktf list in the project folder or open Program.cs and check line new MainStack(app, \u0026quot;vm-ssis\u0026quot;); - there is the project name or open cdktf.out\\stacks folder - each project (stack) is a separate directory - pick the directory name During deployment, the code is built with dotnet run command (see cdktf.json for details), synthesized into JSON format in cdk.tf.json file, and deployed using terraform apply.\nTo remove the VM, run cdktf destroy \u0026lt;project-name\u0026gt;.\nLast annoyance\rCDKTF is in the early stage, and there are bugs in the AzureRm provider. One I found particularly annoying for SQL Server virtual machine was the TempDb configuration. With the classic terraform I have all the options from Azure Portal, but CDKTF allows only the same settings as for log and data files (taken from metadata):\n#region zestaw HashiCorp.Cdktf.Providers.Azurerm, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null // HashiCorp.Cdktf.Providers.Azurerm.dll #endregion #nullable enable using Amazon.JSII.Runtime.Deputy; namespace HashiCorp.Cdktf.Providers.Azurerm.MssqlVirtualMachine { [JsiiInterface(typeof(IMssqlVirtualMachineStorageConfigurationTempDbSettings), \u0026#34;@cdktf/provider-azurerm.mssqlVirtualMachine.MssqlVirtualMachineStorageConfigurationTempDbSettings\u0026#34;)] public interface IMssqlVirtualMachineStorageConfigurationTempDbSettings { // // Summary: // Docs at Terraform Registry: {@link https://www.terraform.io/docs/providers/azurerm/r/mssql_virtual_machine#default_file_path // MssqlVirtualMachine#default_file_path}. [JsiiProperty(\u0026#34;defaultFilePath\u0026#34;, \u0026#34;{\\\u0026#34;primitive\\\u0026#34;:\\\u0026#34;string\\\u0026#34;}\u0026#34;, false, false)] string DefaultFilePath { get; } // // Summary: // Docs at Terraform Registry: {@link https://www.terraform.io/docs/providers/azurerm/r/mssql_virtual_machine#luns // MssqlVirtualMachine#luns}. [JsiiProperty(\u0026#34;luns\u0026#34;, \u0026#34;{\\\u0026#34;collection\\\u0026#34;:{\\\u0026#34;elementtype\\\u0026#34;:{\\\u0026#34;primitive\\\u0026#34;:\\\u0026#34;number\\\u0026#34;},\\\u0026#34;kind\\\u0026#34;:\\\u0026#34;array\\\u0026#34;}}\u0026#34;, false, false)] double[] Luns { get; } } } I hope it will change soon, but be aware that today the problem exists.\n",
"ref": "/2022/10/31/build-your-sql-server-vm-using-cdktf/"
},{
"title": "End-to-End testing with Playwright, part II",
"date": "",
"description": "",
"body": "In the first part I created a sample Blazor app, installed Playwright and created first test. This post is to get more comfortable with Playwright.\nGetting more familiar\rJust to recall the test I wrote (validation of the element on a /counter web page):\nusing Microsoft.Playwright.NUnit; using System.Threading.Tasks; namespace BlazorApp.Tests; class MainPageTests : PageTest { public async Task CounterStartsWithZero() { // call to the `/counter` page await Page.GotoAsync(\u0026#34;http://localhost:5165/counter\u0026#34;); // search for the counter value var content = await Page.TextContentAsync(\u0026#34;p\u0026#34;); // assertion for the value Assert.Equals(\u0026#34;Current count: 0\u0026#34;, content); } } Writing subsequent tests is like copy/paste/edit - go to the page, do something, verify if it returns what you expected. As an example - check if I\u0026rsquo;m redirected to the /counter page when I click the link on the main page looks like the following:\n[Test] public async Task ClickingCounterRedirectsToCounterPage() { // call to the main page await Page.GotoAsync(\u0026#34;http://localhost:5165/\u0026#34;); // search for the counter link and click it await Page.ClickAsync(\u0026#34;text=Counter\u0026#34;); // verify redirection Assert.AreEqual(\u0026#34;xxx\u0026#34;, Page.Url); } Page.ClickAsync() is a void method, so to check whether I was redirected, I check the Page.Url property, as it is updated in the navigation lifecycle. As previously - the first test execution should fail (and also helps to verify what is returned). And it fails:\nExpected string length 3 but was 29. Strings differ at index 0. Expected: \u0026#34;xxx\u0026#34; But was: \u0026#34;http://localhost:5165/counter\u0026#34; -----------^ Page.Url returns the whole URL (I expected that) as a string. I could compare the absolute URLs, but I don\u0026rsquo;t want to. I will use System.Uri to get only a relative part:\n[Test] public async Task ClickingCounterRedirectsToCounterPage() { // call to the main page await Page.GotoAsync(\u0026#34;http://localhost:5165/\u0026#34;); // search for the counter link and click it await Page.ClickAsync(\u0026#34;text=Counter\u0026#34;); // verify redirection System.Uri pageUri = new System.Uri(Page.Url); Assert.AreEqual(\u0026#34;/counter\u0026#34;, pageUri.PathAndQuery); } OK, I get more or less how it works, and I can dig into the documentation to find out more commands I can use for testing. As an example I can verify other web page\u0026rsquo;s parameters or states (Page.Title, Page.ContentAsync(), Page.IsCheckedAsync()) and test any selector. When I think about a thing I would like to check or test - it\u0026rsquo;s all there. So it boils down to searching a method in the API documentation and using it. Simple. If you want to see more tests - check the repository.\nSo I\u0026rsquo;m going to verify something less obvious. The Weather forecast page has a table, and I will prepare some tests to validate it.\nI want to test four things about the table:\nDoes it have five rows with data? Is the header\u0026rsquo;s font 16px in size? Is the header\u0026rsquo;s font bold? Are the dates in ascending order? Enter the Eval world\rTo check the information about the given page element I can for example use EvalOnSelectorAsync() / EvalOnSelectorAllAsync() methods. The difference is that the former returns one element (first found in DOM, when there are multiple matches available) or all elements that match the selector. Both functions take at minimum two arguments: selector and expression, and the type it returns: Page.EvalOnSelectorAsync\u0026lt;type\u0026gt;(selector, expression).\nI can also use a Locator() method. It returns a Locator instance instead of ElementHandle as other selector methods (like ClickAsync(), or QuerySelectorAsync()). Locator is more strict, and forces you to return only one element (except for the CountAsync() method, which can handle multiple results). Also, Locator captures how to get to the element, and ElementHandle holds the handle to the element itself.\nAs an example - to find the numbers of rows in the table on the /fetchdata page, I can write it in a few ways:\nfind a table and get the row count (including header - returns 6):\nint tableRows = await Page.EvalOnSelectorAsync\u0026lt;int\u0026gt;(\u0026#34;//table\u0026#34;, \u0026#34;tbl =\u0026gt; tbl.rows.length\u0026#34;); find a table\u0026rsquo;s tbody element and get the row count (only data rows - all methods return 5)\nint tableRows = await Page.EvalOnSelectorAsync\u0026lt;int\u0026gt;(\u0026#34;//table/tbody\u0026#34;, \u0026#34;tbody =\u0026gt; tbody.childNodes.length\u0026#34;); // or int tableRows = await Page.EvalOnSelectorAsync\u0026lt;int\u0026gt;(\u0026#34;//table/tbody\u0026#34;, \u0026#34;tbody =\u0026gt; tbody.childElementCount\u0026#34;); // or int tableRows = await Page.EvalOnSelectorAsync\u0026lt;int\u0026gt;(\u0026#34;//table/tbody\u0026#34;, \u0026#34;tbody =\u0026gt; tbody.rows.length\u0026#34;); // or int tableRows = await Page.EvalOnSelectorAllAsync\u0026lt;int\u0026gt;(\u0026#34;//table/tbody/tr\u0026#34;, \u0026#34;rows =\u0026gt; rows.length\u0026#34;); // or int tableRows = await Page.Locator(\u0026#34;//table/tbody/tr\u0026#34;).CountAsync(); A short explanation of the code above: each command looks for element(s) that match the selectors (I used XPath selectors //table and //table/body). After the selector is matched, Playwright uses the found element(s) in the expression. The syntax used above utilises lambda expressions, meaning \u0026ldquo;hey, I have something, and I will refer to that something as the thing you see on the left side of the expression, and on the right, side I will show you what to do with it\u0026rdquo;. To look a bit closer, I will take Page.EvalOnSelectorAsync\u0026lt;int\u0026gt;(\u0026quot;//table/tbody\u0026quot;, \u0026quot;el =\u0026gt; el.rows.length\u0026quot;) as an example:\nPage.EvalOnSelectorAsync\u0026lt;int\u0026gt;(\u0026#34;//table/tbody\u0026#34;, \u0026#34;el =\u0026gt; el.rows.length\u0026#34;) // selector: \u0026#34;//table/tbody\u0026#34; // expression: \u0026#34;el =\u0026gt; el.rows.length\u0026#34; The expression means:\nthe selector found some object, I will call it el: \u0026ldquo;el =\u0026gt; \u0026hellip;\u0026rdquo; take this element el and get me its rows.length property: \u0026ldquo;\u0026hellip; =\u0026gt; el.rows.length\u0026rdquo; To find the dates, I use Locator and InnerTextAsync() to show the syntax. The XPath expressions find the rows, and DateTime operations set the formatted dates as the expected values. Not beautiful, but it works.\n[Test] public async Task TableDatesStartTomorrowAscending() { // call to the `/fetchdata` page await Page.GotoAsync(\u0026#34;http://localhost:5165/fetchdata\u0026#34;); // get number of table rows string date1 = await Page.Locator(\u0026#34;//table/tbody/tr[1]/td[1]\u0026#34;).InnerTextAsync(); string date2 = await Page.Locator(\u0026#34;//table/tbody/tr[2]/td[1]\u0026#34;).InnerTextAsync(); string date3 = await Page.Locator(\u0026#34;//table/tbody/tr[3]/td[1]\u0026#34;).InnerTextAsync(); string date4 = await Page.Locator(\u0026#34;//table/tbody/tr[4]/td[1]\u0026#34;).InnerTextAsync(); string date5 = await Page.Locator(\u0026#34;//table/tbody/tr[5]/td[1]\u0026#34;).InnerTextAsync(); // assertion for the value Assert.AreEqual(DateTime.Now.AddDays(1).ToString(\u0026#34;dd.MM.yyyy\u0026#34;), date1); Assert.AreEqual(DateTime.Now.AddDays(2).ToString(\u0026#34;dd.MM.yyyy\u0026#34;), date2); Assert.AreEqual(DateTime.Now.AddDays(3).ToString(\u0026#34;dd.MM.yyyy\u0026#34;), date3); Assert.AreEqual(DateTime.Now.AddDays(4).ToString(\u0026#34;dd.MM.yyyy\u0026#34;), date4); Assert.AreEqual(DateTime.Now.AddDays(5).ToString(\u0026#34;dd.MM.yyyy\u0026#34;), date5); } The last example is a test whether the header\u0026rsquo;s font is 16px. For this I modified an example from the official documentation and used window.getComputedStyle(cell).fontSize expression:\n[Test] public async Task TableHeaderHas16pxFont() { // call to the `/fetchdata` page await Page.GotoAsync(this.pageUrl); // get number of table rows string fontSize = await Page.EvalOnSelectorAsync\u0026lt;string\u0026gt;(\u0026#34;//table/thead/tr/th\u0026#34;, \u0026#34;cell =\u0026gt; window.getComputedStyle(cell).fontSize\u0026#34;); // assertion for the value Assert.AreEqual(\u0026#34;16px\u0026#34;, fontSize); } Not sure why, but sometimes the tests using window.getComputedStyle() failed when run for the first time. Rerunning them made the tests pass, but it looks unstable and is something to investigate later.\nA bit more about expressions\rI can tell which element I want to find, and I can use a few engines (text=... for searching by text, / or XPath= for XPath, css=... for CSS). I know I need to use lambda expressions for the expression part. But how can I check what I can test using the expression? How do I get these rows.length, childElementCount, or window.getComputedStyle(cell).fontSize?\nThe first option is to use the developer tools in the browser. I click the right mouse button on a given element and select Inspect. I confirm whether the console shows the correct part and then switch to the Properties tab.\nAbove, you see the console from the Brave browser. It shows the path to the \u0026lt;tbody\u0026gt; element, and I can see that the rows HTMLCollection has the length property. The Firefox browser also helps to find the elements using the Search option in the Inspector. Below you see the XPath expression for the \u0026lt;tr\u0026gt; elements in the \u0026lt;tbody\u0026gt;. It also shows 5 found occurrences.\nNote: you can also search within the Brave\u0026rsquo;s (and Edge\u0026rsquo;s/Chrome\u0026rsquo;s) tools by pressing Ctrl + F in the Elements tab\nThe second option is to go to the official MDN documentation (or other web API documentation of your choice). I use MDN for:\ninvestigating Document Object Model searching for JavaScript CSS properties reference Setting the URL once\rI\u0026rsquo;m sure you have noticed that I repeat the URL in every test. I don\u0026rsquo;t like it either, and it\u0026rsquo;s now the time to clean it up a bit. I create an additional Init() method for each test class and decorate it with the [Setup] attribute, so it will be called before each test. Inside, I set the pageUrl property, which is then used in the Page.GotoAsync() method:\nprivate string pageUrl = \u0026#34;\u0026#34;; [SetUp] public void Init() { pageUrl = \u0026#34;http://localhost:5165\u0026#34;; } [Test] public async Task PageTitleIsIndex() { // call to the `/counter` page await Page.GotoAsync(this.pageUrl); // get page title string title = await Page.TitleAsync(); // assertion for the value Assert.AreEqual(\u0026#34;Index\u0026#34;, title); } For now, I will leave like that. I don\u0026rsquo;t like the website\u0026rsquo;s hardcoded address and will fix it in the next part using external configuration.\nSidenotes on the header screenshot:\nI see the Run Test | Debug Test elements in VSCode only when I open the BlazorApp.Tests folder directly; it does not show when I open the parent folder or add the folder in the workspace (maybe some configuration is invalid/missing?); they show because I have C# VSCode extension from Microsoft installed I use .NET Core Test Explorer VSCode extension by Jun Han (more about it in the next part) I use Error Lens VSCode extension by Alexander Short summary\rPlaywright has a very clean API, and it gets swift to start using it Locator API looks more intuitive than API returning ElementHandle; I have to investigate and use a bit more When using window.getComputedStyle(cell) in the EvaluateAsync or EvalOnSelectorAsync I sometimes got errors (like: test returned \u0026lt;String.Empty\u0026gt;); after running the tests again they passed ",
"ref": "/2021/11/28/end-to-end-testing-with-playwright-part-ii/"
},{
"title": "End-to-End testing with Playwright, part I",
"date": "",
"description": "",
"body": "I started working with Playwright by accident. YouTube has shown me a recommendation - a short film by Nick Chapsas (yt | t) about testing user interfaces with SpecFlow and Playwright. While I admire the SpecFlow, BDD and Gherkin ideas - I still haven\u0026rsquo;t convinced myself to use them. But Playwright + C# have drawn my attention. Then I found a webinar recording with Andrey Lushnikov, and I was sold on Playwright.\nThe tutorials and blog posts about Playwright I\u0026rsquo;ve found focus on installing it, running code generator, and when it comes to testing - copy/paste the generated code, decorate it with [Test] attributes - and that\u0026rsquo;s it. Of course, it\u0026rsquo;s not that simplified, but overall that\u0026rsquo;s my impression. Most information I found and used is by reading the official documentation and the Playwright\u0026rsquo;s source code. This blog post is for anyone who wants to start writing clean tests using Playwright.\nPreparations\rAs we use Blazor in one of the applications I\u0026rsquo;m working with, I will use an example Blazor project to build end-to-end tests. You can develop your first Blazor app in minutes, and that\u0026rsquo;s what I want for a start. For future reference: I used VSCode 1.62.2 for development, .NET 6.0.100. (SDK), and Playwright 1.16.2.\nSo first - run the following commands in the shell of your choice (I use pwsh):\ndotnet new blazorserver -o BlazorApp --no-https cd BlazorApp dotnet new gitignore dotnet watch This should start the browser, and you should see a default Blazor app:\ndotnet new blazorserver -o BlazorApp --no-https creates a new application in the BlazorApp folder (-o) and disables HTTPS (--no-https), as I don\u0026rsquo;t need it in this example. dotnet new gitignore adds a default .gitignore file for .NET. dotnet watch restores dependencies, builds the project and waits for the code changes, to immediately reload the browser. The hot reload is not required - I could use just dotnet run, but I copied the earlier tutorial commands. Also - dotnet run does not run the browser, and you have to do it manually. And to run it manually, you have to know the exact address. You will find it in the \u0026lt;BlazorAppFolder\u0026gt;/Properties/launchSettings.json file in the properties section.\nNext - I will create a test project and install Playwright.\ncd .. dotnet new nunit -n BlazorApp.Tests cd BlazorApp.Tests dotnet tool install --global Microsoft.Playwright.CLI dotnet add package Microsoft.Playwright.NUnit dotnet new gitignore dotnet build playwright install Step by step:\nI go out of the BlazorApp directory, as I want my tests to be at the same level as my application I create a new project using NUnit template I want to configure the project, so I go into the directory I install Playwright globally I install the Playwright test adapter for NUnit I add the default .gitignore file I build the project (you will get the message Please make sure Playwright is installed and built prior to using Playwright tool if you don\u0026rsquo;t build the project first) I install the Playwright library and browsers (it can take some time) Now I can write the first UI test for Blazor.\nWriting first test - step by step\rThe BlazorApp.Tests contains the sample UnitTest1.cs file. I skip it for now, and I add my own empty file to start from scratch using command code MainPageTests.cs (in BlazorApp.Tests folder).\nAs I use .NET 6, I will start with declaring the namespace (see: https://docs.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-10#file-scoped-namespace-declaration and https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/namespace), and the test class:\nnamespace BlazorApp.Tests; class MainPageTests : PageTest { } Pay attention to : PageTest - it\u0026rsquo;s not just a signal we are building on the Playwright library. We also get all the required setup - under the covers, Playwright spins up a new browser and opens an empty page, so we can already start testing the web pages. It means - you don\u0026rsquo;t need to write the following code to wire everything up.\nusing var playwright = await Playwright.CreateAsync(); await using var browser = await playwright.Chromium.LaunchAsync(); var page = await browser.NewPageAsync(); You can write this code, but why bother?\nSide note: the Playwright\u0026rsquo;s object hierarchy is Playwright \u0026gt; Browser \u0026gt; Context \u0026gt; Page. You can inherit from all the hierarchy, but for the beginning, it\u0026rsquo;s good enough to just always start with a new empty browser page. To read more - take a look at the base NUnit classes documentation.\nTo make the compiler aware of what the PageTest is, I add the using directive:\nusing Microsoft.Playwright.NUnit; namespace BlazorApp.Tests; class MainPageTests : PageTest { } When writing the Playwright tests - all test methods must be of type public async Task. It implies that we use the System.Threading.Tasks namespace, so I add it to the using section and write my first test skeleton:\nusing Microsoft.Playwright.NUnit; using System.Threading.Tasks; namespace BlazorApp.Tests; class MainPageTests : PageTest { public async Task CounterStartsWithZero() { } } I want to check whether the counter on the /counter page has an initial value of zero. When you visit the /counter page, you see it has, but we are writing the test to be sure.\nThe test will contain three elements:\ncall the /counter page search for the counter value (to make things simple find the \u0026lt;p/\u0026gt; element, as it\u0026rsquo;s the only paragraph on the page) assertion for the value As the first test should always fail, I will check whether the counter value equals 42:\nusing Microsoft.Playwright.NUnit; using System.Threading.Tasks; namespace BlazorApp.Tests; class MainPageTests : PageTest { public async Task CounterStartsWithZero() { // call to the `/counter` page await Page.GotoAsync(\u0026#34;http://localhost:5165/counter\u0026#34;); // search for the counter value var content = await Page.TextContentAsync(\u0026#34;p\u0026#34;); // assertion for the value Assert.Equals(\u0026#34;Current count: 42\u0026#34;, content); } } Last two things: Assert is a part of the NUnit library, so I need to add a namespace for it (or use the full name with the namespace, but I don\u0026rsquo;t want to). Also - NUnit does not know that my method is a test, so I need to decorate it with the [Test] attribute:\nusing Microsoft.Playwright.NUnit; using NUnit.Framework; using System.Threading.Tasks; namespace BlazorApp.Tests; class MainPageTests : PageTest { [Test] public async Task CounterStartsWithZero() { // call to the `/counter` page await Page.GotoAsync(\u0026#34;http://localhost:5165/counter\u0026#34;); // search for the counter value var content = await Page.TextContentAsync(\u0026#34;p\u0026#34;); // assertion for the value Assert.AreEqual(\u0026#34;Current count: 42\u0026#34;, content); } } Running the tests\rTo run all the tests, use the dotnet test command. You should get an error:\nExpected string length 17 but was 16. Strings differ at index 15. Expected: \u0026#34;Current count: 42\u0026#34; But was: \u0026#34;Current count: 0\u0026#34; --------------------------^ That\u0026rsquo;s great - the test was supposed to fail, so to fix it, change the assertion. The full MainPageTests.cs file should look following:\nusing Microsoft.Playwright.NUnit; using NUnit.Framework; using System.Threading.Tasks; namespace BlazorApp.Tests; class MainPageTests : PageTest { [Test] public async Task CounterStartsWithZero() { // call to the `/counter` page await Page.GotoAsync(\u0026#34;http://localhost:5165/counter\u0026#34;); // search for the counter value var content = await Page.TextContentAsync(\u0026#34;p\u0026#34;); // assertion for the value Assert.AreEqual(\u0026#34;Current count: 0\u0026#34;, content); } } Congratulations. Your first Playwright test passed.\nThings to remember and pay attention to\rAlways use await when calling the Playwright methods (e.g. await Page.GotoAsync(\u0026quot;http://localhost:5165/counter\u0026quot;);) If the test class inherits from the PageTest - you have a Page object ready for the service (the first letter is capital P) Be careful - many tutorials build on the unnecessary browser initiation and use the page object (all small letters) - you will get errors if you just copy/paste parts of their code Have fun!\n",
"ref": "/2021/11/13/end-to-end-testing-with-playwright-part-i/"
},{
"title": "Read Data From Azure Blob Storage to Azure SQL Database",
"date": "",
"description": "",
"body": "In one of the projects, we store source files as JSON data on Azure Blob Storage. These files must be loaded into Azure SQL Database. We use a .NET code to perform this, but how could I load the files directly into Azure SQL Database?\nThe environment\rNothing extraordinary: a default storage account bartekr01stacc configured with local redundancy (LRS) + SQL Database VariousExamples located within bartekr01 Azure SQL Server. The storage account has json-data-container container (set as private) and json-data-share file share.\nBoth locations contain the same file with 3 records - a name, a value and an array of names.\n[ { \u0026#34;name\u0026#34; : \u0026#34;Name1\u0026#34;, \u0026#34;value\u0026#34; : 10.5, \u0026#34;attributes\u0026#34; : [\u0026#34;a1\u0026#34;, \u0026#34;a2\u0026#34;] }, { \u0026#34;name\u0026#34; : \u0026#34;Name2\u0026#34;, \u0026#34;value\u0026#34; : 20.5, \u0026#34;attributes\u0026#34; : [\u0026#34;a1\u0026#34;, \u0026#34;a3\u0026#34;] }, { \u0026#34;name\u0026#34; : \u0026#34;Name3\u0026#34;, \u0026#34;value\u0026#34; : 30.5, \u0026#34;attributes\u0026#34; : [\u0026#34;a2\u0026#34;, \u0026#34;a4\u0026#34;] } ] The steps to import data\rFor future reference - I found this example on GitHub\nCreate database master key (if none exists) Create a credential for access to the container Create data source - a pointer to the blob storage container Load data Profit! The first two steps are optional if access to the blob storage container is public.\nTo verify if I already have a Database Master Key (DMK), database scoped credential and data source, I can use system views:\nSELECT * FROM sys.symmetric_keys; SELECT * FROM sys.database_scoped_credentials; SELECT * FROM sys.external_data_sources; Import\rThe data is stored in a private container, so I will use a stored credential. As for now (June 2021), I can use only SAS (Shared Access Signature), so I create one in Azure Portal. Read access is enough.\nOther options - using AZ CLI (I use PowerShell as the default shell, hence $ in front of a variable name)\n$AZURE_STORAGE_CONNECTION_STRING = \u0026#39;DefaultEndpointsProtocol=https;AccountName=bartekr01stacc;AccountKey=S(...);EndpointSuffix=core.windows.net\u0026#39; az storage container generate-sas --account-name bartekr01stacc --name json-data-container --https-only --permissions r --expiry 2021-06-19T23:59:59Z --connection-string $AZURE_STORAGE_CONNECTION_STRING and Az PowerShell module\n$ACCESSKEY = \u0026#39;S(...)\u0026#39; $c = New-AzStorageContext -StorageAccountName bartekr01stacc -StorageAccountKey $ACCESSKEY New-AzStorageContainerSASToken -Permission r -Protocol HttpsOnly -ExpiryTime 2021-06-19T23:59:59Z -Context $c -Name json-data-container Having the SAS the rest is almost straightforward.\n-- we need a database master key for encryption, create one CREATE MASTER KEY ENCRYPTION BY PASSWORD = \u0026#39;\u0026lt;password, should be strong\u0026gt;\u0026#39;; -- crete a credential -- NOTE: DO NOT PUT FIRST CHARACTER \u0026#39;?\u0026#39;\u0026#39; IN SECRET!!! CREATE DATABASE SCOPED CREDENTIAL bartekr01staccCredential WITH IDENTITY = \u0026#39;SHARED ACCESS SIGNATURE\u0026#39;, SECRET = \u0026#39;\u0026lt;copy/paste generated secret (without initial ? - if exists)\u0026gt;\u0026#39; ; -- point to the destination (and store it as data source) CREATE EXTERNAL DATA SOURCE bartekr01stacc WITH ( TYPE = BLOB_STORAGE, LOCATION = \u0026#39;https://bartekr01stacc.blob.core.windows.net/json-data-container\u0026#39;, CREDENTIAL = bartekr01staccCredential --\u0026gt; CREDENTIAL is not required if a blob storage is public! ); -- https://bartekr01stacc.blob.core.windows.net/json-data-container/sample-data.json SELECT * FROM OPENROWSET ( BULK \u0026#39;sample-data.json\u0026#39;, DATA_SOURCE = \u0026#39;bartekr01stacc\u0026#39;, SINGLE_CLOB ) x CROSS APPLY OPENJSON(x.BulkColumn) WITH ( name CHAR(5), value NUMERIC(3, 1), attributes NVARCHAR(MAX) AS JSON ) j Observations:\nwhen generating SAS from the Azure Portal, pay attention to the start date - you might get access errors like I did when I incidentally switched the start time zone to UTC instead of my local time (I would have to wait an hour for it to start working) this problem should not occur using AZ CLI or AZ module, as it assumes the SAS generation datetime as the start date (if not provided) when using OPENJSON - when I want the array to be available for further processing, I have to use AS JSON, and it implies using NVARCHAR(MAX) as the data type if you update the credential (using ALTER DATABASE CREDENTIAL), you don\u0026rsquo;t have to recreate the external data source. Result:\n",
"ref": "/2021/06/13/read-data-from-azure-blob-storage-to-azure-sql-database/"
},{
"title": "Build Yourself a Web App in Azure",
"date": "",
"description": "",
"body": " NOTE: Recently (around 19th February 2021), the basic web app architecture got updated. The new version shows more possibilities, as it uses Azure Key Vault, Azure Monitor and introduces deployment slots.\nWhen you want to build an Azure-hosted web application that will use Azure SQL Database - it looks easy. You take a Basic Web App reference architecture, and you are set. It even has the ARM Template to deploy to have a working environment in just a few clicks.\nBut then, you might start thinking. \u0026ldquo;How secure it is for the enterprise application\u0026rdquo;? \u0026ldquo;What else should I learn to check my options\u0026rdquo;? And that\u0026rsquo;s how this series started. I wanted to build a secure environment for the web application hosted in Azure. It\u0026rsquo;s also a perfect opportunity to learn a lot about Azure architecture and components.\nBuilding a secure web app in Azure\r\u0026hellip; without Application Service Environment (ASE). I will start with the basic web app reference architecture, and then I will add some elements to make it more secure. The components I want to use are:\nAzure App Service Web Application (in App Service Plan) Application Gateway Azure SQL Database Azure Key Vault Azure Monitor Azure Blob Storage I will explore the options on how to connect between them, how to use SSL, networking options (service endpoints, private endpoints, subnet delegation), managed identities, RBAC, \u0026hellip;\nThe architecture I want to use is almost a new basic web app architecture (see below), plus the Application Gateway before the Web App.\nQuite a journey ahead.\nimages: Azure Documentation - Azure Architecture Center - Basic Web Application\n",
"ref": "/2021/02/28/build-yourself-a-web-app-in-azure/"
},{
"title": "Testing Utterances Comments",
"date": "",
"description": "",
"body": "When I switched to Hugo, I knew my new blog would not have the comments enabled. I planned it \u0026ldquo;for later\u0026rdquo;, as I didn\u0026rsquo;t want to use Disqus (available by default in Hugo).\nWhen I chose the hugo-future-imperfect-slim theme, I saw I could integrate Staticman, which creates the comments as the Pull Request in GitHub. Brilliant idea, but it involves some additional setup (either authorising Staticman GitHub account or hosting an own API instance). That\u0026rsquo;s why I skipped it when I migrated to Hugo. I decided to go back to the comments once I am more prepared for it.\nA month later\rI started reading more about Staticman integration, nested comments, using an Azure App Service or converting to Azure Function (excellent idea) and so on. But - after some internal tests, I was not amazed. Even more - I didn\u0026rsquo;t like it. Yes, it was doing what it was supposed to do, but I wanted a Wow effect. Something easy to set up, looking nice and something that makes me feel comfortable - \u0026ldquo;yes, this one fits perfectly\u0026rdquo;. So I started looking for the alternatives and this time I found something - use GitHub Issues system as the comments.\nThe pros - I want something out-of-the-box, what works, what would be easy to use by the technical people and give me no headaches. The cons - tight GitHub integration, to comment, users must use a GitHub account.\nI found two implementations:\nutterances custom API calls The first is something I decided to try. The second is something to have in mind for the future.\nThe setup\rThe documentation is clear:\nInstall the app in the repository used for the comments (must be public) Decide what the issue name will look like (I chose the full post URL) Optionally set the (already existing) label for the issue (I use blog-comment) Optionally select a theme All those steps fill the template to paste in the comments page. In my case - I have overwritten the comments.html used in the theme.\n\u0026lt;script src=\u0026#34;https://utteranc.es/client.js\u0026#34; repo=\u0026#34;[ENTER REPO HERE]\u0026#34; issue-term=\u0026#34;pathname\u0026#34; theme=\u0026#34;github-light\u0026#34; crossorigin=\u0026#34;anonymous\u0026#34; async\u0026gt; \u0026lt;/script\u0026gt; My setup:\n\u0026lt;script src=\u0026#34;https://utteranc.es/client.js\u0026#34; repo=\u0026#34;BartekR/bartekr.github.io\u0026#34; issue-term=\u0026#34;url\u0026#34; label=\u0026#34;blog-comment\u0026#34; theme=\u0026#34;github-light\u0026#34; crossorigin=\u0026#34;anonymous\u0026#34; async\u0026gt; \u0026lt;/script\u0026gt; For now - it\u0026rsquo;s to check whether it\u0026rsquo;s useful, maintainable and so on. I still wonder if I should port the comments from the previous WordPress blog (I think I should) and how to do it. I\u0026rsquo;ll wait to see how the situation develops.\n",
"ref": "/2021/01/17/testing-utterances-comments/"
},{
"title": "Advent of Code 2020",
"date": "",
"description": "",
"body": "This year I took part in \u0026ldquo;Advent of Code\u0026rdquo; - a challenge with the series of puzzles to solve using any programming language. I tried two years ago but resigned after the first day. This year was different, as we set the internal leaderboards, and I had a motivation to test my skills. My initial idea was to use only the PowerShell, but after some talks, I thought \u0026ldquo;maybe it\u0026rsquo;s a good moment to start learning go lang\u0026rdquo;?\nThe answer was: no.\nI started solving puzzles with PowerShell and learned go along. But as puzzles began to be more difficult, I focused only on PowerShell. go has to wait a bit.\nI collected 28 stars out of 50 possible. Which I think is not that bad result. (You see 29 on the picture in the header because I started searching how others solved the problems and implemented first overdue task).\nBut - to the point. Before AoC I thought I know and understand PowerShell pretty well. During the AoC, I had to revisit it. Some tasks I usually do without thinking started to cause troubles when solving the puzzles. Like: hashtables didn\u0026rsquo;t want to cooperate with numbers, or: read the file; the first part is X, the second is Y. I was too much used to my default set of techniques, and it was sometimes hard to think outside the box.\nThis post summarises what I learned (or reminded) during the AoC, sometimes with links for the broader explanation.\n1. Reading files\rNot much new stuff. As usual - use Get-Content, but become more familiar with -Raw parameter to read all data as one piece of text instead a string[] array. The -Raw parameter allows easy splitting data in the file that has to be separated via empty line. Like:\nline1;val1;val2 line2;val1;val2 part2:val1,val2 part2:val3:val4 part3|abc part3|def To separate the above code into three separate elements, use \u0026ldquo;-split \u0026ldquo;`r\u0026rsquo;n\u0026rsquo;r\u0026rsquo;n\u0026rdquo; \u0026ldquo;. Remember to use double quotes.\n$customDeclarations = Get-Content \u0026#34;$PSScriptRoot\\CustomDeclarations.txt\u0026#34; -Raw $cd = $customDeclarations -split \u0026#34;`r`n`r`n\u0026#34; 2. Named regex\rWhen using -match we get $Matches array with the numbered matches. Say we have these lines (taken from my Day02 puzzle input):\n3-7 r: mxvlzcjrsqst 1-3 c: ccpc 6-12 f: mqcccdhxfbrhfpf The task was to check if the letter appears between X and Y number of times in the password. Looking at the first line:\n3-7 min/max appearances r letter mxvlzcjrsqst password We can read data using regex like (\\d+)-(\\d+) ([a-z]): ([a-z]+), but we have to remember the indices of the groups, like $Matches[3] means the third group (the letter a-z).\n$passwords = Get-Content .\\Passwords.txt -First 3 $passwords | ForEach-Object { $_ -match \u0026#39;(\\d+)-(\\d+) ([a-z]): ([a-z]+)\u0026#39; | Out-Null $Matches } \u0026lt;# Name Value ---- ----- 4 mxvlzcjrsqst 3 r 2 7 1 3 0 3-7 r: mxvlzcjrsqst 4 ccpc 3 c 2 3 1 1 0 1-3 c: ccpc 4 mqcccdhxfbrhfpf 3 f 2 12 1 6 0 6-12 f: mqcccdhxfbrhfpf #\u0026gt; Instead, we can use named references in regexes using ?\u0026lt;name\u0026gt; construction before the pattern, like:\n(?\u0026lt;minLength\u0026gt;\\d+)-(?\u0026lt;maxLength\u0026gt;\\d+) (?\u0026lt;letter\u0026gt;[a-z]): (?\u0026lt;password\u0026gt;[a-z]+)\n$passwords = Get-Content .\\Passwords.txt -First 3 $passwords | ForEach-Object { $_ -match \u0026#39;(?\u0026lt;minLength\u0026gt;\\d+)-(?\u0026lt;maxLength\u0026gt;\\d+) (?\u0026lt;letter\u0026gt;[a-z]): (?\u0026lt;password\u0026gt;[a-z]+)\u0026#39; | Out-Null $Matches } \u0026lt;# Name Value ---- ----- minLength 3 maxLength 7 letter r password mxvlzcjrsqst 0 3-7 r: mxvlzcjrsqst minLength 1 maxLength 3 letter c password ccpc 0 1-3 c: ccpc minLength 6 maxLength 12 letter f password mqcccdhxfbrhfpf 0 6-12 f: mqcccdhxfbrhfpf #\u0026gt; The Out-Null prevents the -match result to appear on the screen (True or False)\n3. Join lines\rHow to join a few lines in one? Use -replace. Again - remember about double quotes.\n$lines = \u0026#39; hgt:176cm iyr:2013 hcl:#fffffd ecl:amb byr:2000 eyr:2034 cid:89 pid:934693255 \u0026#39; $lines -replace \u0026#34;`n\u0026#34;, \u0026#39;;\u0026#39; # or: $lines -replace \u0026#34;`r`n\u0026#34;, \u0026#39;;\u0026#39; # hgt:176cm;iyr:2013;hcl:#fffffd ecl:amb;byr:2000;eyr:2034;cid:89 pid:934693255 4. Sort array of strings as numbers\rAoC had almost all the puzzle inputs in the separate files. So I created the input files for each day and read it using Get-Content. Some files contained a series of numbers, and when we read data from a file, we get all as a string. So when I wanted to sort the array I read from the file, I got unexpected results.\n# simulating array read from file $a = [string[]]@(2, 3, 1, 11, 15, 21) $a | Sort-Object \u0026lt;# 1 11 15 2 21 3 #\u0026gt; PowerShell is not aware that those strings are numbers, so it orders them as strings. To sort as the number take a look in the documentation and use ScriptBlock as the -Parameter:\n# simulating array read from file $a = [string[]]@(2, 3, 1, 11, 15, 21) $a | Sort-Object { [int]$_ } \u0026lt;# 1 2 3 11 15 21 #\u0026gt; The Stack Overflow answer has a bit more about it and led me to the documentation.\n5. Expanding arrays\rIt\u0026rsquo;s not a PowerShell trick or feature. Day 11 had a calculation of seats, and one of the tricks was getting info around the corners and edges. Like on a chequerboard - you have 64 squares. Each of them - excluding the edges - have 8 adjacent fields. The corner has three, and the edge has five. To check all the fields, you have to consider edges and corners as a different case. And it adds an overhead to the code.\nIt\u0026rsquo;s easier to add a \u0026ldquo;border\u0026rdquo; to the lattice (again: think chequerboard), and analyse the original data. Like this:\n# original, 10 x 10 L.LL.LL.LL LLLLLLL.LL L.L.L..L.. LLLL.LL.LL L.LL.LL.LL L.LLLLL.LL ..L.L..... LLLLLLLLLL L.LLLLLL.L L.LLLLL.LL # with border (using dots), 12 x 12 ............ .L.LL.LL.LL. .LLLLLLL.LL. .L.L.L..L... .LLLL.LL.LL. .L.LL.LL.LL. .L.LLLLL.LL. ...L.L...... .LLLLLLLLLL. .L.LLLLLL.L. .L.LLLLL.LL. ............ Now my code will look clearer as I don\u0026rsquo;t use additional ifs or switches.\nTo add the border, I used this code:\n# $seats0 is the original lattice # add top and bottom border, the same length as the original row $seats = @(\u0026#39;.\u0026#39; * $columns) + $seats0 + @(\u0026#39;.\u0026#39; * $columns) # add left and right border to each row (including boundaries) for($i = 0; $i -le $rows + 1; $i ++) { $seats[$i] = \u0026#39;.\u0026#39; + $seats[$i] + \u0026#39;.\u0026#39; } 6. Hashtables and keys\rAlways be aware of the datatypes. A standard case with numbers:\n$n = @(65, 66, 67, 68, 69) $h = @{} $n | ForEach-Object {$h[$_] = [char]$_} $h \u0026lt;# Name Value ---- ----- 69 E 68 D 67 C 66 B 65 A #\u0026gt; $h[69] # E $h.69 # E $h.\u0026#39;69\u0026#39; #(nothing) But with the array of numbers as strings:\n$n1 = @(\u0026#39;65\u0026#39;, \u0026#39;66\u0026#39;, \u0026#39;67\u0026#39;, \u0026#39;68\u0026#39;, \u0026#39;69\u0026#39;) $h1 = @{} $n1 | ForEach-Object {$h1[$_] = [char][int]$_} \u0026lt;# Name Value ---- ----- 67 C 66 B 65 A 68 D 69 E #\u0026gt; $h1[67] # (nothing) $h1.67 # (nothing) $h1.\u0026#39;67\u0026#39; # C $h1[\u0026#39;67\u0026#39;] # C Looks the same, but I had to use a string key, not a numeric, because of a different type.\n7. Pre-fill an array with values\rSometimes I wanted to have an array with prefilled values. Like 10 values of 0:\n$a = @(0, 0, 0, 0, 0, 0, 0, 0, 0, 0) What if I know only the number of elements in an array (as a variable)? The fastest version:\n$n = 57 $a = ,0 * $n Also worth reading - an article on SimpleTalk.\n8. Convert a number to a binary string\rUse [Convert]::ToString($number, 2). Works with other bases too.\n[Convert]::ToString(15, 2) # 1111 [Convert]::ToString(15, 8) # 17 [Convert]::ToString(15, 16) # f 9. Pad numbers with zeroes\rI want to prefix my number(s) with leading zeroes (like: 000000015), and keep the lengths consistent - all prefixed numbers should have a length of 10. I used it to visualise a bitmask but works for every number.\nUse formatting: '{0:d10}' -f $number; important: it has to be a number:\n$a = \u0026#39;15\u0026#39; \u0026#39;{0:d10}\u0026#39; -f $a # 15 \u0026#39;{0:d10}\u0026#39; -f [int]$a # 0000000015 Use $number.PadLeft(10, '0'); this time $number has to be a string:\n\u0026#39;15\u0026#39;.PadLeft(10, \u0026#39;0\u0026#39;) # 0000000015 $b = 15 $b.PadLeft(10, \u0026#39;0\u0026#39;) # InvalidOperation: Method invocation failed because [System.Int32] does not contain a method named \u0026#39;PadLeft\u0026#39;. 15.PadLeft(10, \u0026#39;0\u0026#39;) # 15.PadLeft: The term \u0026#39;15.PadLeft\u0026#39; is not recognized as a name of a cmdlet, function, script file, or executable program. # Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (15).PadLeft(10, \u0026#39;0\u0026#39;) #InvalidOperation: Method invocation failed because [System.Int32] does not contain a method named \u0026#39;PadLeft\u0026#39;. Use ToString('0000000000'); works only with numbers:\n15.ToString(\u0026#39;0000000000\u0026#39;) # 15.ToString: The term \u0026#39;15.ToString\u0026#39; is not recognized as a name of a cmdlet, function, script file, or executable program. # Check the spelling of the name, or if a path was included, verify that the path is correct and try again. (15).ToString(\u0026#39;0000000000\u0026#39;) # 0000000015 \u0026#39;15\u0026#39;.ToString(\u0026#39;0000000000\u0026#39;) # MethodException: Cannot find an overload for \u0026#34;ToString\u0026#34; and the argument count: \u0026#34;1\u0026#34;. The list may expand in the future, as I plan to finish Advent of Code 2020, hopefully before AoC 2021.\n",
"ref": "/2020/12/31/advent-of-code-2020/"
},{
"title": "Migrating to GitHub Pages with Hugo",
"date": "",
"description": "",
"body": "Decision was made. And announced. And deadline set - so I will look stupid if I don\u0026rsquo;t do it. Something that was a strong decision started weakening. But setting a goal, and telling your friends about it is a strong motivator. Not to mention the WordPress annoyances during the transition period.\nFirst hurdles\rI chose Hugo by accident. At first I wanted to use Jekyll - I worked with Ruby long time ago and thought it might be a good idea to return to those times and refresh knowledge a bit. So I started reading how to migrate my WordPress blog to Jekyll. And I found a comment about Hugo - that it\u0026rsquo;s a lot faster. Pff. \u0026ldquo;It\u0026rsquo;s written in Go\u0026rdquo;. Say no more! Go is very high on my list \u0026ldquo;learn to use it in the next two years\u0026rdquo; - Hugo then!\nI started reading how Hugo works, how the themes are used, how is it built, how to use shortcodes, how to \u0026hellip; Wow. It was a lot to read. And I thought \u0026ldquo;well, maybe it was not as great idea as I thought first\u0026rdquo;? Yeah. I see you understand. But I wanted to do it, so I started following some tutorials, read more about the transition, chose few themes and tested it.\nOne thing I have to say about the themes: they are the big pain in the ass.\nAs I tested one theme and found some things i did not like - I tested another theme. And suddenly all stopped working. I read the docs, analysed source code and examples, adjusted - tadam! It works. But I\u0026rsquo;m not 100% convinced. Third theme - again, a lot of digging in the code, changing the settings, front matter (the page metadata), different custom shortcodes, \u0026hellip; F*ck!\nBut then I found a migration guide from Yeo Kheng Meng. Nice, easy, not 100% bulletproof but manageable. And the theme looks nice. And it finally clicked.\nSo I followed the steps from the tutorial\nStep 1. Export\rExport the XML content of the blog. Go to Tools \u0026gt; Export / Choose what to export: All content.\nStep 2. Conversion to Markdown\rI converted the posts to the Markdown format using wordpress-export-to-markdown by lonekorean and blog2md by palaniraja.\nWhy two conversions? The first easily converts to Markdown, can save in year/month/name hierarchy and download all images. It also has some issues, like not extracting categories or tags. Also it did not extract Pages - only Posts. I used the second to get the taxonomy and the Pages content. Also - each template I tried had a different set of front matter tags. I had to review and prepare them on my own.\nI had to also find a way how to handle centering images, or make them float left or right. I found a way using CSS and adding #something in image\u0026rsquo;s src attribute.\nStep 3. Fixing the Markdown\rThe tools convert the WordPress content to Markdown format to a certain degree. I had to review and fix some content, but as I wrote less post than I expected it did not take long to re-read all of them and apply the fixes:\nescaped underscore and star - every time I\u0026rsquo;ve seen \\_ in place of _, and \\* instead of * flawed source code snippets - not always the \u0026lt;pre\u0026gt;...\u0026lt;/pre\u0026gt; section was transformed correctly; but it was an easy fix (yet, required careful analysis where it starts and ends) image links always targeted the WordPress blog addres - removed the blog url tables had to be prepared form scratch - the data was converted to separate rows Step 4. Fix permalinks\rMy blog used YYYY/MM/DD/slug format for posts - like https://blog.bartekr.net/2020/09/24/using-the-system-oauth-token-in-azure-devops/. The conversion and the theme by default create the post/YYYY/MM/slug folder structure, like https://blog.bartekr.net/post/2020/11/automate-state-transitions-in-azure-devops/. To make it work I added the permalinks section to the config.toml:\n[permalinks] post = \u0026#34;:year/:month/:day/:title\u0026#34; And - from conversion perspective - that was basically it! Last thing to do - the comments. A lot of migration posts suggest to use Disqus, but I did not want to use an external source for the comments. So I\u0026rsquo;ve read about Staticman - something I saw in the theme configuration. And as I have only 40 approved comments - I thought I give it a try after I will finish the transition.\nStep 5. Point the domain to GitHub\rA two-step process.\nAdd CNAME record to the domain configuration Add the domain name to GitHub Pages configuration Done.\n",
"ref": "/2020/12/28/migrating-to-github-pages-with-hugo/"
},{
"title": "Migrating to GitHub Pages",
"date": "",
"description": "",
"body": "I blog with WordPress since the 17th of February 2009. At first, I used a blog-as-a-service approach, as I wanted just to start writing and see how it goes, not building all the infrastructure. I was writing for myself, to document interesting things I found during my adventures with SQL Server.\nThen, in 2017 I decided to switch to an English blog. I wanted to do the next step, and as I was helping run other WordPress-based websites, I started my own hosted blog – again using WordPress.\nAfter the next three years (and a half) I got fed up with it. All the interface changes, plugins updates ruining the websites, the editor that started to hamper instead of help in writing – I started looking for alternatives. The best, if it would support MarkDown, as I write almost all my documents and notes using .md files *).\nAnd I found GitHub Pages. I’ve read some comments, tutorials about migration, using Jekyll and all that stuff, but decided to go with Hugo.\nIt will take some time to migrate all the things to a new format. So I will do a two-step process: prepare a new blog post using Hugo, and then I will start migrating the content. Because I know if I start with the migration it will take a lot of time until I will have written a new blog post. And I already work on a few new posts.\nThere’s also an additional benefit. In June 2020 I started a new work and the new path of the “DevOps Engineer”. I don’t like this title, as for me DevOps == TeamWork, but that’s the title I got. I prefer to call myself “the one that works towards automating the stuff”, but it does not sound as fancy as “DevOps Engineer”. But to the point – I want to know better automation mechanisms behind the GitHub, so it’s also a great way to force myself to learn more about automation outside of the Azure stack.\nSo – that’s it. I wish myself luck to finish the transition until the end of 2020.\nFingers crossed.\n*) I write mostly in VSCode + Markdown Preview Enhanced plugin, but also I use Obsidian and Zettlr for my private notes. Also – for learning – I use RemNote. Not exactly markdown based, but since I started investing more time in learning how to learn - it really helps me to understand and learn better.\n",
"ref": "/2020/11/01/migrating-to-github-pages/"
},{
"title": "Using the system OAuth token in Azure DevOps",
"date": "",
"description": "",
"body": "One of the new YAML pipeline steps I prepared recently involved interaction with work items. I wanted to add the comments to the task (with the task ID extracted from some file). So, I created a PowerShell step that was executing Invoke-WebRequest (with try/catch logic, obviously), the process finished successfully, but nothing happened. I mean - the comments were not there. Uhmmmm, why?! The log analysis gave me a slight hint about what was wrong (as seen in the post header picture):\nStatusCode : 203 StatusDescription : NonAuthoritativeInformation The problem began because I wanted to adjust the code from the other pipeline that would fit my needs, but it didn\u0026rsquo;t want to work in my project. I googled for the answers, fiddled with tokens, authorization headers, setting pipeline options and variables - nothing helped. I got even more errors and problems. I had to take one step back to go two steps forward.\nTo better understand the problem I created a two brand new separate Azure DevOps projects and built two pipelines: one using the classic interface, the second using YAML. Also, I made the testing easier and I just read all the comments.\nWhy two projects? To test how it works when I call to the task in the same project and the task outside the project. The initial problem I wanted to solve was running the build pipeline in project X and add the comments to the task in the project Y. But if I wanted to understand the mechanics from the ground up and find the solution I had to begin with smaller steps.\nSo, build me a classic pipeline\rMy setup is just one PowerShell script calling the task address using Invoke-WebRequest (or Invoke-RestMethod). The script uses two parameters - project name and task id, as I will test two cases and don\u0026rsquo;t want to prepare hardcoded values for both tests.\nparam( $project = \u0026#39;CrossProjectTest01\u0026#39;, $workitem = 26 ) $uri = \u0026#34;https://dev.azure.com/bartekr/$project/_apis/wit/workItems/$workitem/comments?api-version=6.0-preview.3\u0026#34; Write-Host \u0026#34;Uri $uri\u0026#34; Write-Host \u0026#34;AccessToken: $env:SYSTEM_ACCESSTOKEN\u0026#34; $result = Invoke-WebRequest \\` -Uri $uri \\` -Method GET \\` -Headers @{ Authorization = \u0026#34;Bearer $env:SYSTEM_ACCESSTOKEN\u0026#34; } \\` -UseBasicParsing $result The most important part of the code is -Headers @{ Authorization = \u0026quot;Bearer $env:SYSTEM_ACCESSTOKEN\u0026quot; }. It involves a special system OAuth token available during the job execution. We could use a PAT (Personal Access Token) with required privileges to do the same, but why create one if we have already one at hand? Also pay attention that there is a Bearer keyword, not a Basic auth.\nUsing a default setup (create an empty classic pipeline, add steps) I get the output as described at the beginning of the post:\nUri https://dev.azure.com/bartekr/CrossProjectTest01/_apis/wit/workItems/26/comments?api-version=6.0-preview.3 AccessToken: StatusCode : 203 StatusDescription : Non-Authoritative Information The access token has no value - that explains why there is no access (203 code). To fix it - edit the pipeline, go to \u0026ldquo;Run on agent\u0026rdquo; job settings, scroll down and check the \u0026ldquo;Allow scripts to access the OAuth token\u0026rdquo; option.\nNow the job will finish as expected - the System.AccessToken is visible to the process.\nUri https://dev.azure.com/bartekr/CrossProjectTest01/_apis/wit/workItems/26/comments?api-version=6.0-preview.3 AccessToken: *** StatusCode : 200 StatusDescription : OK Content : {\u0026#34;totalCount\u0026#34;:1,\u0026#34;count\u0026#34;:1,\u0026#34;comments\u0026#34;:[{\u0026#34;workItemId\u0026#34;:26,\u0026#34;id\u0026#34;:2806326,\u0026#34;version\u0026#34;:1,\u0026#34;text\u0026#34;:\u0026#34; This worked when the pipeline and the work item were in the same project. When I set the parameters for the PowerShell script to call the task from another project (CrossProjectTest02) I got the error \u0026ldquo;project with id (\u0026hellip;) does not exist, or you do not have permission to access it\u0026rdquo;:\nUri https://dev.azure.com/bartekr/CrossProjectTest02/_apis/wit/workItems/27/comments?api-version=6.0-preview.3 AccessToken: *** Invoke-WebRequest : {\u0026#34;$id\u0026#34;:\u0026#34;1\u0026#34;,\u0026#34;innerException\u0026#34;:null,\u0026#34;message\u0026#34;:\u0026#34;VS800075: The project with id \u0026#39;vstfs:///Classification/TeamProject/8634ba98-f13c-46c4-a30c-eedc6dbeb32b\u0026#39; does not exist, or you do not have permission to access it.\u0026#34;,\u0026#34;typeName\u0026#34;:\u0026#34;Microsoft.TeamFoundation.Core.WebApi.ProjectDoesNotExistException, Microsoft.TeamFoundation.Core.WebApi\u0026#34;,\u0026#34;typeKey\u0026#34;:\u0026#34;ProjectDoesNotExistException\u0026#34;,\u0026#34;errorCode\u0026#34;:0,\u0026#34;eventId\u0026#34;:3000} It\u0026rsquo;s because I have \u0026ldquo;Limit job authorization scope to current project\u0026rdquo; both for release and non-release pipelines set to ON (see: Project settings \u0026gt; Pipelines \u0026gt; Settings)\nAfter setting the \u0026ldquo;Limit job authorization scope to current project for non-release pipelines\u0026rdquo; to OFF (in CrossProject01, as there is my pipeline) everything works. It\u0026rsquo;s enough for me to set off for non-release because I\u0026rsquo;m working on the build pipeline.\nBut\u0026hellip; but I wanted the YAML pipeline to work\rThe crucial part for the YAML pipeline is: \u0026ldquo;In YAML, you must explicitly map System.AccessToken into the pipeline using a variable.\u0026rdquo;\nsteps: - task: PowerShell@2 displayName: \u0026#39;Get comments\u0026#39; inputs: filePath: Get-WorkItemComments.ps1 env: SYSTEM_ACCESSTOKEN: $(System.AccessToken) Note to self: ALWAYS pay attention to what you write in the env section of the PowerShell task. $(Aystem.AccesToken) looks similar but does not work.\nSo, the things to remember for the future:\nIn the classic pipeline \u0026ldquo;Allow scripts to access the OAuth token\u0026rdquo; In the YAML pipeline map the $(System.AccessToken) to the variable if you want to use it in the script Disable the \u0026ldquo;Limit job authorization scope to current project\u0026rdquo; options to access different projects than the current one ",
"ref": "/2020/09/24/using-the-system-oauth-token-in-azure-devops/"
},{
"title": "Have problems with Docker Desktop for Windows Home? Yeah, me too!",
"date": "",
"description": "",
"body": "Last week finally came that day: \u0026ldquo;I will give a shot to a Docker Desktop for Windows 10 Home\u0026rdquo;. To make it possible I run a long-postponed Windows Update to version 2004 (it allows to use WSL2). The upgrade and installation went smoothly, but the docker refused to cooperate.\nWell, that was unexpected. Up to now, I worked with docker mostly on Ubuntu 18.04 and sometimes Windows Server 2016 / Windows 10 Professional. Also at home, I prepared a working environment consisting of Windows 10 Home + WSL + VirtualBox. The last one for hosting an Ubuntu virtual machine, as it\u0026rsquo;s not possible to run Docker Desktop on Windows 10 Home or run a docker daemon in WSL. I configured it with a very helpful posts from Nick Janetakis (https://nickjanetakis.com/blog/setting-up-docker-for-windows-and-wsl-to-work-flawlessly, https://nickjanetakis.com/blog/docker-tip-73-connecting-to-a-remote-docker-daemon). Now, with WSL2 I could eliminate VirtualBox from the ecosystem.\nAfter the installation and some restarts I opened the PowerShell, wrote docker version and got the message: \u0026ldquo;unable to resolve docker endpoint: open C:\\Users\\\\.docker\\machine\\machines\\default\\ca.pem: The system cannot find the path specified\u0026rdquo;\nThe problem might occur because I was testing a Docker Toolbox a long time ago, and the uninstaller might not delete all the settings. The solution: change the DOCKER_CERT_PATH environment variable. To list all docker related variables I run this in PowerShell:\nGet-ChildItem env: | Where-Object {$_.name -like \u0026#39;DOCKER\\*\u0026#39;} Name Value ---- ----- DOCKER_CERT_PATH C:\\\\Users\\\\BartekR\\\\.docker\\\\machine\\\\machines\\\\default DOCKER_HOST tcp://192.168.99.100:2376 DOCKER_MACHINE_NAME default DOCKER_TLS_VERIFY 1 The correct value is (...)\\.docker\\machine\\certs, so I changed to C:\\Users\\BartekR\\.docker\\machine\\certs.\nExcellent. Again: docker version: \u0026ldquo;error during connect: Post http://192.168.99.100:2376/v1.40/containers/create: dial tcp [::1]:2376: connectex: No connection could be made because the target machine actively refused it\u0026rdquo;\nAgain - most probably - Docker Toolbox (see: https://docs.docker.com/toolbox/faqs/troubleshoot/) - it uses the (...).100:2376 theme for the docker daemon. So I changed the DOCKER_HOST address to tcp://localhost:2375 and now it should work.\nShould it?\ndocker version: \u0026ldquo;error during connect: Get https://localhost:2375/v1.40/containers/json: http: server gave HTTP response to HTTPS client\u0026rdquo;\nWhat now? On Windows, you have to check the checkbox \u0026ldquo;Expose daemon on \u0026lt;DOCKER_HOST\u0026gt; without TLS\u0026rdquo;\nYeah, that \u0026ldquo;vulnerable to remote code execution attacks\u0026rdquo; does not encourage to check it. After making the change you have to restart Docker Desktop.\nNow, is it all?\nNo. One last thing *). If the switching does not help (and it did not in my case) the problem is in the DOCKER_TLS_VERIFY environment variable. I had it set to 1 (meaning: enabled), and I had to remove it.\nThen - finally - it started working.\nMy final DOCKER_* environment variables setup:\nName Value ---- ----- DOCKER_CERT_PATH C:\\\\Users\\\\BartekR\\\\.docker\\\\machine\\\\certs DOCKER_HOST tcp://localhost:2375 DOCKER_MACHINE_NAME default Now I can also work from VSCode + docker extension.\n*) I\u0026rsquo;ve read, that sometimes changing the DOCKER_HOST to tcp://localhost:2375 did not help, but setting it to tcp://127.0.0.1:2375 did.\n",
"ref": "/2020/09/01/have-problems-with-docker-desktop-for-windows-home-yeah-me-too/"
},{
"title": "Adding a new task in TFS (Azure DevOps) using Excel",
"date": "",
"description": "",
"body": "In the previous post, I added the tasks to on-premises TFS using C#. This time I will add similar data using an Excel add-in. I will also learn how to accidentally remove the link from the task to the parent element.\nFirst things first - if you do not have Azure DevOps Office® Integration 2019 installed (you need it to work with TFS / Azure DevOps from Excel), then go to https://visualstudio.microsoft.com/downloads/ and pick it from the Other Tools and Frameworks section at the bottom of the page. Install, and you should then see the Team plugin in the Excel menu.\nThis time I will use my Azure DevOps collection bartekr and the AzureDevOps_APITests project. It uses a Basic process, but I also tested the process on an Agile workflow. To add the elements click New List and connect to the project. If it\u0026rsquo;s the first time you connect to Azure DevOps or TFS you will be prompted to set up the connection to the collection. After you connect pick the Input list from the options\nNow you should see an empty list with information about the connection. The list is ready and you could start filling the columns Title, Work Item Type, State, Reason, Assigned To (ID is read-only).\nBut - we want to add not only the tasks but also the connection to a parent element. In this case, tasks should be connected to the Issue (in a Basic process) or the User Story (an Agile process). To do it we have to work with the tree, not a flat list (notice: List type: Flat on the right side of the yellow header). Click on the list and you should see an enabled Add Tree Level button. Click it.\nIn the Convert to Tree List select Parent-Child (the default option)\n![Convert to tree](images/ExcelConvertToTreeList.png#center\nNow you should see the columns - Title 1 and Title 2 and the List type: tree. The first title is for the parent item, the second for the child.\nNow - I want to add the new tasks for the Issue 2\nIn the Team toolbar click Get Work Items, find the Issue 2 in the opened window, select it and click OK.\nThe spreadsheet looks like below:\nNothing unusual. Now - in the Title 2 column add new tasks with Work Item Type == Task and push the button Publish. And that\u0026rsquo;s it!\nIn Azure DevOps:\nIf I wanted to fill more columns (like Assigned To, Sprint, Estimated time and so on) - I click Choose columns button and add them to the list.\nOne more small thing for the end. Let\u0026rsquo;s say you have a lot of Tasks with one parent Issue / User Story. Do not delete them from the list \u0026ldquo;because you want to have a clean sheet with just the Issue / User Story\u0026rdquo;. Deleting the tasks from the list does not delete the tasks (of course), but it removes the Parent-Child hierarchy for the deleted elements.\nIf you repeat the steps to get the tasks for Issue 2 and delete them, you will see they lost the connection:\nIn short: adding work items to TFS / Azure DevOps using Excel is easy. The trick is to keep breathing to use the Add Tree Level button.\n",
"ref": "/2020/05/19/adding-a-new-task-in-tfs-azure-devops-using-excel/"
},{
"title": "Adding a new Task in TFS using C#",
"date": "",
"description": "",
"body": "An assignment: using data from the Excel file (sample data below) insert them into TFS (on-premises). Automatically. Start. You have three months from now. Or a few hours.\nThe original spreadsheet contains 16 records - the header and 15 tasks with 11 columns (skipped here, 8 is enough).\nWork item type Title Activity Area Path Assigned To Description Original Estimate Completed Work Task Title1 Requirements Proj1\\Team1 P Name1 Description 1 10 10 Task Title2 Maintenance Proj1\\Team1 P Name2 Description 2 5 5 Additionally, the tasks should be assigned to the designated User Story: http://tfsserveraddress/tfs/CollectionName/ProjectName/_workitems/edit/12345 and the Sprint \u0026quot;\\Current\\Sprint 3\u0026quot;.\nIn my project, we work with TFS Server 2015. To talk with it using C# I use the Microsoft.TeamFoundationServer.ExtendedClient package (stable version 16.153.0 at the time of writing). Not everything is possible to use for TFS 2015 (e.g. dashboard editing, as it was introduced for the later TFS API versions), but interacting with the WorkItems works.\nFor the prototype, I will convert the data from the spreadsheet to the JSON format, and read it as a text in the application. Why JSON? Because I can! Also, because I\u0026rsquo;m lazy and I will use JsonConvert.Deserialize() to prepare the DataSet. I like working with DataSets. I will write a separate class with a method that adds all WorkItems from the file\nSo, to begin with - File \u0026gt; New \u0026gt; Project / Visual C# / Console App (.Net Framework). I use .NET 4.6.1. Then in the Solution Explorer: right-click on the project, select Add \u0026gt; Class \u0026hellip; and create a new file Workitems.cs. OK, I have two files in the project (Program.cs and Workitems.cs), both contain only the template stuff. First - in the Main() method, I will set the TFS server address - where I want to insert the data, and the location of the source file.\nTo connect to the TFS server, I use the TfsTeamProjectCollectionFactory.GetTeamProjectCollection() method, where I provide only the Uri - address of the server, but with the collection name. The location of the file is a full path. So, the program itself is easy - the Main() method looks like below:\nusing Microsoft.TeamFoundation.Client; using System; (...) static void Main(string[] args) { // TFS Server address, with the collection name Uri tfsServerUri = new Uri(\u0026#34;http://tfsserveraddress/tfs/CollectionName\u0026#34;); // connect to TFS TfsTeamProjectCollection tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(tfsServerUri); //AddWorkItems(tfs); string filePath = @\u0026#34;C:\\tmp\\DataImport.json\u0026#34;; Workitems mwi = new Workitems(); mwi.AddFromFile(tfs, filePath); } I use no authentication methods, as TFS is on-premises and uses domain authentication. When I run the program, my credentials are magically passed to the TFS server during execution of the GetTeamProjectCollection() method.\nThe skeleton of the application is ready. Now, to the\nTFS WorkItem interaction\rTFS/Azure DevOps API is built around the REST API concept and uses clients to work with the different aspects of the ecosystem. A different client for the Builds, a different for the Releases, a different one for the WorkItems, and so on. For the WorkItems I will use WorkItemTrackingHttpClient (tfs is a previously acquired connection to the collection):\nWorkItemTrackingHttpClient witClient = tfs.GetClient\u0026lt;WorkItemTrackingHttpClient\u0026gt;(); To create or update a WorkItem I have to use the JsonPatchDocument(), which contains one or more JsonPatchOperation() elements. Each of JsonPatchOperations must contain at least three of four elements: Operation, Value, and Path. The fourth - optional - is From, but it\u0026rsquo;s used for the move/copy operations.\nThere\u0026rsquo;s a catch - in lots of the examples and tutorials I\u0026rsquo;ve seen they show one JsonPatchDocument containing one JsonPatchOperation - like:\nJsonPatchDocument patchDocument = new JsonPatchDocument(); patchDocument.Add( new JsonPatchOperation() { Operation = Operation.Add, Path = \u0026#34;/fields/System.Title\u0026#34;, Value = title } ); The WorkItem created using code like the above will contain only the title. If I want to add more information about the task, like iteration, who is assigned to the task, the number of hours planned and completed, etc. - I have to add the separate JsonPatchOperation for each one of them - like:\nJsonPatchDocument patchDocument = new JsonPatchDocument(); patchDocument.Add( new JsonPatchOperation() { Operation = Operation.Add, Path = \u0026#34;/fields/System.Title\u0026#34;, Value = \u0026#34;Title1\u0026#34; } ); patchDocument.Add( new JsonPatchOperation() { Operation = Operation.Add, Path = \u0026#34;/fields/System.Description\u0026#34;, Value = \u0026#34;Description 1\u0026#34; } ); or in one go:\nJsonPatchDocument patchDocument = new JsonPatchDocument( new JsonPatchOperation() { Operation = Operation.Add, Path = \u0026#34;/fields/System.Title\u0026#34;, Value = \u0026#34;Title1\u0026#34; }, new JsonPatchOperation() { Operation = Operation.Add, Path = \u0026#34;/fields/System.Description\u0026#34;, Value = \u0026#34;Description 1\u0026#34; } ); Great, I have a pattern I can use. Just one small thing - how can I find the field names? The two above - System.Description and System.Title are taken from the examples, but how can I find the name for CompletedWork? I can use several sources:\nTFS and Azure DevOps Server (only on-premises version) have an additional Warehouse mechanism - a database with dimensions and facts that has the data from all the collections on the server. The dbo.DimWorkitem table holds the columns with the names that correspond to the paths. The downside - not all the fields have to be in the warehouse. In the collection\u0026rsquo;s database, there is a table dbo.tbl_Fields with all the fields for the collection\rI can use Azure DevOps and an API link, like https://azuredevopsserver/Collection/_apis/wit/fields or https://azuredevopsserver/Collection/Project/_apis/wit/fields, take a look at all the entries and use the referenceName value; the link does not work in TFS, I had to use Azure DevOps / Azure DevOps Server\rIn the documentation, there\u0026rsquo;s an example that shows the return value of the created WorkItem, and there\u0026rsquo;s a whole list of field names. Having the names for the field path I can either modify the Excel file, so the header will contain the proper field names, or I can do some sort of mapping in the code (like Title -\u0026gt; System.Title). As it\u0026rsquo;s the prototype, I will use the former and change the file directly. Then I will save it as the CSV file - it\u0026rsquo;s still a prototype, so I will prepare the input data as I see them fit.\nThe Data0.csv file contains the source saved as the CSV file. After tweaking (column names as the field names) the source is saved as the Data1.csv. In Poland, CSV files have not a comma, but a semicolon as the separator. I use PowerShell to convert it to JSON:\nImport-Csv -Path C:\\\\tmp\\\\Data1.csv -Delimiter \u0026#34;;\u0026#34; | ConvertTo-Json | Set-Content C:\\\\tmp\\\\Data1.json The output looks like below (notice double backslash instead of one, as in the source):\n[ { \u0026#34;System.Title\u0026#34;: \u0026#34;Title1\u0026#34;, \u0026#34;System.Activity\u0026#34;: \u0026#34;Requirements\u0026#34;, \u0026#34;System.AreaPath\u0026#34;: \u0026#34;Proj1\\\\Team1\u0026#34;, \u0026#34;System.AssignedTo\u0026#34;: \u0026#34;P Name1\u0026#34;, \u0026#34;System.Description\u0026#34;: \u0026#34;Description 1\u0026#34;, \u0026#34;Microsoft.VSTS.Scheduling.OriginalEstimate\u0026#34;: \u0026#34;10\u0026#34;, \u0026#34;Microsoft.VSTS.Scheduling.CompletedWork\u0026#34;: \u0026#34;10\u0026#34;, \u0026#34;IterationPath\u0026#34;: \u0026#34;\\\\Current\\\\Sprint 3\u0026#34; }, { \u0026#34;System.Title\u0026#34;: \u0026#34;Title2\u0026#34;, \u0026#34;System.Activity\u0026#34;: \u0026#34;Maintenance\u0026#34;, \u0026#34;System.AreaPath\u0026#34;: \u0026#34;Projt1\\\\Team1\u0026#34;, \u0026#34;System.AssignedTo\u0026#34;: \u0026#34;P Name2\u0026#34;, \u0026#34;System.Description\u0026#34;: \u0026#34;Description 2\u0026#34;, \u0026#34;Microsoft.VSTS.Scheduling.OriginalEstimate\u0026#34;: \u0026#34;5\u0026#34;, \u0026#34;Microsoft.VSTS.Scheduling.CompletedWork\u0026#34;: \u0026#34;5\u0026#34;, \u0026#34;IterationPath\u0026#34;: \u0026#34;\\\\Current\\\\Sprint 3\u0026#34; } ] The input is ready, now to\nThe WorkItems creation\rI will write a method in the Workitems class:\npublic void AddFromFile(TfsTeamProjectCollection tfs, string filePath) It will use the WorkItemTrackingHttpClient mentioned above. I will also set some parameters as the variables\nWorkItemTrackingHttpClient witClient = tfs.GetClient\u0026lt;WorkItemTrackingHttpClient\u0026gt;(); string witType = \u0026#34;Task\u0026#34;; string projectName = \u0026#34;Project1\u0026#34;; I also wrote I will use JsonConvert.DeserializeObject() to convert JSON data to the DataSet, so I read the Data1.json. Also - I have to give a name to the dataset, so I concatenate it with \u0026lsquo;Tasks\u0026rsquo; and prepare one JSON element with an array of the entries:\nstring jsonTasksSrc = File.ReadAllText(filePath); string jsonTasks = \u0026#34;{\u0026#39;Tasks\u0026#39; : \u0026#34; + jsonTasksSrc + \u0026#34;}\u0026#34;; DataSet x = JsonConvert.DeserializeObject(jsonTasks); I have the DataSet, so I can read the DataTable, iterate over it, and prepare JsonPathOperations with the fields - each DataTable column contains names and values.\nforeach (DataRow r in x.Tables[\u0026#34;Tasks\u0026#34;].Rows) { JsonPatchDocument wit = new JsonPatchDocument(); foreach (DataColumn c in r.Table.Columns) { string sPath = \u0026#34;/fields/\u0026#34; + c.ColumnName; string sValue = r[c.ColumnName].ToString(); wit.Add(new JsonPatchOperation() { Path = sPath, Operation = Operation.Add, Value = sValue }); } } The last thing - adding the parent User Story. I found the answer by Marina on StackOverflow - use /relations/- as the path with the relation defined as Hierarchy-Reverse:\nstring relation = @\u0026#34;{ \u0026#39;rel\u0026#39; : \u0026#39;System.LinkTypes.Hierarchy-Reverse\u0026#39;, \u0026#39;url\u0026#39; : \u0026#39;http://tfsserveraddress/tfs/CollectionName/ProjectName/_workitems/edit/12345\u0026#39; }\u0026#34;; In this case - as it\u0026rsquo;s a JSON construct - the Value of the JsonPathOperation should be a JToken instead of just a string, so I have to cast it:\nwit.Add(new JsonPatchOperation() { Path = \u0026#34;/relations/-\u0026#34;, Operation = Operation.Add, Value = JToken.Parse(relation) }); Now I can create the WorkItem:\nWorkItem w = witClient.CreateWorkItemAsync(wit, projectName, witType).Result; The last thing - the WorkItem is created as New. I have to set it as closed because I report the finished tasks:\n_ = witClient.UpdateWorkItemAsync(p, w.Id.Value).Result; The underscore means that I\u0026rsquo;m not interested in the result (during the debugging process I checked that it works) so I discard it.\nAnd that\u0026rsquo;s it! It looks like it\u0026rsquo;s working.\nThe next steps - checking how it works with Azure DevOps in Azure and a bit more of parametrisation. Also, I have to verify if I have to pass a full URL for the parent WorkItem (User Story) and if it\u0026rsquo;s possible to insert the task with the Closed status (not possible form the GUI - the only option is New). But that\u0026rsquo;s a subject for a different blog post.\nThe code is available on GitHub.\n",
"ref": "/2020/05/04/adding-a-new-task-in-tfs-using-c/"
},{
"title": "Draw the SSIS Package using SVG - part III",
"date": "",
"description": "",
"body": "In the third part of a series, I focus on drawing the constraints\u0026rsquo; descriptions and the colours. And also a bit of PowerShell for automation.\nAutomating the layout extraction\rPreviously I prepared the file with a layout by hand - I copied the CDATA content of the/DTS:Executable/DTS:DesignTimeProperties element to the XML file and saved it. It\u0026rsquo;s a tedious task, so I wrote a PowerShell script New-Diagram.ps1. It has two paths as the parameters - the package to analyse, and the output file. The content is just three lines of code (it could fit in one, but I split it for readability):\n# Find \u0026lt;DesignTimeProperties\u0026gt;, CDATA section $xpath = \u0026#39;/DTS:Executable/DTS:DesignTimeProperties/text()\u0026#39; # We have a namespace, so add a declaration; just copy the values from the .dtsx file $namespace = @{DTS = \u0026#39;www.microsoft.com/SqlServer/Dts\u0026#39;} # The Command (Select-Xml -Path $packagePath -XPath $xpath -Namespace $namespace | Select-Object -ExpandProperty Node).Value | Out-File $outputPath Select-Xml gets the required information from the .dtsx package using an XPath expression and returns the Node. I take its Value and save to the file.\nAs an addition, I also wrote the diagram2svg.ps1 script to run everything from PowerShell console. The diagram2svg.bat version is still available.\nBack to drawing\rTo get more information, I created an extended version of the package. It has Completion and Failure constraints, as well as OR version. I also added more annotations.\nI know that I have to draw the descriptions if I see an \u0026lt;EdgeLayout.Labels\u0026gt; element within the \u0026lt;EdgeLayout\u0026gt;. It contains an empty tag \u0026lt;mssgm:EdgeLabel\u0026gt; with two attributes: @BoundingBox and @RelativePosition. Only the first is interesting (the second always has a fixed value Any) - it defines the area for the description. It looks like this:\n\u0026lt;EdgeLayout.Labels\u0026gt; \u0026lt;mssgm:EdgeLabel BoundingBox=\u0026#34;-176.548522135417,29.4736842105263,161.7637109375,16\u0026#34; RelativePosition=\u0026#34;Any\u0026#34; /\u0026gt; \u0026lt;/EdgeLayout.Labels\u0026gt; According to the documentation, the @BoundingBox contains \u0026ldquo;value that specifies the coordinates of the four vertices of the bounding box for the edge label\u0026rdquo;. It\u0026rsquo;s not. The numbers are x, y, width and height of the box, so I can draw it in SVG using a \u0026lt;rect\u0026gt; element.\nBut should I draw them? In the beginning - yes. Just like with the rectangle around the annotation area - to get used to the diagram structure and get the idea of what the values mean.\nOK. I know where I have to write the descriptions. Now it\u0026rsquo;s time to get what I have to write. Scrolling to the bottom of the Package.Diagram.xml I see only two pieces of information:\n\u0026lt;PrecedenceConstraint design-time-name=\u0026#34;Package\\\\SEQC MAIN\\\\FLC Tree.PrecedenceConstraints\\[Constraint\\]\u0026#34;\u0026gt; \u0026lt;ShowAnnotation\u0026gt;ConstraintOptions\u0026lt;/ShowAnnotation\u0026gt; \u0026lt;/PrecedenceConstraint\u0026gt; \u0026lt;PrecedenceConstraint design-time-name=\u0026#34;Package\\\\SEQC MAIN\\\\FLC Tree.PrecedenceConstraints\\[Constraint 1\\]\u0026#34;\u0026gt; \u0026lt;ShowAnnotation\u0026gt;ConstraintDescription\u0026lt;/ShowAnnotation\u0026gt; \u0026lt;/PrecedenceConstraint\u0026gt; The \u0026lt;ShowAnnotation\u0026gt; values come from the Properties of the precedence constraints. So, for the Package\\SEQC MAIN\\FLC Tree.PrecedenceConstraints[Constraint] constraint I need to write the ConstraintOptions, which means \u0026ldquo;automatically annotate using the values of the Value and Expression properties\u0026rdquo;.\nReading from the .dtsx file\rTo get the Value and Expression properties I need to find the precedence constraint in the .dtsx file during the XSL transformations. It requires three changes in the package2svg.xsl:\nI have to pass the name of the .dtsx file I have to read the XML from the .dtsx file I have to use the DTS namespace because it\u0026rsquo;s the namespace of the .dtsx file The Saxon XSLT processor has a nice feature - after all the switches for the Transform command I can set the parameters defined at the top level of the XSL file using the key=value pairs. So I define the \u0026lt;xsl:param name=\u0026quot;packagePath\u0026quot; as=\u0026quot;xs:string\u0026quot; required=\u0026quot;yes\u0026quot; /\u0026gt; within the XSL file and extend the command (lines split for readability) to pass the packagePath:\nTransform ` -s:Package.Diagram.xml ` -xsl:package2svg.xsl ` -o:Package.Diagram.svg ` packagePath=DTSX2SVG\\\\Package.dtsx To get the content of the .dtsx file I use the XSLT document() function:\n\u0026lt;xsl:variable name=\u0026#34;packageContent\u0026#34; select=\u0026#34;document($packagePath)\u0026#34; /\u0026gt; The last thing is to add the DTS namespace to the stylesheet declaration:\n\u0026lt;xsl:stylesheet version=\u0026#34;2.0\u0026#34; xmlns:xsl=\u0026#34;http://www.w3.org/1999/XSL/Transform\u0026#34; xmlns:xs=\u0026#34;http://www.w3.org/2001/XMLSchema\u0026#34; xmlns:gl=\u0026#34;clr-namespace:Microsoft.SqlServer.IntegrationServices.Designer.Model.Serialization;assembly=Microsoft.SqlServer.IntegrationServices.Graph\u0026#34; xmlns:mssgle=\u0026#34;clr-namespace:Microsoft.SqlServer.Graph.LayoutEngine;assembly=Microsoft.SqlServer.Graph\u0026#34; xmlns:mssgm=\u0026#34;clr-namespace:Microsoft.SqlServer.Graph.Model;assembly=Microsoft.SqlServer.Graph\u0026#34; xmlns:DTS=\u0026#34;www.microsoft.com/SqlServer/Dts\u0026#34; \u0026gt; To read the values from the .dtsx file I just use the packageContent variable as the starting point and pass the XPath, like $packageContent//DTS:PrecedenceConstraint[@DTS:refId=$localId]/@DTS:Expression.\nThe descriptions for the constraints are explicit (ShowAnnotation) or implicit (the SSIS defaults, like the annotation for Completion or Failure constraints). I define two variables: precedenceConstraintValue and value. The first contains the description based on the layout file information, the second based on the default SSIS behaviour. The algorithm to present the proper information uses the series of xsl:choose/xsl:when commands that read the $packageContent.\nUsing the same technique, I set the colour for the constraint reading the DTS:Value and the type of the constraint (OR/AND). For the latter I analyse the DTS:LogicalAnd attribute - if it exists I add the stroke-dasharray SVG attribute:\n\u0026lt;xsl:if test=\u0026#34;count($packageContent//DTS:PrecedenceConstraint[@DTS:refId=$localId]/@DTS:LogicalAnd) = 0\u0026#34;\u0026gt; \u0026lt;xsl:attributename=\u0026#34;stroke-dasharray\u0026#34;\u0026gt;5\u0026lt;/xsl:attribute\u0026gt; \u0026lt;/xsl:if\u0026gt; And that\u0026rsquo;s it! Now I have the package with the constraint descriptions, colours and shapes. There are still some uncovered parts, but I will deal with them later.\nUp to now, I drew the Control Flow. The next part will focus on the Data Flow and Event Handlers.\n",
"ref": "/2019/08/07/draw-the-ssis-package-using-svg-part-iii/"
},{
"title": "Draw the SSIS Package using SVG - part II",
"date": "",
"description": "",
"body": "The post is a second part of the series. In the previous one, I created an SVG image of the simple SSIS package, but when I tried to draw something advanced (upper image below) I got something far from expected (lower image below). This time I\u0026rsquo;ll fix it.\nThe sequence problem\rThe main issue is with the alignment of the elements in the Sequence object (no matter if it\u0026rsquo;s the default one, the ForEachLoop or the ForLoop). Let\u0026rsquo;s take a look at the Sequences.xml file. Here are the first three \u0026lt;ContainerLayout\u0026gt; elements and the layout of the package:\n\u0026lt;ContainerLayout HeaderHeight=\u0026#34;43\u0026#34; IsExpanded=\u0026#34;True\u0026#34; PanelSize=\u0026#34;205,55\u0026#34; Size=\u0026#34;205,98\u0026#34; Id=\u0026#34;Package\\SEQC MAIN\\SEQC 01\\SEQC 011\\SEQC 0111\u0026#34; TopLeft=\u0026#34;5.5,5.5\u0026#34;/\u0026gt; \u0026lt;ContainerLayout HeaderHeight=\u0026#34;43\u0026#34; IsExpanded=\u0026#34;True\u0026#34; PanelSize=\u0026#34;216,158\u0026#34; Size=\u0026#34;216,202\u0026#34; Id=\u0026#34;Package\\SEQC MAIN\\SEQC 01\\SEQC 011\u0026#34; TopLeft=\u0026#34;5.50000000000003,5.5\u0026#34;/\u0026gt; \u0026lt;ContainerLayout HeaderHeight=\u0026#34;43\u0026#34; IsExpanded=\u0026#34;True\u0026#34; PanelSize=\u0026#34;227,262\u0026#34; Size=\u0026#34;227,306\u0026#34; Id=\u0026#34;Package\\SEQC MAIN\\SEQC 01\u0026#34; TopLeft=\u0026#34;5.50000000000006,5.49999999999989\u0026#34;/\u0026gt; All of them have (almost) the same values of the TopLeft attribute - 5.5,5.5. They cannot be drawn in the same place, so that means that each sequence object - or better, each container - starts its own coordinates system (with the Package being the outermost container). I need to know which elements belong to which container. Challenge accepted.\nThe analysis\rTo make things harder, the layout of the sequences and tasks is not some nested XML structure. All of the elements have the same parent - \u0026lt;GraphLayout\u0026gt;, meaning all of them are at the same tree level. Also - there is no attribute showing where a particular object belongs. Almost. In the example with the sequences, I see two regularities:\nthe outer container is placed later in the XML, than the inner container the @Id attributes show the nesting of the objects Getting further with the Sequences.xml example I can see, that the first three elements have the identifiers:\nId=\u0026quot;Package\\SEQC MAIN\\SEQC 01\\SEQC 011\\SEQC 0111\u0026quot; Id=\u0026quot;Package\\SEQC MAIN\\SEQC 01\\SEQC 011\u0026quot; Id=\u0026quot;Package\\SEQC MAIN\\SEQC 01\u0026quot; At the end of the file, there\u0026rsquo;s also anId=\u0026quot;Package\\SEQC MAIN\u0026quot; element, that wraps everything. These four elements create a hierarchy of the containers I want to draw. First - the SEQC MAIN, then SEQC 01, SEQC 011, and SEQC 0111 (I skip SEQC 02 for a moment).\nSEQC MAIN has the @TopLeft coordinates set to 5.5, 5.5, which means a bit more than 5 pixels from the upper left corner of the package canvas. Then SEQC 01 also has @TopLeft=\u0026quot;5.5,5.5\u0026quot; coordinates but within the SEQC MAIN container. It means that to draw the SEQC 01 I have to move to the X coordinate 11 (5.5 from SEQC MAIN + 5.5 from SEQC 01), and the Y coordinate to - wait, not 11. The Y coordinate must also include the calculation of the @HeaderHeight attribute, so I move to 54 (5.5 from @SEQC MAIN + 5.5 from @SEQC 01 + 43 from SEQC MAIN/@HeaderHeight). So the calculated coordinates are (11, 54).\nFollowing the pattern:\nSEQC 011 has its start at 16.5,102.5 from the upper left corner of the package canvas: X = 5.5 (SEQC MAIN/@TopLeft) + 5.5 (@SEQC 01/@TopLeft) + 5.5 (@SEQC 011/@TopLeft) = 16.5, Y = 5.5 (SEQC MAIN/@TopLeft) + 5.5 (@SEQC 01/@TopLeft) + 5.5 (SEQC 011/@TopLeft) + 43 (SEQC MAIN/@HeaderHeight) + 43 (SEQC 01/@HeaderHeight) = 102.5 SEQC 0111 has its start at 22,151 X = 5.5 (SEQC MAIN/@TopLeft) + 5.5 (SEQC 01/@TopLeft) + 5.5 (SEQC 011/@TopLeft) + 5.5 (SEQC 0111/@TopLeft) = 22, Y = 5.5 (SEQC MAIN/@TopLeft) + 5.5 (SEQC 01/@TopLeft) + 5.5 (SEQC 011/@TopLeft) + 5.5 (SEQC 0111/@TopLeft) + 43 (SEQC MAIN/@HeaderHeight) + 43 (SEQC 01/@HeaderHeight) + 43 (SEQC 011/@HeaderHeight) = 151 Great. I know how it should look like using the paper, the pencil, and the head. Now I have to explain all of it to the computer.\nFind all the outer nodes\rThe biggest challenge for me was to find all the ancestor nodes, based on the @Id attribute of the current node. For example - when I work with the Id=\u0026quot;Package\\SEQC MAIN\\SEQC 01\\SEQC 011\\SEQC 0111\u0026quot;, I need to find the nodes with:\nId=\u0026quot;Package\\SEQC MAIN\\SEQC 01\\SEQC 011\u0026quot; Id=\u0026quot;Package\\SEQC MAIN\\SEQC 01\u0026quot; Id=\u0026quot;Package\\SEQC MAIN\u0026quot; Then I take the @TopLeft and @HeaderHeight attributes from all of them, to calculate the offset of the upper-left corner for the element I\u0026rsquo;m currently working with. The trick is to keep breathing to work with sequences. However, first - I have to add processing of the sequences to the XSLT file:\n\u0026lt;xsl:template match=\u0026#34;gl:GraphLayout\u0026#34;\u0026gt; \u0026lt;xsl:apply-templates select=\u0026#34;gl:ContainerLayout\u0026#34; /\u0026gt; \u0026lt;xsl:apply-templates select=\u0026#34;gl:NodeLayout\u0026#34; /\u0026gt; \u0026lt;xsl:apply-templates select=\u0026#34;gl:EdgeLayout\u0026#34; /\u0026gt; \u0026lt;xsl:apply-templates select=\u0026#34;gl:AnnotationLayout\u0026#34; /\u0026gt; \u0026lt;/xsl:template\u0026gt; \u0026lt;xsl:template match=\u0026#34;gl:ContainerLayout\u0026#34;\u0026gt; \u0026lt;/xsl:template match=\u0026#34;gl:ContainerLayout\u0026#34;\u0026gt; Then, inside of the gl:ContainerLayout template I calculate the tokens for the @Id of the container with the tokenize() function:\n\u0026lt;xsl:variable name=\u0026#34;IdTokens\u0026#34; select=\u0026#34;tokenize(@Id, \u0026#39;\\\\\u0026#39;)\u0026#34; /\u0026gt; The variable IdTokens has the sequence of the elements separated by the backslash, like for Id=\u0026quot;Package\\SEQC MAIN\\SEQC 01\\SEQC 011 I have the tokens: Package, SEQC MAIN, SEQC 01, SEQC 011 (in this order). You can think of it as some kind of array (but it\u0026rsquo;s not the array, it\u0026rsquo;s the sequence). Now I calculate the \u0026ldquo;subpaths\u0026rdquo; for Package\\SEQC MAIN\\SEQC 01\\SEQC 011\\SEQC 0111 and I will use the IdTokens variable. Again - I need to find the paths:\nPackage\\SEQC MAIN\\SEQC 01\\SEQC 011 Package\\SEQC MAIN\\SEQC 01 Package\\SEQC MAIN I don\u0026rsquo;t need the Package path, as it does not exist. So, I need to concatenate the elements from the sequence for a few times. To be precise: that many times as I have the number of the elements in the sequence. Minus one. For this, I use XPath\u0026rsquo;s for:\n\u0026lt;xsl:variable name=\u0026#34;IdPaths\u0026#34; select=\u0026#34;for $x in (2 to count($IdTokens)) return string-join(subsequence($IdTokens, 1, $x), \u0026#39;\\\\\u0026#39;)\u0026#34; /\u0026gt; It\u0026rsquo;s a bit hard to read at first, so I will split it to the bits and pieces using the example of Package\\SEQC MAIN\\SEQC 01\\SEQC 011\\SEQC 0111.\nTo concatenate the elements of the sequence I use the string-join() function. To point out the elements to concatenate I use the subsequence() function. It takes the source sequence (here - the $IdTokens variable) from the starting position to the ending position. To get the subpath Package\\SEQC MAIN\\SEQC 01 I need to get the first three elements - subsequence($IdTokens, 1, 3) - and concatenate them with a backslash - string-join(subsequence($IdTokens, 1, 3), '\\'). To get the subpath Package\\SEQC MAIN I need to get the first two elements - subsequence($IdTokens, 1, 2) - and concatenate them with a backslash - string-join(subsequence($IdTokens, 1, 2), '\\'). And so on, and so forth.\nBecause I want to do it in the loop, I use the for() function. But there\u0026rsquo;s a thing - for() in XPath works like foreach() in other languages, and if I used for $x in $IdTokens - each time $x would contain the token. But I don\u0026rsquo;t want the token - I want the position of the token. So I use the trick I found on the blog by Miguel de Melo: count the number of the elements (count($IdTokens)) and use the to operator, to generate the sequence of the numbers. The construction 1 to 10 returns 10 consecutive numbers from 1 to 10, so when I use for $x in (1 to count($IdTokens)) my $x will contain the numbers from 1 to the count of $IdTokens. In the example I use for $x in (2 to count($IdTokens)) - it\u0026rsquo;s because I don\u0026rsquo;t need the path with only the Package element. Now I have what I want: the $IdPaths variable will contain the sequence of the subpaths. It\u0026rsquo;s also in the comments of the SVG file - the upper line is the @Id attribute of the processed container, and the lower line contains all the @Id attributes of the containers I search for, separated by colons.\n\u0026lt;!--Package\\SEQC MAIN\\SEQC 01\\SEQC 011\\SEQC 0111--\u0026gt; \u0026lt;!--Package\\SEQC MAIN:Package\\SEQC MAIN\\SEQC 01:Package\\SEQC MAIN\\SEQC 01\\SEQC 011:Package\\SEQC MAIN\\SEQC 01\\SEQC 011\\SEQC 0111--\u0026gt; OK. I have the paths of the containers (nodes) that are around the container I want to draw. The paths are the values of the @Id attribute of these nodes. I find them with an XPath expression and - again - use the sequence (in a variable), to store the nodes. There is the next trick: I use the sequence of paths (stored in the $IdPaths variable) to get only the nodes I want. And because the outer elements are later in the layout, I can specify following-sibling as the axis. I also set the variable as the nodes collection, hence as=\u0026quot;nodes()\u0026quot;.\n\u0026lt;xsl:variable name=\u0026#34;paths\u0026#34; as=\u0026#34;node()\\*\u0026#34;\u0026gt; \u0026lt;xsl:sequence select=\u0026#34;following-sibling::gl:ContainerLayout[@Id=$IdPaths]\u0026#34; /\u0026gt; \u0026lt;/xsl:variable\u0026gt; Getting the offsets\rAlmost ready. I have the nodes, so I can start the calculations. I use four variables to determine the correct coordinates. x0 and y0 are the values from the @TopLeft attribute of currently processed \u0026lt;ContainerLayout\u0026gt; node. x and y variables are the calculated values for the upper-left corner of the container.\n\u0026lt;xsl:variable name=\u0026#34;x0\u0026#34; select=\u0026#34;number(substring-before(@TopLeft, \u0026#39;,\u0026#39;))\u0026#34; /\u0026gt; \u0026lt;xsl:variable name=\u0026#34;y0\u0026#34; select=\u0026#34;number(substring-after(@TopLeft, \u0026#39;,\u0026#39;))\u0026#34; /\u0026gt; \u0026lt;xsl:variable name=\u0026#34;x\u0026#34; select=\u0026#34;sum( for $p in $paths return number(substring-before($p/@TopLeft, \u0026#39;,\u0026#39;))) + $x0\u0026#34; /\u0026gt; \u0026lt;xsl:variable name=\u0026#34;y\u0026#34; select=\u0026#34;sum( for $p in $paths return number(substring-after($p/@TopLeft, \u0026#39;,\u0026#39;)) + number($p/@HeaderHeight)) + $y0\u0026#34; /\u0026gt; To get x I analyse each node I found with the algorithm described earlier, get the @TopLeft attribute to calculate the x position (just like in x0, but for outer containers). And I make a sum of all the values. The same for y, but I remember that I have to add @HeaderHeight for each outer container. The final values are completed with x0 and y0 respectively (because for() operates on the outer containers).\nAnd that\u0026rsquo;s it! I get the same results as I calculated on the paper! Woah!\nWell, not yet.\n\u0026lt;ContainerLayout\u0026gt; is only the beginning\rI have to use the same calculation for each element inside the container. For now, I only know the values for the \u0026lt;ContainerLayout\u0026gt; node, and the container can have the elements inside. So I repeat the code to calculate x and y for each subsequent element - \u0026lt;NodeLayout\u0026gt;, \u0026lt;EdgeLayout\u0026gt;, \u0026lt;AnnotationLayout\u0026gt;.\nDrawing the objects\rNow I use the x and y as the values for the \u0026lt;rect\u0026gt; and \u0026lt;text\u0026gt; objects. I put them inside the number() function, because sometimes I use it in the equations and I wanted to be consistent when working with x and y.\n\u0026lt;xsl:attribute name=\u0026#34;x\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;number($x)\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;y\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;number($y)\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; To draw the header of the container I also use the \u0026lt;rect\u0026gt;, but with the height of the @HeaderHeight:\n\u0026lt;rect\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;x\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;number($x)\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;y\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;number($y)\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;rx\u0026#34;\u0026gt;3\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;ry\u0026#34;\u0026gt;3\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;width\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;substring-before(@Size, \u0026#39;,\u0026#39;)\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;height\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;@HeaderHeight\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;fill\u0026#34;\u0026gt;white\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;stroke\u0026#34;\u0026gt;green\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;stroke-width\u0026#34;\u0026gt;1\u0026lt;/xsl:attribute\u0026gt; \u0026lt;/rect\u0026gt; The last part is to draw the fancy element connectors, instead of just straight lines. For that, I leave the Sequences.xml and go back to the original Sample2.xml.\n\u0026lt;xsl:template match=\u0026ldquo;gl:EdgeLayout\u0026rdquo;\u0026gt;\rIt looks complicated, but it isn\u0026rsquo;t. First - the @Id attribute of the \u0026lt;EdgeLayout\u0026gt; includes the PrecedenceConstraint name, like Package\\SEQC MAIN\\FLC Tree.PrecedenceConstraints[Constraint 1]. I get rid of it (because it hinders finding the outer containers), and then tokenize() the output:\n\u0026lt;xsl:variable name=\u0026#34;ParsedId\u0026#34; select=\u0026#34;substring-before(@Id, \u0026#39;.PrecedenceConstraints\u0026#39;)\u0026#34; /\u0026gt; \u0026lt;xsl:variable name=\u0026#34;IdTokens\u0026#34; select=\u0026#34;tokenize($ParsedId, \u0026#39;\\\\\u0026#39;)\u0026#34; /\u0026gt; The edges can be stored in two ways - with \u0026lt;CubicBezierSegment\u0026gt;s, or without - we have either one \u0026lt;LineSegment\u0026gt; or three \u0026lt;LineSegment\u0026gt;s interlaced with two \u0026lt;CubicBezierSegment\u0026gt;s.\n\u0026lt;EdgeLayout.Curve\u0026gt; \u0026lt;mssgle:Curve\u0026gt; \u0026lt;mssgle:Curve.Segments\u0026gt; \u0026lt;mssgle:SegmentCollection\u0026gt; \u0026lt;mssgle:LineSegment /\u0026gt; \u0026lt;/mssgle:SegmentCollection\u0026gt; \u0026lt;/mssgle:Curve.Segments\u0026gt; \u0026lt;/mssgle:Curve\u0026gt; \u0026lt;/EdgeLayout.Curve\u0026gt; \u0026lt;EdgeLayout.Curve\u0026gt; \u0026lt;mssgle:Curve\u0026gt; \u0026lt;mssgle:Curve.Segments\u0026gt; \u0026lt;mssgle:SegmentCollection\u0026gt; \u0026lt;mssgle:LineSegment /\u0026gt; \u0026lt;mssgle:CubicBezierSegment /\u0026gt; \u0026lt;mssgle:LineSegment /\u0026gt; \u0026lt;mssgle:CubicBezierSegment /\u0026gt; \u0026lt;mssgle:LineSegment /\u0026gt; \u0026lt;/mssgle:SegmentCollection\u0026gt; \u0026lt;/mssgle:Curve.Segments\u0026gt; \u0026lt;/mssgle:Curve\u0026gt; \u0026lt;/EdgeLayout.Curve\u0026gt; The first option is something I used in the previous post. The only change is to use calculated x and y values. The second requires a bit more work because subsequent elements are connected. I use \u0026lt;xsl:choose\u0026gt; and count the occurrences of \u0026lt;CubicBezierSegment\u0026gt;s to decide what kind of the connector I have to draw:\n\u0026lt;xsl:choose\u0026gt; \u0026lt;xsl:when test=\u0026#34;count(gl:EdgeLayout.Curve/mssgle:Curve/mssgle:Curve.Segments/mssgle:SegmentCollection/mssgle:CubicBezierSegment) gt 0\u0026#34;\u0026gt; \u0026lt;!-- arc option --\u0026gt; \u0026lt;/xsl:when\u0026gt; \u0026lt;xsl:otherwise\u0026gt; \u0026lt;!-- straight line option --\u0026gt; \u0026lt;/xsl:otherwise\u0026gt; \u0026lt;/xsl:choose\u0026gt; The arc option consists of five elements drawn one after another. First line is drawn based on the gl:EdgeLayout.Curve/mssgle:Curve/mssgle:Curve.Segments/mssgle:SegmentCollection/mssgle:LineSegment[1]. Then the arc based on gl:EdgeLayout.Curve/mssgle:Curve/mssgle:Curve.Segments/mssgle:SegmentCollection/mssgle:CubicBezierSegment[1] and so on ([2] and [3]). To better understand the calculations take a look at the example below.\nThe important part is: to calculate the coordinates the values are relative to the @TopLeft attribute of the \u0026lt;EdgeLayout\u0026gt;. The second thing - \u0026lt;Line\u0026gt;s have only @End, so to calculate the beginning I have to use the end of the previous element. The third thing - the layout uses \u0026lt;CubicBezierSegment\u0026gt;, but to draw it in SVG I use a quadratic Bezier curve. It\u0026rsquo;s because the segment is built using three points: Point1 is the start, Point2 is the control point of the curve, and Point3 is the end of the curve. In SVG, I would need four points to draw the cubic bezier curve. The example below shows how to draw the first curve. I use the \u0026lt;path\u0026gt; element, where all drawing is set within the d attribute. First, I move (M) the pen to the beginning of the curve, then I draw the Quadratic Bezier curve (using Q, not q as I provide the absolute coordinates).\n\u0026lt;path\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;d\u0026#34;\u0026gt; \u0026lt;xsl:value-of select=\u0026#34;concat(\u0026#39;M\u0026#39;, number($x) + number(substring-before(gl:EdgeLayout.Curve/mssgle:Curve/mssgle:Curve.Segments/mssgle:SegmentCollection/mssgle:CubicBezierSegment[1]/@Point1, \u0026#39;,\u0026#39;)), \u0026#39;,\u0026#39;, number($y) + number(substring-after(gl:EdgeLayout.Curve/mssgle:Curve/mssgle:Curve.Segments/mssgle:SegmentCollection/mssgle:CubicBezierSegment[1]/@Point1, \u0026#39;,\u0026#39;)))\u0026#34;/\u0026gt; \u0026lt;xsl:text\u0026gt; \u0026lt;/xsl:text\u0026gt; \u0026lt;xsl:value-of select=\u0026#34;concat(\u0026#39;Q\u0026#39;, number($x) + number(substring-before(gl:EdgeLayout.Curve/mssgle:Curve/mssgle:Curve.Segments/mssgle:SegmentCollection/mssgle:CubicBezierSegment[1]/@Point2, \u0026#39;,\u0026#39;)), \u0026#39;,\u0026#39;, number($y) + number(substring-after(gl:EdgeLayout.Curve/mssgle:Curve/mssgle:Curve.Segments/mssgle:SegmentCollection/mssgle:CubicBezierSegment[1]/@Point2, \u0026#39;,\u0026#39;)))\u0026#34;/\u0026gt; \u0026lt;xsl:text\u0026gt; \u0026lt;/xsl:text\u0026gt; \u0026lt;xsl:value-of select=\u0026#34;concat(number($x) + number(substring-before(gl:EdgeLayout.Curve/mssgle:Curve/mssgle:Curve.Segments/mssgle:SegmentCollection/mssgle:CubicBezierSegment[1]/@Point3, \u0026#39;,\u0026#39;)), \u0026#39;,\u0026#39;, number($y) + number(substring-after(gl:EdgeLayout.Curve/mssgle:Curve/mssgle:Curve.Segments/mssgle:SegmentCollection/mssgle:CubicBezierSegment[1]/@Point3, \u0026#39;,\u0026#39;)))\u0026#34;/\u0026gt; \u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;style\u0026#34;\u0026gt;stroke:#006600; fill:none\u0026lt;/xsl:attribute\u0026gt; \u0026lt;/path\u0026gt; Repeat the pattern for the remaining elements and voila, I have the fancy, curved connectors.\nIt starts to look like an original package. It still lacks some elements (like path annotations), yet it is almost what I wanted. Further polishing will be a part three of the series.\n",
"ref": "/2019/07/16/draw-the-ssis-package-using-svg-part-ii/"
},{
"title": "Draw the SSIS package using SVG - part I",
"date": "",
"description": "",
"body": "For one of my projects, I need to draw the content of an SSIS package. It should not be a big problem, as the file contains all the required information. If you need to do something similar - I write a series of posts on how to achieve it using SVG, XSLT transformations and a bit of PowerShell (and maybe something more along the way). All the code is available on GitHub.\nThe setup\rI start with the sample package as on the picture. It has three elements aligned vertically, with AND precedence constraints, evaluated as Success. It also has a small annotation on the side.\nThe part responsible for the layout is the tag\u0026lt;DTS:DesignTimeProperties\u0026gt; at the end of the XML, that stores the data as a CDATA section.\nFor a start, it\u0026rsquo;s enough to save its content to a separate XML file named PackageLayout.xml and take only one part of it containing the \u0026lt;Package\u0026gt; element and save as Beginning.xml.\nThe last thing is to have some XSLT processor. I use Saxon for years, and for the demos, I use the Saxon HE 9.9 .NET version. I will use some of the features of XSLT 2.0, so I skip the XSLT processing used in the browsers as they don\u0026rsquo;t support it.\nTo run the XSLT transformation, I use Transform.exe with three parameters:\n-s:\u0026lt;source.xml.file\u0026gt; -xsl:\u0026lt;xsl.transformations.file\u0026gt; -o:\u0026lt;generated.svg.file\u0026gt; To ease the process, I create the diagram2svg.bat file, where I provide the path to the Saxon executables (set SAXONPATH=D:\\tools\\SaxonHE9.9N\\bin) and all the commands to produce SVG files, like:\n%SAXONPATH%\\Transform -s:Sample2.xml -xsl:package2svg.xsl -o:Sample2.svg The beginnings\rWhen I started to prepare the XSLT transformations, I used the Beginning.xml file and prepared the package2svg.xsl file. But not everything wanted to work, so I took a few steps back and started slowly. First - I created a blank SVG file - 00.svg - just to be sure that I remember how to draw a simple image. Then I took the \u0026lt;NodeLayout\u0026gt; element and put it directly inside the \u0026lt;Objects\u0026gt; to test a simple XSLT transformation. I tested it with the 01.xml and 01.xsl files.\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34;?\u0026gt; \u0026lt;!-- First try: pretend, that NodeLayout is the first element under the root --\u0026gt; \u0026lt;Objects Version=\u0026#34;sql11\u0026#34;\u0026gt; \u0026lt;NodeLayout Size=\u0026#34;179,42\u0026#34; Id=\u0026#34;Package\\\\SQL Create table test\u0026#34; TopLeft=\u0026#34;5.50000000000003,5.5\u0026#34; /\u0026gt; \u0026lt;/Objects\u0026gt; XSLT processing is as follows:\nxsl:stylesheet is 1.0 version (for a start it\u0026rsquo;s enough) xsl:output is indented XML xsl:template match=\u0026quot;/Objects\u0026quot; finds the root element (Objects) of the XML file and prepares the root element of the SVG file; I make the SVG 2.0 version, so I use just the minimum declaration inside the root element I xsl:apply-templates for the NodeLayout (it\u0026rsquo;s just below the Objects) to draw the node I use the \u0026lt;rect\u0026gt; tag and pass all its attributes as xsl:attribute, because I want to evaluate the expressions the x and y coordinates are stored inside the TopLeft attribute, and the width and heightare inside the Size; they are separated by a comma, so I use substring-before() and substring-after() XSLT functions to get them; the SSIS package starts its coordinate system just like the SVG in the upper left corner of the screen, so I don\u0026rsquo;t have to do sophisticated calculations I draw the node using light grey colour (fill) with a black border (stroke) of 1px width (stroke-width) and round the corners using 10px measure (rx and ry); a side note - if you don\u0026rsquo;t specify the measure units in SVG it\u0026rsquo;s assumed to be pixels I have a habit that all the SVG elements created with XSLT are grouped using the \u0026lt;g\u0026gt; tag, even if the group contains just one object; it also helps me to deal with the namespaces the essential part of the template is to use the proper namespace, that\u0026rsquo;s why the full element is \u0026lt;g **xmlns=\u0026quot;http://www.w3.org/2000/svg\u0026quot;**\u0026gt; - if I don\u0026rsquo;t use the namespace, the SVG file will have \u0026lt;g xmlns=\u0026quot;\u0026quot;\u0026gt; as the outcome and will not draw the node the \u0026lt;g\u0026gt; tag as a wrapper also has an advantage - I specify the namespace only for the one element (in another case I would have to write it for all of them within the xsl:template) \u0026lt;xsl:stylesheet version=\u0026#34;1.0\u0026#34; xmlns:xsl=\u0026#34;http://www.w3.org/1999/XSL/Transform\u0026#34; \u0026gt; \u0026lt;xsl:output method=\u0026#34;xml\u0026#34; encoding=\u0026#34;UTF-8\u0026#34; indent=\u0026#34;yes\u0026#34; /\u0026gt; \u0026lt;xsl:template match=\u0026#34;/Objects\u0026#34;\u0026gt; \u0026lt;svg xmlns=\u0026#34;http://www.w3.org/2000/svg\u0026#34; viewBox = \u0026#34;0 0 1000 600\u0026#34; \u0026gt; \u0026lt;xsl:apply-templates select=\u0026#34;NodeLayout\u0026#34; /\u0026gt; \u0026lt;/svg\u0026gt; \u0026lt;/xsl:template\u0026gt; \u0026lt;xsl:template match=\u0026#34;NodeLayout\u0026#34;\u0026gt; \u0026lt;g xmlns=\u0026#34;http://www.w3.org/2000/svg\u0026#34;\u0026gt; \u0026lt;rect\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;x\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;substring-before(@TopLeft, \u0026#39;,\u0026#39;)\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;y\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;substring-after(@TopLeft, \u0026#39;,\u0026#39;)\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;rx\u0026#34;\u0026gt;10\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;ry\u0026#34;\u0026gt;10\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;width\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;substring-before(@Size, \u0026#39;,\u0026#39;)\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;height\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;substring-after(@Size, \u0026#39;,\u0026#39;)\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;fill\u0026#34;\u0026gt;lightgray\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;stroke\u0026#34;\u0026gt;black\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;stroke-width\u0026#34;\u0026gt;1\u0026lt;/xsl:attribute\u0026gt; \u0026lt;/rect\u0026gt; \u0026lt;/g\u0026gt; \u0026lt;/xsl:template\u0026gt; \u0026lt;/xsl:stylesheet\u0026gt; The next step\rI can draw the node from the XML document with one level of nesting. But in reality, it has four levels: /Objects/Package/GraphInfo/LayoutInfo/NodeLayout with a \u0026lt;GraphLayout\u0026gt; tag having different namespace(s). Working with namespaces is hard in the beginning, so the next step is to draw the same node, but with four levels of nesting. The example is in the 02.xml and 02.xsl files.\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34;?\u0026gt; \u0026lt;Objects Version=\u0026#34;sql11\u0026#34;\u0026gt; \u0026lt;Package\u0026gt; \u0026lt;LayoutInfo\u0026gt; \u0026lt;GraphLayout\u0026gt; \u0026lt;NodeLayout Size=\u0026#34;179,42\u0026#34; Id=\u0026#34;Package\\\\SQL Create table test\u0026#34; TopLeft=\u0026#34;5.50000000000003,5.5\u0026#34; /\u0026gt; \u0026lt;/GraphLayout\u0026gt; \u0026lt;/LayoutInfo\u0026gt; \u0026lt;/Package\u0026gt; \u0026lt;/Objects\u0026gt; The 02.xsl transformation file has two differences when compared to 01.xsl:\nthe root node now uses xsl:apply-templates to call the Packagetemplate all the descendant nodes call the templates of the child nodes - Package calls LayoutInfo, LayoutInfo calls GraphLayout, and GraphLayout calls NodeLayout the same result I can achieve using xsl:apply-templates=\u0026quot;Package/LayoutInfo/GraphLayout/NodeLayout and xsl:template-match=\u0026quot;Package/LayoutInfo/GraphLayout/NodeLayout\u0026quot; (as in 02a.xsl file), but I may want to decorate the intermediate elements, so I use this construction (...) \u0026lt;xsl:apply-templates select=\u0026#34;Package\u0026#34; /\u0026gt; (...) \u0026lt;xsl:template match=\u0026#34;Package\u0026#34;\u0026gt; \u0026lt;xsl:apply-templates select=\u0026#34;LayoutInfo\u0026#34; /\u0026gt; \u0026lt;/xsl:template\u0026gt; \u0026lt;xsl:template match=\u0026#34;LayoutInfo\u0026#34;\u0026gt; \u0026lt;xsl:apply-templates select=\u0026#34;GraphLayout\u0026#34; /\u0026gt; \u0026lt;/xsl:template\u0026gt; \u0026lt;xsl:template match=\u0026#34;GraphLayout\u0026#34;\u0026gt; \u0026lt;xsl:apply-templates select=\u0026#34;NodeLayout\u0026#34; /\u0026gt; \u0026lt;/xsl:template\u0026gt; Now to the namespace part. The \u0026lt;GraphLayout\u0026gt;item looks like this (skipping the Capacity attribute):\n\u0026lt;GraphLayout xmlns=\u0026#34;clr-namespace:Microsoft.SqlServer.IntegrationServices.Designer.Model.Serialization;assembly=Microsoft.SqlServer.IntegrationServices.Graph\u0026#34; xmlns:mssgle=\u0026#34;clr-namespace:Microsoft.SqlServer.Graph.LayoutEngine;assembly=Microsoft.SqlServer.Graph\u0026#34; xmlns:assembly=\u0026#34;http://schemas.microsoft.com/winfx/2006/xaml\u0026#34; \u0026gt; To generate the correct SVG, I have to do two things:\nadd the namespace(s) used in \u0026lt;GraphLayout\u0026gt; to the xsl file use the namespace in the templates Take a look at the 03.xsl file:\nthe default namespace of the \u0026lt;GraphLayout\u0026gt; tag has to be added to the xsl:stylesheet declaration and prefixed to use it later; I chose the gl prefix; I can skip the remaining namespaces mssgle and assembly as I don\u0026rsquo;t use them in the 03.xml example the gl name has to be used in the GraphLayout template and all its descendants \u0026lt;xsl:stylesheet version=\u0026#34;1.0\u0026#34; xmlns:xsl=\u0026#34;http://www.w3.org/1999/XSL/Transform\u0026#34; **xmlns:gl=\u0026#34;clr-namespace:Microsoft.SqlServer.IntegrationServices.Designer.Model.Serialization;assembly=Microsoft.SqlServer.IntegrationServices.Graph\u0026#34;** \u0026gt; (...) \u0026lt;xsl:template match=\u0026#34;LayoutInfo\u0026#34;\u0026gt; \u0026lt;xsl:apply-templates select=\u0026#34;**gl:GraphLayout**\u0026#34; /\u0026gt; \u0026lt;/xsl:template\u0026gt; \u0026lt;xsl:template match=\u0026#34;**gl:GraphLayout**\u0026#34;\u0026gt; \u0026lt;xsl:apply-templates select=\u0026#34;**gl:NodeLayout**\u0026#34; /\u0026gt; \u0026lt;/xsl:template\u0026gt; \u0026lt;xsl:template match=\u0026#34;**gl:NodeLayout**\u0026#34;\u0026gt; (...) The drawing\rHaving all the preparation steps completed, I can now draw the Beginning.xml part of the package. I use the package2svg.xsl that contains the previously learnt things and expand it with the new elements:\nthe gl:GraphLayout contains `gl:NodeLayout`, `gl:EdgeLayout` and `gl:AnnotationLayout` I want to display the names of the nodes (they are stored as the last part of the path in the Id attribute of the NodeLayout), so I use the \u0026lt;text\u0026gt; element I position the text inside the node, that\u0026rsquo;s why I add 5 pixels to the x position and calculate about the half of the height of the node for the y position and add about half the height of the font (all based on trial-and-error, I will make it more dynamic later) To use the calculations, I have to cast the values to number()s to get the last element of the path, I use the tokenize() function on the Id attribute providing the backslash (escaped) as the separator and save the result to the xsl:variable nodeNameTokens the nodeNameTokens holds the collection, and I\u0026rsquo;m interested only in the last element of it, so I use $nodeNameTokens[last()] for the description of the node the tokenize() function is available since XSLT 2.0, so I have to change the declaration of the xsl file to xsl:stylesheet version=\u0026quot;2.0\u0026quot; I want to have a light grey background of the image, so I use the trick with \u0026lt;rect width=\u0026quot;100%\u0026quot; height=\u0026quot;100%\u0026quot;\u0026gt; on the whole surface to draw the edges, I use the \u0026lt;line\u0026gt; tag in the gl:EdgeLayout template; it\u0026rsquo;s a simple package with the straight lines, so I don\u0026rsquo;t care about the drawing the curves the (x1, y1) and (x2, y2) coordinates are calculated using the TopLeft attribute of the EdgeLayout and the End attribute of the mssgle:Curve (I also added this namespace to the xsl:stylesheet declaration) the lines of the edges don\u0026rsquo;t connect the elements, so I add the little triangle at the end using the \u0026lt;polygon\u0026gt; the triangle\u0026rsquo;s sides are calculated using the ends of the edges (mssgle:Curve/@End) and the connectors (mssgle:Curve/@EndConnector) the \u0026lt;polygon\u0026gt; element has the points attribute, where I have to provide the coordinates of the points x,y separated by a space (like: \u0026lt;polygon points=\u0026quot;10,10 10,15 15,15\u0026quot; /\u0026gt;), so I use \u0026lt;xsl:text\u0026gt; \u0026lt;/xsl:text\u0026gt; to put a space between the calculated positions the AnnotationElement is drawn similarly to the NodeElement with an additional border around it (to see that the rectangle exists); compared to the original package there is a room for improvement in the text positioning \u0026lt;xsl:stylesheet **version=\u0026#34;2.0\u0026#34;** xmlns:xsl=\u0026#34;http://www.w3.org/1999/XSL/Transform\u0026#34; xmlns:gl=\u0026#34;clr-namespace:Microsoft.SqlServer.IntegrationServices.Designer.Model.Serialization;assembly=Microsoft.SqlServer.IntegrationServices.Graph\u0026#34; xmlns:mssgle=\u0026#34;clr-namespace:Microsoft.SqlServer.Graph.LayoutEngine;assembly=Microsoft.SqlServer.Graph\u0026#34; \u0026gt; \u0026lt;xsl:output method=\u0026#34;xml\u0026#34; encoding=\u0026#34;UTF-8\u0026#34; indent=\u0026#34;yes\u0026#34; /\u0026gt; \u0026lt;xsl:template match=\u0026#34;/Objects\u0026#34;\u0026gt; \u0026lt;svg xmlns=\u0026#34;http://www.w3.org/2000/svg\u0026#34; viewBox = \u0026#34;0 0 600 600\u0026#34; \u0026gt; \u0026lt;rect width=\u0026#34;100%\u0026#34; height=\u0026#34;100%\u0026#34; fill=\u0026#34;lightgray\u0026#34;/\u0026gt; \u0026lt;xsl:apply-templates select=\u0026#34;Package\u0026#34; /\u0026gt; \u0026lt;/svg\u0026gt; \u0026lt;/xsl:template\u0026gt; \u0026lt;xsl:template match=\u0026#34;Package\u0026#34;\u0026gt; \u0026lt;xsl:apply-templates select=\u0026#34;LayoutInfo\u0026#34; /\u0026gt; \u0026lt;/xsl:template\u0026gt; \u0026lt;xsl:template match=\u0026#34;LayoutInfo\u0026#34;\u0026gt; \u0026lt;xsl:apply-templates select=\u0026#34;gl:GraphLayout\u0026#34; /\u0026gt; \u0026lt;/xsl:template\u0026gt; \u0026lt;xsl:template match=\u0026#34;gl:GraphLayout\u0026#34;\u0026gt; \u0026lt;xsl:apply-templates select=\u0026#34;gl:NodeLayout\u0026#34; /\u0026gt; \u0026lt;xsl:apply-templates select=\u0026#34;gl:EdgeLayout\u0026#34; /\u0026gt; \u0026lt;xsl:apply-templates select=\u0026#34;gl:AnnotationLayout\u0026#34; /\u0026gt; \u0026lt;/xsl:template\u0026gt; \u0026lt;xsl:template match=\u0026#34;gl:NodeLayout\u0026#34;\u0026gt; \u0026lt;xsl:variable name=\u0026#34;**nodeNameTokens**\u0026#34; select=\u0026#34;tokenize(@Id, \u0026#39;\\\\\u0026#39;)\u0026#34; /\u0026gt; \u0026lt;g xmlns=\u0026#34;http://www.w3.org/2000/svg\u0026#34;\u0026gt; \u0026lt;rect\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;x\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;substring-before(@TopLeft, \u0026#39;,\u0026#39;)\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;y\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;substring-after(@TopLeft, \u0026#39;,\u0026#39;)\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;rx\u0026#34;\u0026gt;10\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;ry\u0026#34;\u0026gt;10\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;width\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;substring-before(@Size, \u0026#39;,\u0026#39;)\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;height\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;substring-after(@Size, \u0026#39;,\u0026#39;)\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;fill\u0026#34;\u0026gt;white\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;stroke\u0026#34;\u0026gt;black\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;stroke-width\u0026#34;\u0026gt;1\u0026lt;/xsl:attribute\u0026gt; \u0026lt;/rect\u0026gt; \u0026lt;text\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;x\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;number(substring-before(@TopLeft, \u0026#39;,\u0026#39;)) + 5\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;y\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;number(substring-after(@TopLeft, \u0026#39;,\u0026#39;)) + (number(substring-after(@Size, \u0026#39;,\u0026#39;)) div 2) + 7\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;fill\u0026#34;\u0026gt;black\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;font-family\u0026#34;\u0026gt;Verdana\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;font-size\u0026#34;\u0026gt;12\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:value-of select=\u0026#34;$nodeNameTokens\\[last()\\]\u0026#34;/\u0026gt; \u0026lt;/text\u0026gt; \u0026lt;/g\u0026gt; \u0026lt;/xsl:template\u0026gt; \u0026lt;xsl:template match=\u0026#34;gl:EdgeLayout\u0026#34;\u0026gt; \u0026lt;g xmlns=\u0026#34;http://www.w3.org/2000/svg\u0026#34;\u0026gt; \u0026lt;line\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;x1\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;substring-before(@TopLeft, \u0026#39;,\u0026#39;)\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;y1\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;substring-after(@TopLeft, \u0026#39;,\u0026#39;)\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;style\u0026#34;\u0026gt;stroke:#006600\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;x2\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;number(substring-before(@TopLeft, \u0026#39;,\u0026#39;)) + number(substring-before(gl:EdgeLayout.Curve/mssgle:Curve/@End, \u0026#39;,\u0026#39;))\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;y2\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;number(substring-after(@TopLeft, \u0026#39;,\u0026#39;)) + number(substring-after(gl:EdgeLayout.Curve/mssgle:Curve/@End, \u0026#39;,\u0026#39;))\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; \u0026lt;/line\u0026gt; \u0026lt;polygon\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;points\u0026#34;\u0026gt; \u0026lt;xsl:value-of select=\u0026#34;number(substring-before(@TopLeft, \u0026#39;,\u0026#39;)) + number(substring-before(gl:EdgeLayout.Curve/mssgle:Curve/@End, \u0026#39;,\u0026#39;)) - 3\u0026#34;/\u0026gt;,\u0026lt;xsl:value-of select=\u0026#34;number(substring-after(@TopLeft, \u0026#39;,\u0026#39;)) + number(substring-after(gl:EdgeLayout.Curve/mssgle:Curve/@End, \u0026#39;,\u0026#39;))\u0026#34;/\u0026gt; \u0026lt;xsl:text\u0026gt; \u0026lt;/xsl:text\u0026gt; \u0026lt;xsl:value-of select=\u0026#34;number(substring-before(@TopLeft, \u0026#39;,\u0026#39;)) + number(substring-before(gl:EdgeLayout.Curve/mssgle:Curve/@End, \u0026#39;,\u0026#39;)) + 3\u0026#34;/\u0026gt;,\u0026lt;xsl:value-of select=\u0026#34;number(substring-after(@TopLeft, \u0026#39;,\u0026#39;)) + number(substring-after(gl:EdgeLayout.Curve/mssgle:Curve/@End, \u0026#39;,\u0026#39;))\u0026#34;/\u0026gt; \u0026lt;xsl:text\u0026gt; \u0026lt;/xsl:text\u0026gt; \u0026lt;xsl:value-of select=\u0026#34;number(substring-before(@TopLeft, \u0026#39;,\u0026#39;)) + number(substring-before(gl:EdgeLayout.Curve/mssgle:Curve/@EndConnector, \u0026#39;,\u0026#39;))\u0026#34;/\u0026gt;,\u0026lt;xsl:value-of select=\u0026#34;number(substring-after(@TopLeft, \u0026#39;,\u0026#39;)) + number(substring-after(gl:EdgeLayout.Curve/mssgle:Curve/@EndConnector, \u0026#39;,\u0026#39;))\u0026#34;/\u0026gt; \u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;style\u0026#34;\u0026gt;stroke:#006600; fill:#006600\u0026lt;/xsl:attribute\u0026gt; \u0026lt;/polygon\u0026gt; \u0026lt;/g\u0026gt; \u0026lt;/xsl:template\u0026gt; \u0026lt;xsl:template match=\u0026#34;gl:AnnotationLayout\u0026#34;\u0026gt; \u0026lt;g xmlns=\u0026#34;http://www.w3.org/2000/svg\u0026#34;\u0026gt; \u0026lt;text\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;x\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;number(substring-before(@TopLeft, \u0026#39;,\u0026#39;))\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;y\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;number(substring-after(@TopLeft, \u0026#39;,\u0026#39;)) + (number(substring-after(@Size, \u0026#39;,\u0026#39;)) div 2)\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;fill\u0026#34;\u0026gt;black\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;font-family\u0026#34;\u0026gt;Verdana\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;font-size\u0026#34;\u0026gt;12\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:value-of select=\u0026#34;@Text\u0026#34;/\u0026gt; \u0026lt;/text\u0026gt; \u0026lt;rect\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;x\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;substring-before(@TopLeft, \u0026#39;,\u0026#39;)\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;y\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;substring-after(@TopLeft, \u0026#39;,\u0026#39;)\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;width\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;substring-before(@Size, \u0026#39;,\u0026#39;)\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;height\u0026#34;\u0026gt;\u0026lt;xsl:value-of select=\u0026#34;substring-after(@Size, \u0026#39;,\u0026#39;)\u0026#34;/\u0026gt;\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;fill\u0026#34;\u0026gt;none\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;stroke\u0026#34;\u0026gt;black\u0026lt;/xsl:attribute\u0026gt; \u0026lt;xsl:attribute name=\u0026#34;stroke-width\u0026#34;\u0026gt;1\u0026lt;/xsl:attribute\u0026gt; \u0026lt;/rect\u0026gt; \u0026lt;/g\u0026gt; \u0026lt;/xsl:template\u0026gt; \u0026lt;/xsl:stylesheet\u0026gt; The result\rThe current drawing does not look as nice as the original SSIS, but hey - it\u0026rsquo;s the same layout! So, let\u0026rsquo;s draw something more complicated - the sequences with the elements aligned horizontally, vertically, with split paths:\nThe plan of the image is stored in the Sample2.xml file. When I run the transformation using diagram2svg.xsl file I get the following:\nNo sequences, some elements are hidden, no path annotations. A mess. I knew I did not implement all the features, but I wanted to see the aligned nodes. Well, not this time. Adding the sequence objects, curved arrows and cleaning the XSLT, to get a picture below is a subject for the next part of the series.\n",
"ref": "/2019/07/08/draw-the-ssis-package-using-svg-part-i/"
},{
"title": "Working with properties in ssisUnit",
"date": "",
"description": "",
"body": "One of the ssisUnit commands is a PropertyCommand. It allows you to read or set a property of the task, the package or the project. As of the time of writing - you can\u0026rsquo;t test the properties of the precedence constraints or data flow elements (but you can\u0026rsquo;t currently test data flow at all).\nHow do you use it?\nThe command is simple. You can get or set the property using the value for given property path. As usual - when you get the value, you leave the value blank. The path - well - is the path to the element in the package or the project. You use backslashes to separate elements in the package tree, and at the end, you use .Properties[PropertyName] to read the property. If you use the elements collection - like connection managers - you can pick a single element using square brackets and the name of this element.\nWhen you take a look into the source code (\\ssisUnit\\SSISUnit\\PropertyCommand.cs), you will see the examples like:\n\\Project\\ConnectionManagers[localhost.AdventureWorks2012.conmgr].Properties[ConnectionString] \\Package.Properties[CreationDate] \\Package.Connections[localhost.AdventureWorksDW2008].Properties[Description] \\Package.EventHandlers[OnError].Properties[Description] \\Package\\Sequence Container\\Script Task.Properties[Description] \\Package.EventHandlers[OnError].Variables[System::Cancel].Properties[Value] They all have one thing in common - they start with a backslash. It\u0026rsquo;s not required though, it\u0026rsquo;s a convention. I wrote that the PropertyPath uses the backslashes as the element separator. In the examples you see, that backslash is used interchangeably with a dot. In fact, it does not matter - during the parsing phase, all the backslashes are converted to dots.\nHow do you write the path? You can take a look at the package and name each part that you need to walk through, to get to your element. Or simply - when you take a look at any SSIS package source code, you will see, that each DTS:Executable has the DTS:refId attribute, that contains the path to the element in the package. You can safely use it as a path for the PropertyPath, or you can add a leading backslash.\nIn what scenarios you can use the PropertyCommand?\nchecking your programming standard (like verifying if DelayValidation always turned off or if the elements have a description other than the default) overwriting your project connection manager connection string (if you want to run the tests on the different server) automated testing of DFT buffer (you could run few tests with different values of DefaultBufferMaxRows and/or DefaultBufferSize and check the loading times) Personally, up to now, I used only the project connection manager overwriting. It wasn\u0026rsquo;t available until the recent ssisUnit update, but now you can use it in your projects.\n",
"ref": "/2018/11/05/working-with-properties-in-ssisunit/"
},{
"title": "What's new in ssisUnit?",
"date": "",
"description": "",
"body": "ssisUnit has a stable version for SSIS 2005 - 2014. It didn\u0026rsquo;t change much since August 2014, until August 2017. Then my Pull Request was merged, and it added some new functionality for ssisUnit.\nFirst - it works with SSIS 2017. You can probably use it with SSIS 2016 packages, but I didn\u0026rsquo;t test it yet. Although - you can\u0026rsquo;t check everything - there are problems when you want to use Control Flow Templates in your packages. When I tried to read the variable from the container included from the template - it hangs for a few seconds and returns an incorrect result. It\u0026rsquo;s something to investigate later.\nSecond - you can get and set the properties of the project and its elements. Like - overwriting project connection managers (I designed it with this particular need on my mind). You can now set the connection string the different server (or database) - in the PropertyPath of the PropertyCommand use \\Project\\ConnectionManagers, write the name of the connection manager with the extension, and use one of the Properties. You can do it during the Test setup (or all tests setup), but not during the test suite setup, as ssisUnit is not aware of the project until it loads it into the memory.\nThird - I added simple Dataset viewer/editor. It\u0026rsquo;s still a work in progress, but you can already use it to either visualise the data or to keep it within the test file (set IsResultsStored to true, open the DataSet and save the test suite file). You don\u0026rsquo;t have to prepare the XML representation of the dataset manually.\nThere are also some minor changes:\nFixed \u0026ldquo;No rows can be added to a DataGridView control that does not have columns. Columns must be added first\u0026rdquo; error in test results window Dataset, PackageRef, ConnectionRef extend SsisUnitBaseObject - now you can see XML code in the GUI for these elements Query Editor resizes with the window and returns unmodified code on cancel added several library tests (a great way to get to know the ssisUnit model better!) If you want to get the latest version - go to the GitHub repository, download or clone the source code, compile it, and run. That easy! In case you have some problems with compilation (or ssisUnit not working correctly) - open an issue or write a comment to this post.\n",
"ref": "/2018/09/12/whats-new-in-ssisunit/"
},{
"title": "Writing ssisUnit test using API",
"date": "",
"description": "",
"body": "In the post about using MSTest framework to execute ssisUnit tests, I used parts of the ssisUnit API model. If you want, you can write all your tests using this model, and this post will guide you through the first steps. I will show you how to write one of the previously prepared XML tests using C# and (again) MSTest.\nWhy MSTest? Because I don\u0026rsquo;t want to write some application that will contain all the tests I want to run, display if they pass or not. When I write the MSTest tests, I can run them using the Test Explorer in VS, using a command line, or in TFS.\nThe preparations\rI create a new project ssisUnitLearning.API within the ssisUnitLearning solution using right-click on a solution name, then Add \u0026gt; New Project \u0026gt; Visual C# \u0026gt; Class library (.NET framework) and using .NET 4.5 as a target.\nI rename the newly created Class1.cs file to Test_15_Users_Dataset.cs and will write the test from scratch. I set up the references - I need SSISUnit2017.dll(I will work with the latest version of ssisUnit compiled for SSIS 2017, but the standard SSISUnit2012.dll will work too) and SSISUnitBase.dll. I clear all the default references.\nI will use MSTest v2. I didn\u0026rsquo;t create a new project as a test project, but as a library, so I will add the testing framework using NuGet. Right-click the References node, and choose Manage NuGet packages and in the Browse section search for MsTest. Choose MsTest.Framework and MsTest.TestAdapter and install them. At the time of writing, I work with version 1.3.2.\nLast thing before I start writing the tests - I set up namespaces for ssisUnit and MsTest:\nusing SsisUnit; using SsisUnitBase.Enums; using SsisUnit.Packages; using SsisUnit.Enums; using Microsoft.VisualStudio.TestTools.UnitTesting; The reference\rI chose the 15_Users_Dataset.ssisUnit file as a reference. I will write the same test using the API.\nThe referenced ssisUnit test file contains:\nthe connection to the ssisUnitLearningDB database the reference to the 15_Users_Dataset package two datasets: expected and actual one test with setup, teardown and two asserts The API idea is simple: create the objects (connections, tests, asserts, commands etc.), add them to the ssisUnit test suite object and execute the suite.\nWriting the test\rI start with the scaffolding:\nnamespace ssisUnitLearning.API { [TestClass] public class Test_15_Users_Dataset { [TestMethod] public void SQL_MERGE_Users_Empty_table() { } } } I assume one ssisUnit test suite as one class and one ssisUnit test as one method. The class\u0026rsquo; name cannot begin with the number, so I add the Test_ prefix. I add the TestClass, and TestMethod attributes to expose my code to the MsTest framework. The test method is void and without the parameters.\nThe central element in the ssisUnit API is the SsisUnitSuite class - it contains all the test suite objects. At the beginning I create an empty object:\nSsisTestSuite ts = new SsisTestSuite(); Then I create the package and the database connection references (the code is split into the separate lines for readability):\n// the package to test PackageRef p = new PackageRef( \u0026#34;15_Users_Dataset\u0026#34;, @\u0026#34;C:\\Users\\Administrator\\source\\repos\\ssisUnitLearning\\ssisUnitLearning\\bin\\Development\\ssisUnitLearning.ispac\u0026#34;, \u0026#34;15_Users_Dataset.dtsx\u0026#34;, PackageStorageType.FileSystem ); // the connection for the datasets ConnectionRef c = new ConnectionRef( \u0026#34;ssisUnitLearningDB\u0026#34;, @\u0026#34;Provider=SQLNCLI11.1;Data Source=.\\SQL2017;Integrated Security=SSPI;Initial Catalog=ssisUnitLearningDB;Auto Translate=False\u0026#34;, ConnectionRef.ConnectionTypeEnum.ConnectionString ); The package reference is created with a 15_Users_Dataset name. The package is a part of the project saved in the file system, so I add the full path to the .ispac file (the project), the name of the package within the project (.dtsx file), and set the storage type. The connection reference has the name ssisUnitLearningDB and is stored as a connection string.\nNow I can add those two objects to the test suite. Because I can have many packages and database connections, they are stored in lists:\nts.ConnectionList.Add(c.ReferenceName, c); ts.PackageList.Add(p.Name, p); The next objects are the datasets - expected and actual. They are created with a reference to the test suite, a name, reference to the database connection and a SQL command. The false in the definitions is that the datasets will not store the results in the test suite. One thing that is a bit misleading is the reference to the test suite (the ts object). It\u0026rsquo;s not used to add the dataset to the test suite, but to make the dataset aware of the test suite. It\u0026rsquo;s because of some ssisUnit model designs, mostly used for the progress reporting and the test suite statistics. We add the datasets to the suite using the Add() method on the Datasets list.\nDataset expected = new Dataset( ts, \u0026#34;Empty table test: expected dataset\u0026#34;, c, false, @\u0026#34;SELECT * FROM( VALUES (CAST(\u0026#39;Name 1\u0026#39; AS VARCHAR(50)), CAST(\u0026#39;Login 1\u0026#39; AS CHAR(12)), CAST(1 AS BIT), CAST(1 AS INT), CAST(2 AS TINYINT), CAST(0 AS BIT)), (CAST(\u0026#39;Name 2\u0026#39; AS VARCHAR(50)), CAST(\u0026#39;Login 2\u0026#39; AS CHAR(12)), CAST(1 AS BIT), CAST(2 AS INT), CAST(2 AS TINYINT), CAST(0 AS BIT)), (CAST(\u0026#39;Name 3\u0026#39; AS VARCHAR(50)), CAST(\u0026#39;Login 3\u0026#39; AS CHAR(12)), CAST(0 AS BIT), CAST(3 AS INT), CAST(2 AS TINYINT), CAST(0 AS BIT)) )x(Name, Login, IsActive, Id, SourceSystemId, IsDeleted) ORDER BY Id; \u0026#34;); Dataset actual = new Dataset( ts, \u0026#34;Empty table test: actual dataset\u0026#34;, c, false, @\u0026#34;SELECT Name, Login, IsActive, SourceId, SourceSystemId, IsDeleted FROM dbo.Users ORDER BY SourceId;\u0026#34;); // add the datasets to the test suite ts.Datasets.Add(expected.Name, expected); ts.Datasets.Add(actual.Name, actual); The same situation with the test and other objects - we make them aware of the test suite and add them to the same test suite. The test has a name (SQL MERGE Users: Empty table), references the 15_Users_Dataset package, has no password (null) and works with the SQL Merge Users task ({FB549B65-6F0D-4794-BA8E-3FF975A6AE0B}). As for the last part - you can set the task object either as the ID of the element in the SSIS package (as in the example) or the PackagePath. I chose the ID, as I copied it from the .ssisUnit file (the wizard in the ssisUnit GUI works with the IDs)\nTest t = new Test( ts, \u0026#34;SQL MERGE Users: Empty table\u0026#34;, \u0026#34;15_Users_Dataset\u0026#34;, null, \u0026#34;{FB549B65-6F0D-4794-BA8E-3FF975A6AE0B}\u0026#34; ); ts.Tests.Add(t.Name, t); The test has a setup command. The stg.Users table is empty, so I use the SqlCommand to fill it with the data. In the end, I add the command to the collection of the TestSetup commands of the test.\nSqlCommand s1 = new SqlCommand( ts, \u0026#34;ssisUnitLearningDB\u0026#34;, false, @\u0026#34;WITH stgUsers AS ( SELECT * FROM ( VALUES (\u0026#39;Name 1\u0026#39;, \u0026#39;Login 1\u0026#39;, 1, 1, 2, -1), (\u0026#39;Name 2\u0026#39;, \u0026#39;Login 2\u0026#39;, 1, 2, 2, -1), (\u0026#39;Name 3\u0026#39;, \u0026#39;Login 3\u0026#39;, 0, 3, 2, -1) )x (Name, Login, IsActive, Id, SourceSystemId, InsertedAuditId) ) INSERT INTO stg.Users ( Name, Login, IsActive, Id, SourceSystemId, InsertedAuditId ) SELECT Name, Login, IsActive, Id, SourceSystemId, InsertedAuditId FROM stgUsers ;\u0026#34;); t.TestSetup.Commands.Add(s1); The test has two asserts:\nchecking if the dbo.Users table has 3 records, and if those 3 records look like the expected dataset. The assert is a definition of the expected result, and it executes a command to get the actual. Take a look at the first assert\u0026rsquo;s definition. It has the reference to the test suite (ts), the test (t), has a name (Assert: Added 3 records), expects 3 as a result, and is executed after the task executes (false). It runs a SqlCommand referencing the test suite (ts) using ssisUnitLearningDB connection reference, returns a value (true), and the command to run is SELECT COUNT(*) FROM dbo.Users.\nSimilar with the second assert, the difference is the command - a DatasetCommand - that compares the expected and the actual datasets and has no name (\u0026quot;\u0026quot;). After the asserts are created, I add them to the test.\nSsisAssert a1 = new SsisAssert( ts, t, \u0026#34;Assert: Added 3 records\u0026#34;, 3, false); a1.Command = new SqlCommand( ts, \u0026#34;ssisUnitLearningDB\u0026#34;, true, \u0026#34;SELECT COUNT(*) FROM dbo.Users;\u0026#34; ); SsisAssert a2 = new SsisAssert( ts, t, \u0026#34;Assert: dbo.Users has expected records\u0026#34;, true, false ); a2.Command = new DataCompareCommand( ts, \u0026#34;\u0026#34;, expected, actual ); t.Asserts.Add(a1.Name, a1); t.Asserts.Add(a2.Name, a2); The last part of the test is the TestTeardown. I tidy up after the tests running TRUNCATE TABLE commands on the stg.Users and dbo.Users tables. The commands are added to the TestTeardown collection.\nSqlCommand t1 = new SqlCommand( ts, \u0026#34;ssisUnitLearningDb\u0026#34;, false, \u0026#34;TRUNCATE TABLE stg.Users;\u0026#34; ); SqlCommand t2 = new SqlCommand( ts, \u0026#34;ssisUnitLearningDb\u0026#34;, false, \u0026#34;TRUNCATE TABLE dbo.Users;\u0026#34; ); t.TestTeardown.Commands.Add(t1); t.TestTeardown.Commands.Add(t2); Finally, the test and all the test suite is ready, and I can run it using the Execute() command.\nts.Execute(); To check if all the ssisUnit test suite asserts finished successfully I use Assert.AreEqual() command of the MsTest. I take the number of the ssisUnit asserts that passed and compare it to the expected value. The ssisUnit test suite holds the Statistics object with the numbers of tests and asserts executed, passed and failed, so I use it to get the value:\nAssert.AreEqual(2, ts.Statistics.GetStatistic(StatisticEnum.AssertPassedCount)); When I run this, I get the error: Message: Assert.AreEqual failed. Expected:\u0026lt;2\u0026gt;. Actual:\u0026lt;3\u0026gt;. Why 3?! I have only two asserts! The reason becomes clear when I run the .ssisUnit test in the GUI - the third assert comes from the TaskResult.\nSo, the proper command is:\nAssert.AreEqual(3, ts.Statistics.GetStatistic(StatisticEnum.AssertPassedCount)); And that\u0026rsquo;s it. The test suite is finished.\nSummary\rThe API model of the ssisUnit is not complicated, but sometimes its a bit unintuitive. I would like to operate more on the prepared objects than on their names, but maybe that\u0026rsquo;s just me. A bit odd (at the beginning) is fact, that I have to set the test suite object as the parameter of the objects other than CommandRef and PackageRef, and then also add the objects to the test suite.\nIf you want to know more about ssisUnit API model, I encourage you to read the code in the SsisUnit.Tests folder of the ssisUnit source code, as there\u0026rsquo;s a lot of examples how to use the API.\nThe full code is available on GitHub.\n",
"ref": "/2018/08/13/writing-ssisunit-test-using-api/"
},{
"title": "Testing the loops in ssisUnit",
"date": "",
"description": "",
"body": "In the Q \u0026amp; A post after the webinar on ssisUnit (in 2013) John Welch answered the question about the loops:\n\u0026ldquo;If possible, can you demo if a container can be executed? Especially a For loop or For Each loop?\u0026rdquo;\nI didn’t have time to demo this during the presentation. Good thing too, because there was an error around handling of containers. This has now been fixed in the source code version of the project.\nThere is no example though, so let\u0026rsquo;s add one.\nThe setup\rThe example is simple: it will add the number six within the For Loop, and I will check for the final results. The sample script 60_Loops.dtsx in the ssisUnitLearning project contains the loop, the expression and two variables: i and v. The first is used for the iterations, the second for keeping the final value. The formula of the expression is: @[User::v] = @[User::v] + 6.\nThe loop iterates 7 times. So the final value I\u0026rsquo;m expecting is 42 - the answer to life, the universe, everything. Instead of setting the OnPostExecute breakpoint at the container level and checking for the value I will build two quick tests - for the FLC Evaluate expression and EXPR Add 6 objects.\nThe tests\rI will add the tests using the File \u0026gt; New From Package ... option and select the container and the expression. There is no tests\u0026rsquo; setup/teardown needed, so I leave it blank. The asserts will use the VariableCommand to read the values of the v variable. For the container, I\u0026rsquo;m setting 42 as the ExpectedResult, and for the expression, I\u0026rsquo;m setting \u0026hellip; Wait for a moment and think: what value should be written? What are we testing?\nYou can congratulate yourself if you think the expected value is six. It\u0026rsquo;s the unit test, you are testing the individual component, so you just check the correctness of the expression. It\u0026rsquo;s the container that calculates the aggregated value.\nAfter setting up the tests and running you should have a working test, that verifies that the package works as expected.\nThe summary\rTesting the container is no different than the other SSIS tasks. The only catch is when you want to check the value of something inside the container. Remember, always think about it as the standalone component.\n",
"ref": "/2018/07/10/testing-the-loops-in-ssisunit/"
},{
"title": "Setting package references in ssisUnit",
"date": "",
"description": "",
"body": "When you set the packages\u0026rsquo; references in the ssisUnit tests you have four options for the source (StoragePath) of the package:\nFilesystem - references the package in the filesystem - either within a project or standalone MSDB - package stored in the msdb database Package store - packages managed by Integration Services Service SsisCatalog - references the package in the Integration Services Catalog In this post, I will show you how to set the package reference (PackageRef) for each option.\nFilesystem\rIn the previous posts about ssisUnit, I used the packages from the project located in the file system. So just to have a complete reference:\nif you use the standalone package - use the path to the package if you use the package in the project - use the path to the .ispac file, and then the name of the package (without the path) MSDB\rIf you use the legacy Package Deployment Model, you can store your packages in the msdb database. You have to provide the same details as in the SQL Agent\u0026rsquo;s job step for SSIS subsystem when you choose SQL Server as the package source:\nThe SQL Server instance name The full package path, starting with a backslash Note, that the package does not end with the .dtsx extension.\nPackage store\rIt\u0026rsquo;s also related to the legacy Package Deployment Model. This time you pick either the packages in the default folder for the Integration Services Service or the msdb database. In the documentation, you can find that the package store is related to the filesystem, but the package store really means the locations that the SQL Server Integration Services Service is aware of. Those locations are defined in the file MsDtsSrvr.ini.xml located in the C:\\Program Files\\Microsoft SQL Server\\140\\DTS\\Binn folder.\nBecause it\u0026rsquo;s managed by the service you set up:\nthe server name (not the instance name) the path to the package (also without the .dtsx extension) SSIS Catalog\rWhen you use the SsisCatalog option:\nprovide the name of the SQL Server, where the Integration Services Catalog is stored set the path to the project set the full name of the package Currently, only the Windows Authentication is supported, so run ssisUnit with the account that has the proper privileges. Also note, that when you set up the path to the package in the SQL Agent SSIS step, you use the full path to the package, like \\SSISDB\\ssisUnit\\ssisUnitLearning\\60_Loops.dtsx. In ssisUnit, you don\u0026rsquo;t use the \\SSISDB\\ part.\n",
"ref": "/2018/07/05/setting-package-references-in-ssisunit/"
},{
"title": "Executing ssisUnit tests in MSTest framework",
"date": "",
"description": "",
"body": "One of the drawbacks of ssisUnit is that it has only its own test runner. You can run the tests either using GUI or the console application, but the output is not that easy to parse and to present on the report. I\u0026rsquo;m used to working with Pester output or using NUnit/MSTest frameworks that integrates nicely with other tools. In this post, I will show you how to prepare and execute ssisUnit tests using MSTest framework, how to automate this process, and how to run those tests with TFS (or VSTS).\nI work a lot with TFS. It has a neat feature - you can run tests during the build or release and verify how your code is doing. The goal is: I want to run ssisUnit tests of my ssisUnitLearning project in TFS and see the outcome in the reports.\nOn the picture above three of my four tests failed. Select DFT_LoadUsers test - it failed because of the DefaultBufferMaxRow setting for the Data Flow Task. This test is written for 20_DataFlow.dtsx - the package to load the dbo.Users table using Data Flow Task, not SQL Task. There are two settings for the DFT LoadUsers I don\u0026rsquo;t want to be changed: DefaultBufferMaxRows and DelayValidation. Apparently, the number of rows is not set to the required value.\nPropertyCommand\rTo check for the package\u0026rsquo;s setting I use PropertyCommand - it can read almost everything you need from the package. Almost, because it does not read from the _Data Flow Task_s elements, so, for example, you won\u0026rsquo;t read the Derived Column, Conditional Split or OLEDB Destination parameters. To see what you are able to get or set, take a look at the LocatePropertyValue() method in the PropertyCommand.cs script. At the end you will find some examples, like:\n\\Package\\Sequence Container\\Script Task.Properties[Description] And through the method\u0026rsquo;s code, there are string checks for the keywords: Package, Variables, Properties, Connections, EventHandlers. You just have to write the path using one of them.\nTo verify my DFT_Users task, I use two paths:\n\\Package\\DFT LoadUsers.Properties[DefaultBufferMaxRows] \\Package\\DFT LoadUsers.Properties[DelayValidation] Nothing extraordinary, but works. You can use it for example to check the standards of your SSIS packages (like: every task\u0026rsquo;s Description has to be different than the default value or the checkpoint file has the same name as the package, but different extension, \u0026hellip;). The file 20_DataFlow.ssisUnit test is located in the Tests folder. Now, to the\nMSTest part\rMSTest v2 is the open source test framework by Microsoft. I will not write a lot about it. If you want to learn more - read the excellent blog posts by Gérald Barré.\nI took the idea and parts of the code from Ravi Palihena\u0026rsquo;s blog post about ssisUnit testing and his GitHub repository. Then I read the source code of the SsisUnitTestRunner, SsisUnitTestRunnerUI and posts by Gérald and changed the tests a bit.\nI will use MSTest to execute ssisUnit tests from the file 20_DataFlow.ssisUnit. For that, I created a new Visual C# \u0026gt; Test \u0026gt; Unit Test Project (.NET Framework) - ssisUnitLearning.MSTest - within the solution. I also set the reference to the SsisUnit2017.dll and SsisUnitBase.dll libraries and loaded required namespaces\nusing SsisUnit; using SsisUnitBase.EventArgs; I decided that each test file has its own class with the same name as the name of the ssisUnit file. But the name of the class can\u0026rsquo;t start with the number, so I added a TestUnit prefix. I load the ssisUnit tests at the beginning, and it\u0026rsquo;s enough to do it once, so I add the ClassInitialize attribute to the Init() method. The tests are executed from the .dll file created after compilation, and I chose to load the .ssisUnit file with all the tests using relative path.\n[ClassInitialize] public static void Init(TestContext tc) { // ssisUnitLearningMSTest.dll is in subfolder bin\\Debug and Tests folder is parallel, that\u0026#39;s why ..\\..\\.. testSuite = new SsisTestSuite(@\u0026#34;..\\..\\..\\Tests\\01_OnlyParametersAndVariables.ssisUnit\u0026#34;); } The Init() method has to follow the rules I got first from the error message: Method ssisUnitLearningMSTest.TestUnit_01_OnlyParametersAndVariables.Init has wrong signature. The method must be static, public, does not return a value and should take a single parameter of type TestContext. Additionally, if you are using async-await in method then return-type must be Task. Also the testSuite parameter, that holds the .ssisUnit file contents has to be static.\nprivate static SsisTestSuite testSuite; private SsisUnitBase.TestResult testResult; private Test test; private Context context; private bool isTestPassed; private List\u0026lt;string\u0026gt; messages = new List\u0026lt;string\u0026gt;(); The other variables are:\ntestResult holds the result of the ssisUnit test test is the ssisUnit test object context is the ssisUnit test context isTestPassed has the information if the test succeeded messages is a list of all assertions\u0026rsquo; errors in the ssisUnit test Each test is written as a class decorated by the TestMethod attribute. The tests are loaded to the testSuite object, so to get the ssisUnit test I want to run I get it from the Tests dictionary by the name. Then I create the context for it.\nssisUnit uses events and responses. When the assertion completes, the AssertCompleted event is called. I subscribe to this event before running the test and unsubscribe after the test is executed and all asserts are finished. I also set the flag isTestPassed to true. It will be set to false if any of the assertions fail.\nIn the end, I run two MSTest asserts. The second checks if all the assertions were successful. If not it writes all the error messages. But as I set isTestPassed flag as true in the beginning the MSTest\u0026rsquo;s assert will always return true, even if I don\u0026rsquo;t run the tests. So I added the first assertion to check if the test was executed. This can happen if there are errors in the .ssisUnit or .dtsx file.\n[TestMethod] public void DFT_LoadUsers() { test = testSuite.Tests[\u0026#34;DFT LoadUsers\u0026#34;]; context = testSuite.CreateContext(); testSuite.AssertCompleted += TestSuiteAssertCompleted; isTestPassed = true; bool rs = test.Execute(context); testSuite.AssertCompleted -= TestSuiteAssertCompleted; Assert.AreEqual(true, rs, \u0026#34;Package did not execute\u0026#34;); Assert.AreEqual(true, isTestPassed, System.String.Join(\u0026#34;;\u0026#34;, messages)); } The TestSuiteAssertCompleted method verifies the TestExecResult of each assertion and sets the isTestPassed flag. If there is an error, it\u0026rsquo;s added to the messages list.\nprivate void TestSuiteAssertCompleted(object sender, AssertCompletedEventArgs e) { if (e.AssertName != null) { testResult = e.TestExecResult; isTestPassed = isTestPassed \u0026amp; e.TestExecResult.TestPassed; if(e.TestExecResult.TestPassed == false) { messages.Add(e.AssertName + \u0026#34; failed: \u0026#34; + e.TestExecResult.TestResultMsg); } } } The full code:\nusing SsisUnit; using SsisUnitBase.EventArgs; using System.Collections.Generic; using Microsoft.VisualStudio.TestTools.UnitTesting; namespace ssisUnitLearningMSTestExample { [TestClass] public class TestUnit_20_DataFlow { private static SsisTestSuite testSuite; private SsisUnitBase.TestResult testResult; private Test test; private Context context; private bool isTestPassed; private List messages = new List(); [ClassInitialize] public static void Init(TestContext tc) { // ssisUnitLearningMSTest.dll is in subfolder bin\\\\Debug and Tests folder is parallel, that\u0026#39;s why ..\\..\\.. testSuite = new SsisTestSuite(@\u0026#34;..\\..\\..\\Tests\\20_DataFlow.ssisUnit\u0026#34;); } private void TestSuiteAssertCompleted(object sender, AssertCompletedEventArgs e) { if (e.AssertName != null) { testResult = e.TestExecResult; isTestPassed = isTestPassed \u0026amp; e.TestExecResult.TestPassed; if(e.TestExecResult.TestPassed == false) { messages.Add(e.AssertName + \u0026#34; failed: \u0026#34; + e.TestExecResult.TestResultMsg); } } } [TestMethod] public void DFT_LoadUsers() { test = testSuite.Tests[\u0026#34;DFT LoadUsers\u0026#34;]; context = testSuite.CreateContext(); testSuite.AssertCompleted += TestSuiteAssertCompleted; isTestPassed = true; bool rs = test.Execute(context); testSuite.AssertCompleted -= TestSuiteAssertCompleted; Assert.AreEqual(true, rs, \u0026#34;Package did not execute\u0026#34;); Assert.AreEqual(true, isTestPassed, System.String.Join(\u0026#34;;\u0026#34;, messages)); } } } There has to be an easier way\rIn the beginning, I created two MSTest files - first for 01_OnlyParametersAndVariables.ssisUnit and second for 20_DataFlow.ssisUnit. Only to see if it works, and how to use it. When the number of tests started to grow, I started searching for the automatic test generation. I found idea sparking post by Colin Breck, watched the T4 Templates course on Pluralsight and prepared the solution.\nI started by creating the template using right-click on the ssisUnitLearning.MSTest project, selecting New Item and the Text Template. Then I copied and pasted the content from one of the test files and modified it with two loops - for each .ssisUnit test file I create the class that contains the test method for each ssisUnit test. ReadTestFile() and CleanName() methods are modified versions of the code from the formerly mentioned course on Pluralsight. And that\u0026rsquo;s it! When you compile the project, the template will generate all the test classes and methods, and they will be available in the .dll file to use in the Test Explorer. The code is available in the ssisUnitTestTemplate.tt script.\nTFS part\rThe last thing is to use the MSTest project in the TFS build. The whole building and deployment process of the database, SSIS project, tests, configuration, etc. is a topic for one of the next posts, but for running the tests it\u0026rsquo;s enough to:\nuse the MSBuild or Visual Studio Build task to compile the project (I used MSBuild) use the Visual Studio Test task and leave all the defaults as is You should now have the reports in TFS that show you the ssisUnit tests results.\nSummary\rBasically, to run the ssisUnit tests using MSTest I had to port some parts of the ssisUnitTestRunner code to the framework. Then, to automate test generation, I used T4 template to iterate through all the test files and defined tests. Now it\u0026rsquo;s enough to prepare the next ssisUnit suite, save and compile the MSTest project to have all the tests available in Visual Studio\u0026rsquo;s Test Runner or - after the project builds - in TFS.\n",
"ref": "/2018/06/15/executing-ssisunit-tests-in-mstest-framework/"
},{
"title": "Books",
"date": "",
"description": "",
"body": "One day I asked myself a question - how many books do I read during the year? I assume it\u0026rsquo;s about 40, but wanted to check it. So for the year 2018, I started writing down the books I read. I published it on twitter, but some books took me quite some time to finish (like ~1000 pages \u0026ldquo;The Way Of Kings\u0026rdquo; by Brandon Sanderson), and I forgot the numbers. So - besides publishing on twitter, I began writing down right here. The dates (when I finished reading the book) start from #4 in 2018, as I started to pay attention to them.\n2024\rMichael J. Sullivan \u0026ldquo;Age of Myth\u0026rdquo; (PL: \u0026ldquo;Epoka Mitu\u0026rdquo;) 02.01.2024. Toshikazu Kawaguchi (PL: \u0026ldquo;Zanim wyblakną wspomnienia\u0026rdquo;) 10.01.2024. Michael J. Sullivan \u0026ldquo;Age of Swords\u0026rdquo; (PL: \u0026ldquo;Epoka Mieczy\u0026rdquo;) 25.01.2024. Marcin Mortka \u0026ldquo;Pas Ilmarinena\u0026rdquo; (PL) 02.02.2024. Jordan Akpojaro, Rachel Firth, Minna Lacey \u0026ldquo;Philosophy for Beginners\u0026rdquo; (PL: \u0026ldquo;Filozofia dla początkujących\u0026rdquo;) 03.02.2024. Joanna W. Gajzler \u0026ldquo;Necrovet. Metody leczenia drakonidów\u0026rdquo; (PL) 09.02.2024. Radomir Wit \u0026ldquo;Sejm Wita\u0026rdquo; (PL) 13.02.2024. Remigiusz Mróz \u0026ldquo;Operacja Mir\u0026rdquo; (PL) 21.02.2024. Jakub Sieczko \u0026ldquo;Pogo\u0026rdquo; (PL) 26.02.2024. Janina Bąk \u0026ldquo;Statystycznie rzecz biorąc 2\u0026rdquo; (PL) 09.03.2024. Nir Eyal \u0026ldquo;Hooked: How to Build Habit-Forming Products\u0026rdquo; (PL: \u0026ldquo;Skuszeni\u0026rdquo;) 18.03.2024. Elżbieta Cherezińska \u0026ldquo;Północna Droga. Saga Sigrun\u0026rdquo; 19.03.2024. Krzysztof Kościelski \u0026ldquo;Ego\u0026rdquo; (PL) 25.03.2024. Jack Welch \u0026ldquo;Winning\u0026rdquo; (PL: \u0026ldquo;Winning znaczy zwyciężać\u0026rdquo;) 29.03.2024. Olga Gitkiewicz \u0026ldquo;Nie zdążę\u0026rdquo; (PL) 03.04.2024. Brandon Sanderson \u0026ldquo;Sunlit Man\u0026rdquo; (PL: \u0026ldquo;Słoneczny mąż\u0026rdquo;) 10.04.2024. A. C. Cobble \u0026ldquo;Benjamin Ashwood\u0026rdquo; (PL: \u0026ldquo;Beniamin Ashwood\u0026rdquo;) 19.04.2024. Remigiusz Mróz \u0026ldquo;Paderborn\u0026rdquo; (PL) 25.04.2024. Klaudia Fryc-Mallick \u0026ldquo;Masala @ work. Przewodnik po współpracy z Indusami\u0026rdquo; (PL) 04.05.2024. A. C. Cobble \u0026ldquo;Endless Flight\u0026rdquo; (PL: \u0026ldquo;Nieustanna ucieczka\u0026rdquo;) 13.05.2024. Ray Nayler \u0026ldquo;The Mountain in the Sea\u0026rdquo; (PL: \u0026ldquo;Góra pod morzem\u0026rdquo;) 22.05.2024. Remigiusz Mróz \u0026ldquo;Berdo\u0026rdquo; (PL) 28.05.2024. Marta Kisiel \u0026ldquo;Mała draka w fińskiej dzielnicy (PL) 01.06.2024. Paweł Kopijer \u0026ldquo;Mrok we krwi\u0026rdquo; (PL) 09.06.2024. Paweł Kopijer \u0026ldquo;Czempion Semaela\u0026rdquo; (PL) 13.06.2024. Paweł Kopijer \u0026ldquo;Sploty przeznaczenia\u0026rdquo; (PL) 19.06.2024. T. Kingfisher \u0026ldquo;Thornhedge\u0026rdquo; (PL: \u0026ldquo;Cierń) 21.06.2024. A. C. Cobble \u0026ldquo;Dark Territory\u0026rdquo; (PL: \u0026ldquo;Wrogie terytorium\u0026rdquo;) 28.06.2024. A. C. Cobble \u0026ldquo;Empty Horizon\u0026rdquo; (PL: \u0026ldquo;Pusty horyzont\u0026rdquo;) 10.07.2024. Michael J. Sullivan \u0026ldquo;Age of War\u0026rdquo; (PL: \u0026ldquo;Epoka wojny\u0026rdquo;) 23.07.2024. Becky Chambers \u0026ldquo;A Psalm for the Wild-Built\u0026rdquo; (PL: \u0026ldquo;Psalm dla zbudowanych w dziczy\u0026rdquo;) 26.07.2024. William Zinsser \u0026ldquo;Writing to learn\u0026rdquo; 06.08.2024. Przemysław Duda \u0026ldquo;Miecze i słowa (PL) 10.08.2024. Janusz Stankiewicz \u0026ldquo;Gorath. Uderz pierwszy\u0026rdquo; (PL) 14.08.2024. Janusz Stankiewicz \u0026ldquo;Gorath. Krawędź Otchłani\u0026rdquo; (PL) 21.08.2024. Franciszek M. Piątkowski \u0026ldquo;Powiernik\u0026rdquo; (PL) 26.08.2024. Franciszek M. Piątkowski \u0026ldquo;Widzący\u0026rdquo; (PL) 28.08.2024. Franciszek M. Piątkowski \u0026ldquo;Obrońcy\u0026rdquo; (PL) 31.08.2024. Franciszek M. Piątkowski \u0026ldquo;Wybrani\u0026rdquo; (PL) 03.09.2024. Joanna W. Gajzler \u0026ldquo;Necrovet. Radiologia bytów nadprzyrodzonych\u0026rdquo; (PL) 06.09.2024. Seo Dongwon 달 드링크 서점 (EN:\u0026ldquo;The Bookstore For Drinking Moon Glow\u0026rdquo;, PL: \u0026ldquo;Księgarnia Napojów Księżyca\u0026rdquo;) 11.09.2024. Marcin Mortka \u0026ldquo;Ostrza Burzy\u0026rdquo; (PL) 16.09.2024. Przemysław Duda \u0026ldquo;Pięć pustych tronów\u0026rdquo; (PL) 23.09.2024. Przemysław Duda \u0026ldquo;Drogi donikąd\u0026rdquo; (PL) 30.09.2024. Przemysław Duda \u0026ldquo;Kompania cieni\u0026rdquo; (PL) 06.10.2024. Becky Chambers \u0026ldquo;A Prayer for the Crown-Shy\u0026rdquo; (PL: \u0026ldquo;Modlitwa za nieśmiałe korony drzew\u0026rdquo;) 10.10.2024. A. C. Cobble \u0026ldquo;Burning Tower\u0026rdquo; (PL: \u0026ldquo;Płonąca wieża\u0026rdquo;) 17.10.2024. A. C. Cobble \u0026ldquo;Weight of the Crown\u0026rdquo; (PL: \u0026ldquo;Ciężar korony\u0026rdquo;) 22.10.2024 Klaudia Gregorczyk \u0026ldquo;Piekielna narzeczona\u0026rdquo; (PL) 24.10.2024. (audiobook) Magdalena Salik \u0026ldquo;Wściek\u0026rdquo; (PL) 02.11.2024. Klaudia Gregorczyk \u0026ldquo;Piekielna królowa\u0026rdquo; (PL) (audiobook) 02.11.2024. Marcin Halski \u0026ldquo;Czas zaślepionych\u0026rdquo; (PL) 08.11.2024. Marcin Halski \u0026ldquo;Kres zaślepionych\u0026rdquo; (PL) 13.11.2024. Klaudia Gregorczyk \u0026ldquo;Piekielna żona\u0026rdquo; (PL) (audiobook) 14.11.2024. Phil Knight \u0026ldquo;Shoe Dog\u0026rdquo; (PL \u0026ldquo;Sztuka zwycięstwa. Wspomnienia twórcy Nike\u0026rdquo;) 21.11.2024. Brandon Sanderson, Steven Michael Bohls \u0026ldquo;Lux\u0026rdquo; (PL: \u0026ldquo;Lux\u0026rdquo;) (audiobook) 30.11.2024. Wojciech Dutka \u0026ldquo;Bractwo Mandylionu\u0026rdquo; (PL) 05.12.2024. Marcin Kowal \u0026ldquo;Stażysta. Tom 1\u0026rdquo; (PL) (audiobook) 07.12.2024. Peter A. Flannery \u0026ldquo;Decimus Fate and the Talisman of Dreams\u0026rdquo; (PL: \u0026ldquo;Decimus Fate i Talizman Marzeń\u0026rdquo;) 10.12.2023. Peter A. Flannery \u0026ldquo;Decimus Fate and the Butcher of Guile\u0026rdquo; (PL: \u0026ldquo;Decimus Fate i Rzeźnik z Guile\u0026rdquo;) 13.12.2024. Luiza Dobrzyńska \u0026ldquo;Pol Trek. Na dobre i na jeszcze gorsze\u0026rdquo; (PL) 19.12.2024. In progress, in the background:\nSzymon Drejewicz \u0026ldquo;Zrozumieć BPMN\u0026rdquo; (PL) 2023\rMarcin Mortka \u0026ldquo;Druga Burza\u0026rdquo; (PL) 07.01.2023. Marcin Mortka \u0026ldquo;Utracona Godzina\u0026rdquo; (PL) 12.01.2023. Marcin Mortka \u0026ldquo;Skrzynia pełna dusz\u0026rdquo; (PL) 18.01.2023. Brandon Sanderson \u0026ldquo;The Alloy of Law\u0026rdquo; (PL: \u0026ldquo;Stop prawa\u0026rdquo;) 23.01.2023. Brandon Sanderson \u0026ldquo;Shadows of Self\u0026rdquo; (PL: \u0026ldquo;Cienie tożsamości\u0026rdquo;) 02.02.2023. Brandon Sanderson \u0026ldquo;The Bands of Mourning\u0026rdquo; (PL: \u0026ldquo;Żałobne opaski\u0026rdquo;) 12.02.2023. Maria Peszek \u0026ldquo;Nakurwiam Zen\u0026rdquo; (PL) 15.02.2023. Brandon Sanderson \u0026ldquo;Mistborn: Secret Story\u0026rdquo; (PL: \u0026ldquo;Z mgły zrodzony: Tajna historia\u0026rdquo;) 16.02.2023. Brandon Sanderson \u0026ldquo;The Lost Metal\u0026rdquo; (PL: \u0026ldquo;Zaginiony metal\u0026rdquo;) 28.02.2023. Anna Sólyom \u0026ldquo;Neko Café\u0026rdquo; (PL: \u0026ldquo;Kocia kawiarnia\u0026rdquo;) 02.03.2023. Andrzej Dragan \u0026ldquo;Kwantechizm 2.0, czyli klatka na ludzi\u0026rdquo; (PL) 12.03.2023. Wojciech Dutka \u0026ldquo;Apostata\u0026rdquo; (PL) 22.03.2023. Yuval Noah Harari \u0026ldquo;Homo Deus: A Brief History of Tomorrow\u0026rdquo; (PL: \u0026ldquo;Homo Deus. Krótka historia jutra\u0026rdquo;) 16.04.2023. Remigiusz Mróz \u0026ldquo;Kabalista\u0026rdquo; (PL) 20.04.2023. Brandon Sanderson \u0026ldquo;Tress of the Emerald Sea\u0026rdquo; (PL: \u0026ldquo;Warkocz ze Szmaragdowego Morza\u0026rdquo;) 30.04.2023. S. A. Chakraborty \u0026ldquo;The City of Brass\u0026rdquo; (PL: \u0026ldquo;Miasto Mosiądzu\u0026rdquo;) 11.05.2023. Katarzyna Michalczak \u0026ldquo;Synu, jesteś kotem\u0026rdquo; (PL) 13.05.2022. Janek Świtała \u0026ldquo;Polski SOR\u0026rdquo; (PL) 21.05.2023. S. A. Chakraborty \u0026ldquo;The Kingdom of Copper\u0026rdquo; (PL: \u0026ldquo;Królestwo Miedzi\u0026rdquo;) 29.05.2023. S. A. Chakraborty \u0026ldquo;The Empire of Gold\u0026rdquo; (PL: \u0026ldquo;Imperium Złota\u0026rdquo;) 12.06.2023. Sanaka Hiiragi \u0026ldquo;人生写真館の奇跡\u0026rdquo; (EN: \u0026ldquo;The memory photographers aka The lantern of lost memories\u0026rdquo;, PL: \u0026ldquo;Fotograf utraconych wspomnień\u0026rdquo;) 13.06.2023. T. Kingfisher \u0026ldquo;Nettle \u0026amp; Bone\u0026rdquo; (PL: \u0026ldquo;Pokrzywa i kość\u0026rdquo;) 05.07.2023. Barbara Oakley \u0026ldquo;A Mind for Numbers: How to Excel at Math and Science (Even If You Flunked Algebra)\u0026rdquo; (PL: \u0026ldquo;Głowa do liczb\u0026rdquo;) 10.07.2023. Marcin Mortka \u0026ldquo;Widmowy zagon\u0026rdquo; (PL) 23.07.2023. Joanna W. Gajzler \u0026ldquo;Necrovet. Usługi weterynaryjno-nekromantyczne\u0026rdquo; (PL) 01.08.2023. Marcus Aurelius Τὰ εἰς ἑαυτόν (EN: \u0026ldquo;Meditations\u0026rdquo;, PL: \u0026ldquo;Rozmyślania (do siebie samego)\u0026rdquo;) 07.08.2023. Janusz Głowacki \u0026ldquo;Z głowy\u0026rdquo; (PL) 18.08.2023. Brandon Sanderson \u0026ldquo;The Frugal Wizard\u0026rsquo;s Handbook for Surviving Medieval England\u0026rdquo; (PL: \u0026ldquo;Oszczędnego czarodzieja poradnik przetrwania w średniowiecznej Anglii\u0026rdquo;) 21.08.2023. Janina Bąk \u0026ldquo;Statystycznie rzecz biorąc\u0026rdquo; (PL) 27.08.2023. Rafał Kosik \u0026ldquo;Mars\u0026rdquo; (PL) 01.09.2023. Agnieszka Kozak \u0026ldquo;Nastolatek potrzebuje wsparcia\u0026rdquo; (PL) 11.09.2023. Travis Baldree \u0026ldquo;Legends \u0026amp; Lattes. A Novel of High Fantasy and Low Stakes\u0026rdquo; (PL: \u0026ldquo;Legendy \u0026amp; Latte. Heroiczna opowieść o sprawach przyziemnych\u0026rdquo;) 16.09.2023. Mitchell Hogan \u0026ldquo;A Crucible of Souls\u0026rdquo; (PL: \u0026ldquo;Tygiel dusz\u0026rdquo;) 28.09.2023. Mitchell Hogan \u0026ldquo;Blood of Innocents\u0026rdquo; (PL: \u0026ldquo;Krew niewinnych\u0026rdquo;) 08.10.2023. Mitchell Hogan \u0026ldquo;A Shattered Empire\u0026rdquo; (PL: \u0026ldquo;Rozbite Imperium\u0026rdquo;) 23.10.2023. Brandon Sanderson \u0026ldquo;Yumi and the Nightmare Painter\u0026rdquo; (PL: \u0026ldquo;Yumi i malarz koszmarów\u0026rdquo;) 31.10.2023. Kamil Dziubka \u0026ldquo;Kulisy PiS\u0026rdquo; (PL) 05.11.2023. Remigiusz Mróz \u0026ldquo;Widmo Brockenu\u0026rdquo; (PL) 12.11.2023. Travis Baldree \u0026ldquo;Bookshops \u0026amp; Bonedust\u0026rdquo; (PL: \u0026ldquo;Księgarnie i Kościopył) 18.11.2023. David Goggins \u0026ldquo;Can\u0026rsquo;t Hurt Me: Master Your Mind and Defy the Odds\u0026rdquo; (PL: \u0026ldquo;Nic mnie nie złamie. Zapanuj nad swoim umysłem i pokonaj przeciwności losu\u0026rdquo;) 30.11.2023. Toshikazu Kawaguchi \u0026ldquo;Before the Coffee Gets Cold. Tales from the Cafe\u0026rdquo; (PL: \u0026ldquo;Zanim wystygnie kawa. Opowieści z kawiarni\u0026rdquo;) 03.12.2023. Rafał Kosik \u0026ldquo;Różaniec\u0026rdquo; (PL) 15.12.2023. Remigiusz Mróz \u0026ldquo;Langer\u0026rdquo; (PL) 23.12.2023. 2022\rAngus Watson \u0026ldquo;The Land You Never Leave\u0026rdquo; (PL: \u0026ldquo;Ziemi tej nie opuścisz\u0026rdquo;) 10.01.2022. Angus Watson \u0026ldquo;Where Gods Fear to Go\u0026rdquo; (PL: \u0026ldquo;Gdzie boją się iść bogowie\u0026rdquo;) 21.01.2022. Gregor Ziemer \u0026ldquo;Education for Death\u0026rdquo; (PL: \u0026ldquo;Jak wychować nazistę\u0026rdquo;) 25.01.2022. Michał Cetnarowski \u0026ldquo;Gnoza\u0026rdquo; (PL) 01.02.2022. Magdalena Salik \u0026ldquo;Płomień\u0026rdquo; (PL) 09.02.2022. Rebecca F. Kuang \u0026ldquo;The Poppy War\u0026rdquo; (PL: \u0026ldquo;Wojna makowa\u0026rdquo;) 19.02.2022. Rebecca F. Kuang \u0026ldquo;The Dragon Republic\u0026rdquo; (PL: \u0026ldquo;Republika Smoka\u0026rdquo;) 27.02.2022. Rebecca F. Kuang \u0026ldquo;The Burning God\u0026rdquo; (PL: \u0026ldquo;Płonący Bóg\u0026rdquo;) 13.03.2022. Tim Marshall \u0026ldquo;Prisoners of geography\u0026rdquo; (PL: \u0026ldquo;Więźniowie geografii\u0026rdquo;) 23.03.2022. Tim Marshall \u0026ldquo;The Power of Geography\u0026rdquo; (PL: \u0026ldquo;Potęga geografii\u0026rdquo;) 04.04.2022. Remigiusz Mróz \u0026ldquo;Projekt Riese\u0026rdquo; (PL) 08.04.2022. Jakub Ćwiek \u0026ldquo;Zawisza Czarny\u0026rdquo; (PL) 12.04.2022. Marcin Ogdowski \u0026ldquo;(Dez)informacja\u0026rdquo; (PL) 16.04.2022. Marcin Ogdowski \u0026ldquo;Stan wyjątkowy\u0026rdquo; (PL) 21.04.2022. Wojciech Dutka \u0026ldquo;Apokryf\u0026rdquo; (PL) 01.05.2022. Jonny Thomson \u0026ldquo;Mini Philosophy: A Small Book of Big Ideas\u0026rdquo; (PL: \u0026ldquo;Filozofia dla zabieganych. Mała książeczka o wielkich ideach.\u0026rdquo;) 04.05.2022. *) Jakub Ćwiek \u0026ldquo;Stróże\u0026rdquo; (PL) 08.05.2022. Jakub Ćwiek \u0026ldquo;Stróże. Brudnopis Boga\u0026rdquo; (PL) 13.05.2022. George S. Clason \u0026ldquo;The Richest Man in Babylon\u0026rdquo; (PL: \u0026ldquo;Najbogatszy człowiek w Babilonie\u0026rdquo;) 15.05.2022. Piotr Jacoń \u0026ldquo;My, Trans\u0026rdquo; (PL) 17.05.2022. Remigiusz Mróz \u0026ldquo;Skazanie\u0026rdquo; (PL) 21.05.2022. Yuval Noah Harari \u0026ldquo;Sapiens: A Brief History of Humankind\u0026rdquo; (PL: \u0026ldquo;Sapiens. Od zwierząt do bogów\u0026rdquo;) 05.06.2022. Richard Morgan \u0026ldquo;Altered carbon\u0026rdquo; (PL: \u0026ldquo;Modyfikowany węgiel\u0026rdquo;) 19.06.2022. Vaclav Smil \u0026ldquo;Numbers Don\u0026rsquo;t Lie. 71 Things You Need to Know About the World\u0026rdquo; (PL: \u0026ldquo;Liczby nie kłamią\u0026rdquo;) 28.06.2022. Aldous Huxley \u0026ldquo;Brave New World\u0026rdquo; (PL: \u0026ldquo;Nowy wspaniały świat\u0026rdquo;) 03.07.2022. Małgorzata Gołota \u0026ldquo;Żyletkę zawsze noszę przy sobie\u0026rdquo; (PL) 08.07.2022. Volker Busch \u0026ldquo;Kopf frei! Wie Sie Klarheit, Konzentration und Kreativität gewinnen\u0026rdquo; (PL: \u0026ldquo;Wolna głowa. Jak zadbać o swoją koncentrację i kreatywność\u0026rdquo;) 17.07.2022. Magdalena Kostyszyn \u0026ldquo;Też tak mam\u0026rdquo; (PL) 19.07.2022. Marcin Adamiec \u0026ldquo;Zniknięty ksiądz. Moja historia\u0026rdquo; (PL) 22.07.2022. Bogusław Polch, Arnold Mostowicz, Alfred Górny \u0026ldquo;Ekspedycja\u0026rdquo; (PL, comics) 24.07.2022 Piotr Prokopowicz, Sebastian Drzewiecki \u0026ldquo;Lider wystarczająco dobry\u0026rdquo; (PL) 01.08.2022. Kel Kade \u0026ldquo;Free The Darkness\u0026rdquo; (PL: \u0026ldquo;Powiernik Mieczy\u0026rdquo;) 06.08.2022. Kel Kade \u0026ldquo;Reign of Madness\u0026rdquo; (PL: \u0026ldquo;Królestwo obłędu\u0026rdquo;) 13.08.2022. Kel Kade \u0026ldquo;Legends of Ahn\u0026rdquo; (PL: \u0026ldquo;Legendy Ahn\u0026rdquo;) 21.08.2022 Kel Kade \u0026ldquo;Kingdoms And Chaos\u0026rdquo; (PL: \u0026ldquo;Królestwa i Chaos\u0026rdquo;) 28.08.2022. Marcin Mortka \u0026ldquo;Nie ma tego złego\u0026rdquo; (PL) 02.09.2022. Marcin Mortka \u0026ldquo;Głodna Puszcza\u0026rdquo; (PL) 06.09.2022. Robert Iger \u0026ldquo;The Ride of a Lifetime: Lessons Learned from 15 Years as CEO of the Walt Disney Company\u0026rdquo; (PL: \u0026ldquo;Przejażdżka życia. Czego nauczyłem się jako CEO The Walt Disney Company\u0026rdquo;) 12.09.2022. Anthony Ryan \u0026ldquo;Wolf\u0026rsquo;s Call\u0026rdquo; (PL: \u0026ldquo;Zew wilka\u0026rdquo;) 22.09.2022. Marcin Mortka \u0026ldquo;Przed wyruszeniem w drogę\u0026rdquo; (PL) 23.09.2022. Robert Cialdini \u0026ldquo;Influence (Principles of Ethical Influence)\u0026rdquo; (PL: \u0026ldquo;Zasady wywierania wpływu na ludzi\u0026rdquo;) 25.09.2022. Anna Nieznaj (PL) \u0026ldquo;Dolina niesamowitości\u0026rdquo; 02.10.2022. Anthony Ryan \u0026ldquo;Black Song\u0026rdquo; (PL: \u0026ldquo;Czarna Pieśń\u0026rdquo;) 10.10.2022. Greg McKeown \u0026ldquo;Essentialism: The Disciplined Pursuit of Less\u0026rdquo; (PL: \u0026ldquo;Esencjalista: mniej, ale lepiej\u0026rdquo;) 18.10.2022. Artur Nowak, Stanisław Obirek \u0026ldquo;Babilon\u0026rdquo; (PL) 23.10.2022. Frédéric Martel \u0026ldquo;In the Closet of the Vatican: Power, Homosexuality, Hypocrisy\u0026rdquo; (PL: \u0026ldquo;Sodoma. Hipokryzja i władza w Watykanie\u0026rdquo;, FR: \u0026ldquo;Sodoma : Enquête au cœur du Vatican\u0026rdquo;) 10.11.2022. Marcin Łokciewicz \u0026ldquo;Zamek rządzi\u0026rdquo; (PL) 11.11.2022. Anna Kurek \u0026ldquo;Szczęśliwy jak łosoś\u0026rdquo; (PL) 17.11.2022. Witold Jurasz \u0026ldquo;Demony Rosji\u0026rdquo; (PL) 21.11.2022. James Clear \u0026ldquo;Atomic Habits\u0026rdquo; (PL: \u0026ldquo;Atomowe nawyki\u0026rdquo;) 29.11.2022. Grzegorz Gajek \u0026ldquo;Piast\u0026rdquo; (PL) 08.12.2022. Tomasz Awłasewicz \u0026ldquo;Niewidzialni. Największa tajemnica służb specjalnych PRL\u0026rdquo; (PL) 13.12.2022. Marek S. Huberath \u0026ldquo;Druga podobizna w alabastrze\u0026rdquo; (PL) 14.12.2022. Toshikazu Kawaguchi \u0026ldquo;Before the Coffee Gets Cold\u0026rdquo; (PL: \u0026ldquo;Zanim wystygnie kawa\u0026rdquo;, JP: コーヒーが冷めないうちに \u0026ldquo;Kohi ga Samenai Uchi ni\u0026rdquo;) 17.12.2022. Marcin Mortka \u0026ldquo;Mroźny szlak\u0026rdquo; (PL) 25.12.2022. Marcin Mortka \u0026ldquo;Martwe Jezioro\u0026rdquo; (PL) 30.12.2022. *) I\u0026rsquo;ve read this book one chapter a day\n2021\rPernille Ripp \u0026ldquo;Passionate Learners: How to Engage and Empower Your Students\u0026rdquo; (PL: \u0026ldquo;Uczyć (się) z pasją. Jak sprawić, by uczenie (się) było fascynującą podróżą\u0026rdquo;) 06.01.2021. Sara Pennypacker \u0026ldquo;Pax\u0026rdquo; 13.01.2021. Hania Sywula \u0026ldquo;W głowie się poprzewracało\u0026rdquo; (PL) 18.01.2021. Remigiusz Mróz \u0026ldquo;Halny\u0026rdquo; (PL) 24.01.2021. Cixin Liu \u0026ldquo;Three body problem\u0026rdquo; (PL: \u0026ldquo;Problem trzech ciał\u0026rdquo;) 13.02.2021. Cixin Liu \u0026ldquo;The Dark Forest\u0026rdquo; (PL: \u0026ldquo;Ciemny Las\u0026rdquo;) 05.03.2021. Cixin Liu \u0026ldquo;Death\u0026rsquo;s End\u0026rdquo; (PL: \u0026ldquo;Koniec Śmierci) 22.03.2021. Anders Hansen \u0026ldquo;Skärmhjärnan\u0026rdquo; (PL: \u0026ldquo;Wyloguj swój mózg\u0026rdquo;) 28.03.2021. Brandon Sanderson \u0026ldquo;Snapshot\u0026rdquo; (PL: \u0026ldquo;Migawka\u0026rdquo;) 30.03.2021. Anthony Ryan \u0026ldquo;Blood song\u0026rdquo; (PL: \u0026ldquo;Pieśń krwi\u0026rdquo;) 06.04.2021. Anthony Ryan \u0026ldquo;Tower Lord\u0026rdquo; (PL: \u0026ldquo;Lord wieży\u0026rdquo;) 17.04.2021. Anthony Ryan \u0026ldquo;Queen of Fire\u0026rdquo; (PL: \u0026ldquo;Królowa Ognia\u0026rdquo;) 05.05.2021. Irving Stone \u0026ldquo;The Agony and the Ecstasy\u0026rdquo; (PL: \u0026ldquo;Udręka i Ekstaza\u0026rdquo;) 31.05.2021. Walter Isaacson \u0026ldquo;Leonardo da Vinci. Biography\u0026rdquo; (PL: \u0026ldquo;Leonardo da Vinci\u0026rdquo;) 22.06.2021. Thomas R. Martin \u0026ldquo;Ancient Greece. From Prehistoric to Hellenistic Times\u0026rdquo; (PL: \u0026ldquo;Starożytna Grecja. Od prehistorii do czasów hellenistycznych\u0026rdquo;) 08.07.2021. Lloyd C. Douglas \u0026ldquo;The Robe\u0026rdquo; (PL: \u0026ldquo;Szata\u0026rdquo;) 24.07.2021. Remigiusz Mróz \u0026ldquo;Umorzenie\u0026rdquo; (PL) 28.07.2021. Remigiusz Mróz \u0026ldquo;Wyrok\u0026rdquo; (PL) 30.07.2021. Jakub Morawiec \u0026ldquo;Początki Państw - Dania\u0026rdquo; (PL) 09.08.2021. Remigiusz Mróz \u0026ldquo;Ekstradycja\u0026rdquo; (PL) 12.08.2021. Remigiusz Mróz \u0026ldquo;Precedens\u0026rdquo; (PL) 13.08.2021. Remigiusz Mróz \u0026ldquo;Afekt\u0026rdquo; (PL) 17.08.2021. Remigiusz Mróz \u0026ldquo;Ekstremista\u0026rdquo; (PL) 21.08.2021. Stanisław Lem \u0026ldquo;Solaris\u0026rdquo; (PL) 30.08.2021. Brent Weeks \u0026ldquo;Black Prism (PL: \u0026ldquo;Czarny Pryzmat\u0026rdquo;) 17.09.2021. Brent Weeks \u0026ldquo;The Blinding Knife\u0026rdquo; (PL: \u0026ldquo;Oślepiający nóż\u0026rdquo;) 03.10.2021. Brent Weeks \u0026ldquo;Broken Eye\u0026rdquo; (PL: \u0026ldquo;Okaleczone oko\u0026rdquo;) 19.10.2021. Brent Weeks \u0026ldquo;Blood Mirror\u0026rdquo; (PL: \u0026ldquo;Krwawe zwierciadło\u0026rdquo;) 31.10.2021. Brent Weeks \u0026ldquo;Burning White\u0026rdquo; (PL: \u0026ldquo;Gorejąca biel\u0026rdquo;) 11.11.2021. (part 1), 22.11.2021. (part 2) Artur Nowak, Stanisław Obirek \u0026ldquo;Gomora\u0026rdquo; (PL) 05.12.2021. Piotr C. \u0026ldquo;Pokolenie IKEA\u0026rdquo; (PL) 08.12.2021. Remigiusz Mróz \u0026ldquo;Przepaść\u0026rdquo; (PL) 14.12.2021. Remigiusz Mróz \u0026ldquo;Egzekucja\u0026rdquo; (PL) 19.12.2021. Angus Watson \u0026ldquo;You Die When You Die\u0026rdquo; (PL: \u0026ldquo;Umrzesz kiedy umrzesz\u0026rdquo;) 30.12.2021. 2020\rMaja Lidia Kossakowska \u0026ldquo;Bramy Światłości, tom 1\u0026rdquo; (PL) 08.01.2020. Maja Lidia Kossakowska \u0026ldquo;Bramy Światłości, tom 2\u0026rdquo; (PL) 14.01.2020. Elisabeth Åsbrink \u0026ldquo;Orden som formade Sverige\u0026rdquo; (PL: \u0026ldquo;Made in Sweden. 60 słów, które stworzyły naród\u0026rdquo;) 18.01.2020. Maja Lidia Kossakowska \u0026ldquo;Bramy Światłości, tom 3\u0026rdquo; (PL) 22.01.2020. Andrzej Ziemiański, \u0026ldquo;Virion. Wyrocznia\u0026rdquo; (PL) 29.01.2020. Andrzej Ziemiański, \u0026ldquo;Virion. Obława\u0026rdquo; (PL) 01.02.2020. Andrzej Ziemiański, \u0026ldquo;Virion. Adept\u0026rdquo; (PL) 06.02.2020. Andrzej Ziemiański, \u0026ldquo;Virion. Szermierz\u0026rdquo; (PL) 11.02.2020. Edward Snowden \u0026ldquo;Permanent Record\u0026rdquo; (PL: \u0026ldquo;Pamięć nieulotna\u0026rdquo;) 19.02.2020. Jo Nesbø \u0026ldquo;Kniv\u0026rdquo; (PL: \u0026ldquo;Nóż\u0026rdquo;) 25.02.2020. Andrzej Sapkowski \u0026ldquo;The Last Wish\u0026rdquo; (PL: \u0026ldquo;Ostatnie życzenie\u0026rdquo;) 01.03.2020. Andrzej Sapkowski \u0026ldquo;Sword of Destiny\u0026rdquo; (PL: \u0026ldquo;Miecz przeznaczenia\u0026rdquo;) 04.03.2020. Andrzej Sapkowski \u0026ldquo;Blood of Elves\u0026rdquo; (PL: \u0026ldquo;Krew Elfów\u0026rdquo;) 09.03.2020. Andrzej Sapkowski \u0026ldquo;Time of Contempt\u0026rdquo; (PL: \u0026ldquo;Czas Pogardy\u0026rdquo;) 21.03.2020. Kornelia Orwat \u0026ldquo;Jak być mamą w edukacji domowej i nie (dać się) zwariować\u0026rdquo; (PL) 30.03.2020. Andrzej Sapkowski \u0026ldquo;Baptism of Fire\u0026rdquo; (PL: \u0026ldquo;Chrzest Ognia\u0026rdquo;) 31.03.2020. Andrzej Sapkowski \u0026ldquo;The Tower of Swallow\u0026rdquo; (PL: \u0026ldquo;Wieża Jaskółki\u0026rdquo;) 11.04.2020. Andrzej Sapkowski \u0026ldquo;Lady of the Lake\u0026rdquo; (PL: \u0026ldquo;Pani Jeziora\u0026rdquo;) 21.04.2020. Katarzyna Berenika Miszczuk \u0026ldquo;Szeptucha\u0026rdquo; (PL) 27.04.2020. Andrzej Sapkowski \u0026ldquo;Season of Storms\u0026rdquo; (PL: \u0026ldquo;Sezon burz\u0026rdquo;) 04.05.2020. Katarzyna Berenika Miszczuk \u0026ldquo;Noc Kupały\u0026rdquo; (PL) 10.05.2020. John Gwynne \u0026ldquo;Wrath\u0026rdquo; (PL: \u0026ldquo;Gniew\u0026rdquo;) 20.05.2020. Marcin Wicha \u0026ldquo;Wielka księga Klary\u0026rdquo; (PL) 20.05.2020. Jerome K. Jerome \u0026ldquo;Three Men in a Boat (To Say Nothing of the Dog)\u0026rdquo; (PL: \u0026ldquo;Trzech panów w łódce (nie licząc psa)\u0026rdquo;) 30.05.2020. Gene Wolfe \u0026ldquo;The Shadow of the Torturer. The Claw of the Conciliator\u0026rdquo; (PL: \u0026ldquo;Cień i Pazur. Cień Kata. Pazur Łagodziciela\u0026rdquo;) 29.06.2020. Katarzyna Berenika Miszczuk \u0026ldquo;Żerca\u0026rdquo; (PL) 10.07.2020. Katarzyna Berenika Miszczuk \u0026ldquo;Przesilenie\u0026rdquo; (PL) 16.07.2020. Zygmunt Miłoszewski \u0026ldquo;Kwestia ceny\u0026rdquo; (PL) 29.07.2020. Mateusz Grzesiak \u0026ldquo;Alpha Human\u0026rdquo; (PL) 17.08.2020. Hans Rosling, Ola Rosling, Anna Rosling Rönnlund \u0026ldquo;Factfulness: Ten Reasons We\u0026rsquo;re Wrong About the World - and Why Things Are Better Than You Think\u0026rdquo; (PL: \u0026ldquo;Factfulness. Dlaczego Świat Jest Lepszy Niż Myślimy Czyli Jak Stereotypy Zastąpić Realną Wiedzą\u0026rdquo;) 31.08.2020. Mikołaj Marcela \u0026ldquo;Jak nie zwariować ze swoim dzieckiem. Edukacja, w której dzieci same chcą się uczyć i rozwijać\u0026rdquo; (PL) 13.09.2020. Gene Wolfe \u0026ldquo;The Sword of the Lictor. Citadel of the Autarch\u0026rdquo; (PL: \u0026ldquo;Miecz i Cytadela\u0026rdquo;. Miecz Liktora. Cytadela Autarchy) 10.10.2020. Chris Lowney \u0026ldquo;Heroic leadership. Best Practices from a 450-year-old Company that Changed the World\u0026rdquo; (PL: \u0026ldquo;Heroiczne przywództwo. Tajemnice sukcesu firmy istniejącej ponad 450 lat\u0026rdquo;) 29.10.2020. Ryan Holiday, Stephen Hanselman \u0026ldquo;The Daily Stoic Journal: 366 Days of Writing and Reflection on the Art of Living\u0026rdquo; (PL: \u0026ldquo;Stoicyzm na każdy dzień roku. 366 medytacji na temat mądrości, wytrwałości i sztuki życia\u0026rdquo;) 29.10.2020. *) Sebastien de Castell \u0026ldquo;Tyrant’s Throne\u0026rdquo; (PL: \u0026ldquo;Tron Tyrana\u0026rdquo;) 15.11.2020. Margaret Atwood \u0026ldquo;Handmaid\u0026rsquo;s Tale\u0026rdquo; (PL: \u0026ldquo;Opowieść podręcznej\u0026rdquo;) 29.11.2020. Radosław Kotarski \u0026ldquo;Inaczej\u0026rdquo; (PL) 15.12.2020. Marcin Ciszewski \u0026ldquo;Deszcz\u0026rdquo; (PL) 30.12.2020. *) I\u0026rsquo;ve read this book one page a day\n2019\rGarth Nix \u0026ldquo;Sabriel\u0026rdquo; 08.01.2019. Garth Nix \u0026ldquo;Lirael\u0026rdquo; 05.02.2019. Garth Nix \u0026ldquo;Abhorsen\u0026rdquo; 22.02.2019. Michael J. Sullivan \u0026ldquo;The Crown Tower\u0026rdquo; (PL: \u0026ldquo;Wieża Koronna\u0026rdquo;) 04.03.2019. Michael J. Sullivan \u0026ldquo;The Rose and the Thorn\u0026rdquo; (PL: \u0026ldquo;Róża i cierń\u0026rdquo;) 09.03.2019. Michael J. Sullivan \u0026ldquo;The death of Dulgath\u0026rdquo; (PL: \u0026ldquo;Śmierć Dulgath\u0026rdquo;) 15.03.2019. Michael J. Sullivan \u0026ldquo;The Crown Conspiracy; Avempartha\u0026rdquo; (PL: \u0026ldquo;Królewska krew. Wieża elfów\u0026rdquo;) 23.03.2019. Michael J. Sullivan \u0026ldquo;Nyphron Rising; The Emerald Storm\u0026rdquo; (PL: \u0026ldquo;Nowe imperium. Szmaragdowy sztorm\u0026rdquo;) 28.03.2019. Michael J Sullivan \u0026ldquo;Wintertide\u0026rdquo; (PL: \u0026ldquo;Zdradziecki plan\u0026rdquo;) 29.03.2019. Michael J Sullivan \u0026ldquo;Percepliquis\u0026rdquo; (PL: \u0026ldquo;Pradawna stolica\u0026rdquo;) 03.04.2019. Mark Lawrence \u0026ldquo;Prince of Thorns\u0026rdquo; (PL: \u0026ldquo;Książę cierni\u0026rdquo;) 09.04.2019. Mark Lawrence \u0026ldquo;King of Thorns\u0026rdquo; (PL: \u0026ldquo;Król cierni\u0026rdquo;) 18.04.2019. Mark Lawrence \u0026ldquo;Emperor of Thorns\u0026rdquo; (PL: \u0026ldquo;Cesarz cierni\u0026rdquo;) 04.05.2019. Mark Lawrence \u0026ldquo;Prince of Fools\u0026rdquo; (PL: \u0026ldquo;Książę głupców\u0026rdquo;) 28.05.2019. Mark Lawrence \u0026ldquo;The Liar\u0026rsquo;s Key\u0026rdquo; (PL: \u0026ldquo;Klucz kłamcy\u0026rdquo;) 20.06.2019. Garth Nix \u0026ldquo;Clariel\u0026rdquo; 27.06.2019. Radek Kotarski \u0026ldquo;Włam się do mózgu\u0026rdquo; (PL) 06.07.2019. Mark Lawrence \u0026ldquo;The Wheel of Osheim\u0026rdquo; (PL: \u0026ldquo;Koło Osheim\u0026rdquo;) 02.08.2019. Brandon Sanderson \u0026ldquo;Rithmatist\u0026rdquo; (PL: \u0026ldquo;Rytmatysta\u0026rdquo;) 07.08.2019. Eliyahu M. Goldratt, Jeff Cox\u0026rdquo; The Goal: A Process of Ongoing Improvement\u0026rdquo; (PL: \u0026ldquo;Cel I: Doskonałość w produkcji\u0026rdquo;) 11.08.2019. Sebastien de Castell \u0026ldquo;Traitor\u0026rsquo;s Blade\u0026rdquo; (PL: \u0026ldquo;Ostrze Zdrajcy\u0026rdquo;) 21.08.2019. Sebastien de Castell \u0026ldquo;Knight\u0026rsquo;s Shadow\u0026rdquo; (PL: \u0026ldquo;Cień Rycerza\u0026rdquo;) 28.08.2019. Sebastien de Castell \u0026ldquo;Saint\u0026rsquo;s Blood\u0026rdquo; (PL: \u0026ldquo;Krew Świętego\u0026rdquo;) 29.08.2019. Sylwia Królikowska \u0026ldquo;7 wyzwań lidera\u0026rdquo; (PL) 05.09.2019. Michael J Sullivan \u0026ldquo;The Disappearance of Winter\u0026rsquo;s Daughter\u0026rdquo; (PL: \u0026ldquo;Zniknięcie Córki Wintera\u0026rdquo;) 13.09.2019. Wojtek Miłoszewski \u0026ldquo;Kontra\u0026rdquo; (PL) 18.09.2019. Sylwia Królikowska \u0026ldquo;Model P.R.O.M.O.T.E. Awansuj szybciej, zarabiaj więcej\u0026rdquo; (PL) 03.10.2019. Eliyahu M. Goldratt \u0026ldquo;It\u0026rsquo;s Not Luck\u0026rdquo; (PL: \u0026ldquo;Cel II. To nie przypadek\u0026rdquo;) 11.10.2019. Robert M. Wegner \u0026ldquo;Każde martwe marzenie\u0026rdquo; (PL) 15.11.2019. John Gwynne \u0026ldquo;Malice\u0026rdquo; (PL: \u0026ldquo;Zawiść\u0026rdquo;) 12.12.2019. John Gwynne \u0026ldquo;Valour\u0026rdquo; (PL: \u0026ldquo;Męstwo\u0026rdquo;) 19.12.2019. John Gwynne \u0026ldquo;Ruin\u0026rdquo; (PL: \u0026ldquo;Zgliszcza\u0026rdquo;) 31.12.2019. 2018\rZygmunt Miłoszewski \u0026ldquo;Jak Zawsze\u0026rdquo; (PL) Angus Watson \u0026ldquo;Age of Iron\u0026rdquo; (PL: \u0026ldquo;Czas żelaza\u0026rdquo;) Angus Watson \u0026ldquo;Clash of Iron\u0026rdquo; (PL: Żelazna Wojna\u0026rdquo;) Angus Watson \u0026ldquo;Reign of Iron\u0026rdquo; (PL: \u0026ldquo;Tron z Żelaza\u0026rdquo;) 16.02.2018. Zoran Krušvar \u0026ldquo;Izvršitelji nauma Gospodnjeg\u0026rdquo; (PL: \u0026ldquo;Wykonawcy Bożego Zamysłu\u0026rdquo;) 02.03.2018. Jo Nesbø \u0026ldquo;The thirst\u0026rdquo; (PL: \u0026ldquo;Pragnienie\u0026rdquo;) 09.03.2018. Timothy D. Walker \u0026ldquo;Teach Like Finland\u0026rdquo; (PL: \u0026ldquo;Fińskie dzieci uczą się najlepiej\u0026rdquo;) 29.03.2018. Neal Shusterman \u0026ldquo;Scythe\u0026rdquo; (PL: \u0026ldquo;Kosiarze\u0026rdquo;) 11.04.2018. Remigiusz Mróz \u0026ldquo;Testament\u0026rdquo; (PL) 16.04.2018. Remigiusz Mróz \u0026ldquo;Behawiorysta\u0026rdquo; (PL), 24.04.2018. Brandon Sanderson \u0026ldquo;The Way Of Kings\u0026rdquo; (PL: \u0026ldquo;Droga Królów\u0026rdquo;) 25.05.2018. Brandon Sanderson \u0026ldquo;Words of Radiance\u0026rdquo; (PL: \u0026ldquo;Słowa Światłości\u0026rdquo;) 09.06.2018. Brandon Sanderson \u0026ldquo;Oathbringer\u0026rdquo; (PL: \u0026ldquo;Dawca Przysięgi\u0026rdquo;) 03.07.2018. + \u0026ldquo;Edgedancer\u0026rdquo; (PL: \u0026ldquo;Tancerka Krawędzi\u0026rdquo;) Remigiusz Mróz \u0026ldquo;Zerwa\u0026rdquo; (PL) 06.07.2018. Wojciech Chmielarz \u0026ldquo;Żmijowisko\u0026rdquo; (PL) 12.07.2018. Neal Shusterman \u0026ldquo;Thunderhead\u0026rdquo; (PL: \u0026ldquo;Kosodom\u0026rdquo;) 18.07.2018. Peter V. Brett \u0026ldquo;The Core\u0026rdquo; (PL: \u0026ldquo;Otchłań\u0026rdquo;) 29.07.2018. Marcin Ciszewski \u0026ldquo;Mgła\u0026rdquo; (PL) 01.08.2018. Remigiusz Mróz \u0026ldquo;Hashtag\u0026rdquo; (PL) 03.08.2018. Katarzyna Bonda \u0026ldquo;Pochłaniacz\u0026rdquo; (PL) 10.08.2018. Katarzyna Bonda \u0026ldquo;Okularnik\u0026rdquo; (PL) 18.08.2018. Katarzyna Bonda \u0026ldquo;Lampiony\u0026rdquo; (PL) 23.08.2018. Katarzyna Bonda \u0026ldquo;Czerwony Pająk\u0026rdquo; (PL) 05.09.2018. Camilla Läckberg \u0026ldquo;The Witch/The Girl in The Woods\u0026rdquo; (PL: \u0026ldquo;Czarownica\u0026rdquo;) 14.09.2018. Rosemary Sutcliff \u0026ldquo;The Eagle of the Ninth\u0026rdquo; (PL: \u0026ldquo;Dziewiąty legion\u0026rdquo;) 19.09.2018. Remigiusz Mróz \u0026ldquo;Kontratyp\u0026rdquo; (PL) 28.09.2018. Jo Nesbø \u0026ldquo;Macbeth\u0026rdquo; (PL: \u0026ldquo;Makbet\u0026rdquo;) 11.10.2018. Danielle Trussoni \u0026ldquo;Angelology\u0026rdquo; (PL: \u0026ldquo;Angelologia\u0026rdquo;) 07.11.2018. Douglas Hulick \u0026ldquo;Among Thieves\u0026rdquo; (PL: \u0026ldquo;Honor złodzieja\u0026rdquo;) 19.11.2018. Jo Nesbø \u0026ldquo;Blood on snow\u0026rdquo; (PL: \u0026ldquo;Krew na śniegu\u0026rdquo;) 22.11.2018. Jo Nesbø \u0026ldquo;More blood\u0026rdquo; (PL: \u0026ldquo;Więcej krwi\u0026rdquo;) 27.11.2018. Douglas Hulick \u0026ldquo;Sworn in steel\u0026rdquo; (PL: \u0026ldquo;Przysięga stali\u0026rdquo;) 03.12.2018. Jeppe Hedaa \u0026ldquo;Nucleon\u0026rdquo; 12.12.2018. Dan Elloway \u0026ldquo;Xenophobe\u0026rsquo;s guide to the Norwegians\u0026rdquo; 18.12.2018. Books #33 (\u0026quot;Nucleon\u0026quot; by @JeppeHedaa) and #34 (\u0026quot;Xenophobe\u0026#39;s guide to the Norwegians\u0026quot; by @danelloway) in 2018 - finished. Both were interesting reading.\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) December 19, 2018 Finished reading books #30 (Jo Nesbo, \u0026quot;More blood\u0026quot;, 27.11.) and #31 in 2018 (@DougHulick \u0026quot;Sworn in steel\u0026quot;, 03.12.) It was a great journey with Drothe. Thank you, Douglas!\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) December 3, 2018 Finished reading book #30 in 2018 - \u0026quot;Blood on snow\u0026quot; (Krew na śniegu) by Jo Nesbo. Recommended, as all his books.\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) November 22, 2018 Finished reading book #29 in 2018 - \u0026quot;Among Thieves\u0026quot; by @DougHulick Excellent book. Highly recommended!\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) November 19, 2018 Yesterday finished reading book #28 in 2018 - Angelology by @DaniTrussoni\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) November 8, 2018 Finished reading book #27 in 2018 - \u0026quot;Macbeth\u0026quot; by #JoNesbo If you liked Sin City, you will also like his version of Shakespeares Macbeth\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) October 11, 2018 Finished reading book #26 in 2018. \u0026quot;Kontratyp\u0026quot; by @remigiuszmroz\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) September 28, 2018 Finished reading book #25 in 2018: The Eagle of the Ninth by Rosemary Sutcliff. It\u0026#39;s for young people, but I enjoyed it. It was nice to read about an adventure, friendship, a story without implied meaning\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) September 19, 2018 Finished reading book #24 in 2018 - ”Häxan” (The Girl in the Woods) by @LackbergNews It was nice to return to Fjällbacka\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) September 14, 2018 Yesterday finished reading book #23 in 2018 - \u0026quot;Czerwony pająk\u0026quot; by @BondaKatarzyna - the last in Sasza Załuska series.\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) September 6, 2018 Yesterday finished reading book #22 in 2018 - \u0026quot;Lampiony\u0026quot; by @BondaKatarzyna - 3rd part of Sasza Załuska series. Recommended reading. Now, to the 4th part - \u0026quot;Czerwony pająk\u0026quot;\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) August 24, 2018 Yesterday finished reading book #21 in 2018. \u0026quot;Okularnik\u0026quot; by @BondaKatarzyna Highly recommended. Now reading \u0026quot;Lampiony\u0026quot;, part three of Sasza Załuska series.\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) August 19, 2018 Yesterday finished reading book #20 in 2018, \u0026quot;Pochłaniacz\u0026quot; by @BondaKatarzyna And immediately started \u0026quot;Okularnik\u0026quot;, the 2nd part of the series with Sasza Załuska. I wasn\u0026#39;t convinced at the beginning, but then it was hard to put the book down\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) August 11, 2018 Today finished reading book #19 in 2018 - \u0026quot;Hashtag\u0026quot; by @remigiuszmroz I have mixed feelings about the book.\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) August 3, 2018 Yesterday finished reading \u0026quot;Mgła\u0026quot; by @MarcinCiszewsk1 Another great book by Marcin. When \u0026quot;Deszcz\u0026quot; is planned to be published?\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) August 2, 2018 Finisthed reading book #17 in 2018 - \u0026quot;The Core\u0026quot; by @PVBrett Already waiting for \u0026quot;Barren\u0026quot;, to go back to Thesa\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) July 29, 2018 #16, forgot to write down the previous one (Żmijowisko) to my list\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) July 18, 2018 Finished reading book #15 in 2018. \u0026quot;Żmijowisko\u0026quot; by Wojciech Chmielarz. I will definitely buy more of his books\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) July 12, 2018 Finished reading book #14 in 2018. \u0026quot;Zerwa\u0026quot; by @remigiuszmroz - the fifth part of the series with Wiktor Forst. The best book by Remigiusz. Great reading.\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) July 6, 2018 Finished reading book #13 in 2018, \u0026quot;Oathbringer\u0026quot; by @BrandSanderson Too bad, that the 4th book of The Stormlight Archive is not until 2020/2021\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) July 3, 2018 Finished reading book #12 in 2018. \u0026quot;Words of Radiance\u0026quot; by @BrandSanderson Another great story. Next: Edgedancer and Oathbringer!\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) June 9, 2018 Forgot to write down - last friday (25.05) I fished reading \u0026quot;The Way of Kings\u0026quot; by @BrandSanderson First part of the Stormlight Archive series. ~1000 pages and I want more! Immediately started \u0026quot;Words of Radiance\u0026quot;, the second part (also ~1000 pages)\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) May 31, 2018 Finished reading book #10 in 2018. \u0026quot;Behawiorysta\u0026quot; by @remigiuszmroz For me - the best book by Remigiusz (so far). Now waiting for the fifth part of Forst series.\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) April 20, 2018 Finished reading book #9 in 2018. \u0026quot;Testament\u0026quot; by @remigiuszmroz Another great lecture and starting reading next one - Behawiorysta.\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) April 16, 2018 Yesterday I finished reading book #8 in 2018. \u0026quot;Scythe\u0026quot; (Kosiarze) by @NealShusterman Interesting idea of death in immortal world. Really enjoyed and waiting for the translation of \u0026quot;Thunderhead\u0026quot;\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) April 12, 2018 Yesterday finished reading book #7 in 2018. \u0026quot;Teach Like Finland\u0026quot; by @timdwalk Interesting to see the finnish education. Thanks! https://t.co/0BV4OXlzl8\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) March 30, 2018 Finished reading book #6 in 2018. \u0026quot;The thirst\u0026quot; by #JoNesbo It was hard to put the book away. Great stuff, as always\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) March 9, 2018 Yesterday finished reading book #5 in 2018, \u0026quot;Izvršitelji nauma Gospodnjeg\u0026quot; (Wykonawcy Bożego Zamysłu, PL) by @zorankrusvar Not exactly my kind of literature but enjoyed some parts of the book (the medieval part especially)\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) March 3, 2018 Finished reading book #4 in 2018: \u0026quot;Reign of Iron\u0026quot; by @GusWatson Highly recommended if you like iron age, Britain, Jules Caesar and some druid magic climate. Books #2, #3 were previous parts of \u0026quot;The Iron Age Trilogy\u0026quot; and #1 was \u0026quot;Jak Zawsze\u0026quot; by #ZygmuntMiłoszewski, also recommended\n\u0026mdash; Bartosz Ratajczyk (he/him) (@b_ratajczyk) February 16, 2018 ",
"ref": "/books/"
},{
"title": "Using cached datasets in ssisUnit",
"date": "",
"description": "",
"body": "In the previous post, I wrote about using datasets in the ssisUnit test. By default, the dataset query is executed against the data source each time the test is run. But we also have an option to store the dataset\u0026rsquo;s result in the test file. In this post, I will show you how you can use it.\nFirst - why would you want to store the datasets within the test file? Well, maybe you don\u0026rsquo;t want to hammer your database with hundreds of requests to prepare the expected data outcome. And/or you want to have everything in your unit test file, and you don\u0026rsquo;t want to write all that CAST/CONVERT information for the datatypes when preparing the dataset?\nThe ssisUnit GUI does not support creating the persisted dataset. If you switch the IsResultsStored flag to true on the dataset\u0026rsquo;s properties, it gives a warning \u0026ldquo;The expected dataset\u0026rsquo;s (\u0026lt;the dataset name\u0026gt;) stored results does not contain data. Populate the Results data table before executing.\u0026rdquo; during the test run.\nTo find out more about it, take a look at the source code. Open the Dataset.cs file and go to the line 126. There is a method PersistToXml() that writes the Dataset as XML.\nIn the lines 146 - 158 it verifies existance of the Results element. If it exists, the XML node results is prepared, and the content of the Results element is saved as CDATA.\nif (Results != null) { xmlWriter.WriteStartElement(\u0026#34;results\u0026#34;); using (var stringWriter = new StringWriter()) { Results.WriteXml(stringWriter, XmlWriteMode.WriteSchema, true); xmlWriter.WriteCData(stringWriter.ToString()); } xmlWriter.WriteEndElement(); } The Results object is defined in line 186 - it\u0026rsquo;s stored internally as the DataTable object. So, it must be serialised to store and retrieve it from the saved XML file:\n[Browsable(false)] public DataTable Results { get; internal set; } The recipe for the cached/stored result is easy: use the \u0026lt;results /\u0026gt; element in your .ssisUnit file and fill it with the serialised DataTable.\nPreparing the datatable\rI will again use the 15_Users_Dataset.dtsx package for testing. I will create 15_Users_Dataset_Persisted.ssisUnit file as a copy of the 15_Users_Dataset.ssisUnit file and replace the Empty table test: expected dataset as the cached result. To prepare the serialised datatable I used parts of an example from the documentation and Linqpad. Go to the Scripts folder and open the script 15_Users_Dataset_Persisted.linq. The file has some comments, so it should be easy to understand. The main part is:\nDataColumn name = new DataColumn(\u0026#34;Name\u0026#34;, typeof(System.String)); name.MaxLength = 50; table.Columns.Add(name); DataColumn login = new DataColumn(\u0026#34;Login\u0026#34;, typeof(System.String)); login.MaxLength = 12; table.Columns.Add(login); table.Columns.Add(\u0026#34;IsActive\u0026#34;, typeof(System.Boolean)); table.Columns.Add(\u0026#34;Id\u0026#34;, typeof(System.Int32)); table.Columns.Add(\u0026#34;SourceSystemId\u0026#34;, typeof(System.Byte)); table.Columns.Add(\u0026#34;IsDeleted\u0026#34;, typeof(System.Boolean)); // Login is char(12), the database has ANSI_PADDING = ON, so pad the string with spaces table.Rows.Add(new object[] { \u0026#34;Name 1\u0026#34;, \u0026#34;Login 1 \u0026#34;, 1, 1, 2, 0 }); table.Rows.Add(new object[] { \u0026#34;Name 2\u0026#34;, \u0026#34;Login 2 \u0026#34;, 1, 2, 2, 0 }); table.Rows.Add(new object[] { \u0026#34;Name 3\u0026#34;, \u0026#34;Login 3 \u0026#34;, 0, 3, 2, 0 }); If you want to use Linqpad - remember to set the language as C# Statement(s) and hit Execute button. The result of the script is as follows:\n\u0026lt;NewDataSet\u0026gt; \u0026lt;xs:schema id=\u0026#34;NewDataSet\u0026#34; xmlns=\u0026#34;\u0026#34; xmlns:xs=\u0026#34;http://www.w3.org/2001/XMLSchema\u0026#34; xmlns:msdata=\u0026#34;urn:schemas-microsoft-com:xml-msdata\u0026#34;\u0026gt; \u0026lt;xs:element name=\u0026#34;NewDataSet\u0026#34; msdata:IsDataSet=\u0026#34;true\u0026#34; msdata:MainDataTable=\u0026#34;Results\u0026#34; msdata:UseCurrentLocale=\u0026#34;true\u0026#34;\u0026gt; \u0026lt;xs:complexType\u0026gt; \u0026lt;xs:choice minOccurs=\u0026#34;0\u0026#34; maxOccurs=\u0026#34;unbounded\u0026#34;\u0026gt; \u0026lt;xs:element name=\u0026#34;Results\u0026#34;\u0026gt; \u0026lt;xs:complexType\u0026gt; \u0026lt;xs:sequence\u0026gt; \u0026lt;xs:element name=\u0026#34;Name\u0026#34; minOccurs=\u0026#34;0\u0026#34;\u0026gt; \u0026lt;xs:simpleType\u0026gt; \u0026lt;xs:restriction base=\u0026#34;xs:string\u0026#34;\u0026gt; \u0026lt;xs:maxLength value=\u0026#34;50\u0026#34; /\u0026gt; \u0026lt;/xs:restriction\u0026gt; \u0026lt;/xs:simpleType\u0026gt; \u0026lt;/xs:element\u0026gt; \u0026lt;xs:element name=\u0026#34;Login\u0026#34; minOccurs=\u0026#34;0\u0026#34;\u0026gt; \u0026lt;xs:simpleType\u0026gt; \u0026lt;xs:restriction base=\u0026#34;xs:string\u0026#34;\u0026gt; \u0026lt;xs:maxLength value=\u0026#34;12\u0026#34; /\u0026gt; \u0026lt;/xs:restriction\u0026gt; \u0026lt;/xs:simpleType\u0026gt; \u0026lt;/xs:element\u0026gt; \u0026lt;xs:element name=\u0026#34;IsActive\u0026#34; type=\u0026#34;xs:boolean\u0026#34; minOccurs=\u0026#34;0\u0026#34; /\u0026gt; \u0026lt;xs:element name=\u0026#34;Id\u0026#34; type=\u0026#34;xs:int\u0026#34; minOccurs=\u0026#34;0\u0026#34; /\u0026gt; \u0026lt;xs:element name=\u0026#34;SourceSystemId\u0026#34; type=\u0026#34;xs:unsignedByte\u0026#34; minOccurs=\u0026#34;0\u0026#34; /\u0026gt; \u0026lt;xs:element name=\u0026#34;IsDeleted\u0026#34; type=\u0026#34;xs:boolean\u0026#34; minOccurs=\u0026#34;0\u0026#34; /\u0026gt; \u0026lt;/xs:sequence\u0026gt; \u0026lt;/xs:complexType\u0026gt; \u0026lt;/xs:element\u0026gt; \u0026lt;/xs:choice\u0026gt; \u0026lt;/xs:complexType\u0026gt; \u0026lt;/xs:element\u0026gt; \u0026lt;/xs:schema\u0026gt; \u0026lt;Results\u0026gt; \u0026lt;Name\u0026gt;Name 1\u0026lt;/Name\u0026gt; \u0026lt;Login\u0026gt;Login 1 \u0026lt;/Login\u0026gt; \u0026lt;IsActive\u0026gt;true\u0026lt;/IsActive\u0026gt; \u0026lt;Id\u0026gt;1\u0026lt;/Id\u0026gt; \u0026lt;SourceSystemId\u0026gt;2\u0026lt;/SourceSystemId\u0026gt; \u0026lt;IsDeleted\u0026gt;false\u0026lt;/IsDeleted\u0026gt; \u0026lt;/Results\u0026gt; \u0026lt;Results\u0026gt; \u0026lt;Name\u0026gt;Name 2\u0026lt;/Name\u0026gt; \u0026lt;Login\u0026gt;Login 2 \u0026lt;/Login\u0026gt; \u0026lt;IsActive\u0026gt;true\u0026lt;/IsActive\u0026gt; \u0026lt;Id\u0026gt;2\u0026lt;/Id\u0026gt; \u0026lt;SourceSystemId\u0026gt;2\u0026lt;/SourceSystemId\u0026gt; \u0026lt;IsDeleted\u0026gt;false\u0026lt;/IsDeleted\u0026gt; \u0026lt;/Results\u0026gt; \u0026lt;Results\u0026gt; \u0026lt;Name\u0026gt;Name 3\u0026lt;/Name\u0026gt; \u0026lt;Login\u0026gt;Login 3 \u0026lt;/Login\u0026gt; \u0026lt;IsActive\u0026gt;false\u0026lt;/IsActive\u0026gt; \u0026lt;Id\u0026gt;3\u0026lt;/Id\u0026gt; \u0026lt;SourceSystemId\u0026gt;2\u0026lt;/SourceSystemId\u0026gt; \u0026lt;IsDeleted\u0026gt;false\u0026lt;/IsDeleted\u0026gt; \u0026lt;/Results\u0026gt; \u0026lt;/NewDataSet\u0026gt; In the .ssisUnit test file, the \u0026lt;NewDataSet /\u0026gt; element should be wrapped in the \u0026lt;result /\u0026gt; as CDATA:\nThe part that took me a lot of time to figure out was the Login column. In the database, it has char(12) data type. It is very dependent on the ANSI_PADDING setting. The recommended setting is ANSI_PADDING ON, which means it pads the strings with trailing spaces for the char data type columns. And that\u0026rsquo;s the setting for the ssisUnitLearningDB - the database used for this blog post series.\nHow does it affect the test? When you persist the char column in the test file, you have to also pad the string with spaces if the column was created with ANSI_PADDING ON setting. If you don\u0026rsquo;t pad the string column, the test will not pass. To illustrate it I prepared some additional datasets and assertions.\nI\u0026rsquo;m testing different dataset settings and how it affects the assertions results. Even if you set the dataset as persisted, ssisUnit expects the query, so I\u0026rsquo;m mostly using T-SQL\u0026rsquo;s comment --as a placeholder. But I also checked if it has to be the same query as I used to prepare the persisted dataset. Running the test you will see, that without padding the Login column data the assertions don\u0026rsquo;t return True.\nThe data types\rWhen you create the DataTable object you add the columns of the specific type, like\ntable.Columns.Add(\u0026#34;Id\u0026#34;, typeof(System.Int32)) How do you know what types you should use? You can guess of course (I did!), but you don\u0026rsquo;t have to. Take a look at the RetrieveDataTable() method in the DataTable.cs file (especially the lines 216 - 237). This is how ssisUnit converts the query result to the DataTable object for further comparison:\nif (!IsResultsStored) { using (var command = Helper.GetCommand(ConnectionRef, Query)) { command.Connection.Open(); using (IDataReader expectedReader = command.ExecuteReader()) { var ds = new DataSet(); ds.Load(expectedReader, LoadOption.OverwriteChanges, new[] { \u0026#34;Results\u0026#34; }); if (ds.Tables.Count \u0026lt; 1) { throw new ApplicationException( string.Format( CultureInfo.CurrentCulture, \u0026#34;The dataset (\\\u0026#34;{0}\\\u0026#34;) did not retrieve any data.\u0026#34;, Name)); } return ds.Tables[0]; } } } I took this part of the code (along with the Helper.GetCommand()) and expanded it with writing metadata of the DataTable object. I use Linqpad and its Dump() method to quickly see the content. The full script GetDataTableMetadata.linq is located in the Scripts directory of the project.\nAs you can see, the Login column has Length == 12 and MaxLength == 12. It means, that string is padded with blanks. Also notice, that the bit column is mapped to System.Boolean and the tinyint column is mapped to System.Int32 - just like Int.\nTo sum up\rIf you want to store the dataset in the test file - no problem. The ssisUnit GUI does not support it, but the engine does. Use provided scripts to get the metadata of the dataset and serialise the prepared dataset. Then set the IsResultsStored flag to True and paste the serialised code directly to the ssisUnit file using \u0026lt;results /\u0026gt; and CDATA.\n",
"ref": "/2018/05/31/using-cached-datasets-in-ssisunit/"
},{
"title": "Using Connections and Datasets in ssisUnit",
"date": "",
"description": "",
"body": "One of the elements you can define in ssisUnit is a Dataset. In this post of the ssisUnit series, I will show you how to prepare and use it later in a test.\nThe Dataset\rAs you can see in the image above, the dataset is a named result that contains data. It has five attributes (although you can see just four of them in the GUI):\nName (required) - the name of the dataset, used for referencing the dataset in a test IsResultsStored (required) - the boolean flag informing if we have the results cached (true) or we always ask the external source (false) Connection (required) - the connection reference for the dataset retrieval Query (optional) - the query for the dataset definition Results (optional) - the cached dataset (not available in the GUI) You can find all of the attributes in the SsisUnit.xsd file in the source code.\nWhy would you use the dataset? Verifying the numbers is almost always enough, but I have the cases I have to check the actual data after the processing. Like testing the SCD1 or SCD2 attributes, verifying the MERGE statements, checking UPDATEs on the data. I prepare a small reference data and compare it using DataCompareCommand.\nThe goal: compare data in the table (actual data) with the reference data (expected data)\nThe scenario\rI will test loading users data from the staging table to the destination table. I have the stg.Users table with the information from the source system, and I want to transfer it to the dbo.Users table using the rules:\nthe natural key in dbo.Users is (SourceId, SourceSystemId) the natural key in stg.Users is (Id, SourceSystemId) if the record from stg.Users is not in dbo.Users, then add it if the record exists in dbo.Users and has different data than the record in stg.Users, then update it if the record exists in dbo.Users and doesn\u0026rsquo;t exist in stg.Users - mark it as deleted (IsDeleted = 1) when the new record is inserted, the columns InsertedAuditId and UpdatedAuditId have the same value when the record is updated, the column UpdateAuditId is updated, and UpdateAuditId \u0026gt; InsertedAuditId only the records with the changed attributes get an update I will use the MERGE statement for the data loading process. In the package I will also use two _SQL Task_s for auditing. I have meta.Audit table for tracking the executions of the packages and I use meta.uspPackageStart and meta.uspPackageFinish stored procedures to insert and update the audit data. The package looks like in the picture below.\nI will define the tests to run later in the post.\nDefining the ssisUnit connections\rOne of the dataset\u0026rsquo;s attributes is a Connection. It\u0026rsquo;s not any of the connections defined within the SSIS package or the project. It\u0026rsquo;s the connection defined in the ssisUnit test suite.\nThe connection has four attributes:\nConnection string - the connection string to the external source Connection type - the type of the connection (supported types: ConnectionString and Ado.Net) Invariant type - the type of the Ado.Net provider (when used) Reference name - the name of the connection to use in SqlCommand or DatasetCommand I use a Connection string for the database communication. When you add a new connection, you have only the text box for entering the connection string, or you can pick the provider to enter all the data.\nAfter you fill the connection information, the wizard detects the type and displays them in the proper format.\nThe connection reference should be ready to use. But when you try to use it in the SqlCommand right after creating it you will see the error:\nIt\u0026rsquo;s not a ssisUnit problem, but the GUI problem - it has some issues with refreshing the object information. Just save the test, open it again, and everything will work fine.\nThe test, the setup, the teardown\rNow you can use SQL commands to query the database about the data. You can also prepare and destroy data before and after the test during the setup and teardown phases. But first - the test itself. Create a new test SQL MERGE Users: Empty table using the SQL MERGE Users task. Now create the SqlCommand in the TestSetup node using right-click, and add the following code to setup stg.Users data:\nWITH stgUsers AS ( SELECT * FROM ( VALUES (\u0026#39;Name 1\u0026#39;, \u0026#39;Login 1\u0026#39;, 1, 1, 2, -1), (\u0026#39;Name 2\u0026#39;, \u0026#39;Login 2\u0026#39;, 1, 2, 2, -1), (\u0026#39;Name 3\u0026#39;, \u0026#39;Login 3\u0026#39;, 0, 3, 2, -1) )x (Name, Login, IsActive, Id, SourceSystemId, InsertedAuditId) ) INSERT INTO stg.Users ( Name, Login, IsActive, Id, SourceSystemId, InsertedAuditId ) SELECT Name, Login, IsActive, Id, SourceSystemId, InsertedAuditId FROM stgUsers ; You should have something like in the picture below:\nAs you can see - the GUI has a feature of not displaying the name of the command in the test tree on the left.\nAs there is a code to set up the test I also have the code to tidy up after. I use two TRUNCATE TABLE commands to clean up the stg.Users and dbo.Users tables.\n\u0026lt;SqlCommand name=\u0026#34;SqlCommand: TRUNCATE stg.Users\u0026#34; connectionRef=\u0026#34;ssisUnitLearningDB\u0026#34; returnsValue=\u0026#34;false\u0026#34;\u0026gt;TRUNCATE TABLE stg.Users;\u0026lt;/SqlCommand\u0026gt; \u0026lt;SqlCommand name=\u0026#34;SqlCommand: TRUNCATE dbo.Users\u0026#34; connectionRef=\u0026#34;ssisUnitLearningDB\u0026#34; returnsValue=\u0026#34;false\u0026#34;\u0026gt;TRUNCATE TABLE dbo.Users;\u0026lt;/SqlCommand\u0026gt; The test cases\rThere are a few test cases I want to run for the scenario mentioned above. I will cover them in this and following posts:\nthe stg.Users table contains the data, and the dbo.Users table is empty the stg.Users table has the data, that does not exist in the dbo.Users table and dbo.Users table is not empty the stg.Users table is not empty but lacks some records from the dbo.Users table the stg.Users table is empty and the dbo.Users table is not empty Within those test cases I want to check:\nthe number of records loaded to dbo.Users if the attributes are same in stg.Users and dbo.Users if InsertedAuditId is equal to UpdatedAuditId or greater when appropriate if every required record marked with IsDeleted = 1 This time I will not use the variables to hold the results. I will run separate SQL statements to verify the count of the records in the table and write expected and actual datasets code to verify if they match. I will focus on the dataset part.\nI will load three records to the stg.Users table (with the T-SQL code above) and process them with the package. Then I will check if these records are loaded to the dbo.Users table and if they have the same values (excluding audits and the Id column). To verify I will use just the SELECT statement from the code above as the reference data and compare it to the SELECT statement on dbo.Users table.\nCreating and using the datasets (finally!)\rTo create the dataset go to the Datasets node, right-click, and select Add Dataset. Prepare two datasets: Empty table test: expected dataset and Empty table test: actual dataset. At the beginning try to be very descriptive, it helps with getting familiar with the terminology and the process.\nFor the expected dataset use the T-SQL code above changing InsertAuditId = -1 column to IsDeleted = 0, set the ssisUnitLearningDB connection prepared in the previous steps and leave IsResultsStored flag as false. For the actual dataset use a SELECT query on the dbo.Users table. You should get the code similar to this one:\n\u0026lt;DatasetList\u0026gt; \u0026lt;Dataset name=\u0026#34;Empty table test: expected dataset\u0026#34; connection=\u0026#34;ssisUnitLearningDB\u0026#34; isResultsStored=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;query\u0026gt; \u0026lt;![CDATA[SELECT * FROM ( VALUES (\u0026#39;Name 1\u0026#39;, \u0026#39;Login 1\u0026#39;, 1, 1, 2, 0), (\u0026#39;Name 2\u0026#39;, \u0026#39;Login 2\u0026#39;, 1, 2, 2, 0), (\u0026#39;Name 3\u0026#39;, \u0026#39;Login 3\u0026#39;, 0, 3, 2, 0) )x (Name, Login, IsActive, Id, SourceSystemId, IsDeleted) ;]]\u0026gt; \u0026lt;/query\u0026gt; \u0026lt;/Dataset\u0026gt; \u0026lt;Dataset name=\u0026#34;Empty table test: actual dataset\u0026#34; connection=\u0026#34;ssisUnitLearningDB\u0026#34; isResultsStored=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;query\u0026gt; \u0026lt;![CDATA[SELECT Name, Login, IsActive, SourceId, SourceSystemId, IsDeleted FROM dbo.Users ;]]\u0026gt; \u0026lt;/query\u0026gt; \u0026lt;/Dataset\u0026gt; \u0026lt;/DatasetList\u0026gt; To use the dataset in the test use Add Command / DatasetCommand on the right-click menu on the Assert node. Use the created expected and actual datasets and leave TaskResult as Success.\nThe whole test looks like this:\n\u0026lt;Test name=\u0026#34;SQL MERGE Users: Empty table\u0026#34; package=\u0026#34;15_Users_Dataset\u0026#34; task=\u0026#34;{FB549B65-6F0D-4794-BA8E-3FF975A6AE0B}\u0026#34; taskResult=\u0026#34;Success\u0026#34;\u0026gt; \u0026lt;TestSetup\u0026gt; \u0026lt;SqlCommand name=\u0026#34;SqlCommand: setup stg.Users\u0026#34; connectionRef=\u0026#34;ssisUnitLearningDB\u0026#34; returnsValue=\u0026#34;false\u0026#34;\u0026gt;WITH stgUsers AS ( SELECT * FROM ( VALUES (\u0026#39;Name 1\u0026#39;, \u0026#39;Login 1\u0026#39;, 1, 1, 2, -1), (\u0026#39;Name 2\u0026#39;, \u0026#39;Login 2\u0026#39;, 1, 2, 2, -1), (\u0026#39;Name 3\u0026#39;, \u0026#39;Login 3\u0026#39;, 0, 3, 2, -1) )x (Name, Login, IsActive, Id, SourceSystemId, InsertedAuditId) ) INSERT INTO stg.Users ( Name, Login, IsActive, Id, SourceSystemId, InsertedAuditId ) SELECT Name, Login, IsActive, Id, SourceSystemId, InsertedAuditId FROM stgUsers ;\u0026lt;/SqlCommand\u0026gt; \u0026lt;/TestSetup\u0026gt; \u0026lt;Assert name=\u0026#34;Assert: Added 3 records\u0026#34; expectedResult=\u0026#34;3\u0026#34; testBefore=\u0026#34;false\u0026#34; expression=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;SqlCommand name=\u0026#34;\u0026#34; connectionRef=\u0026#34;ssisUnitLearningDB\u0026#34; returnsValue=\u0026#34;true\u0026#34;\u0026gt;SELECT COUNT(*) FROM dbo.Users;\u0026lt;/SqlCommand\u0026gt; \u0026lt;/Assert\u0026gt; \u0026lt;Assert name=\u0026#34;Assert: dbo.Users has expected records\u0026#34; expectedResult=\u0026#34;True\u0026#34; testBefore=\u0026#34;false\u0026#34; expression=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;DataCompareCommand name=\u0026#34;\u0026#34; expected=\u0026#34;Empty table test: expected dataset\u0026#34; actual=\u0026#34;Empty table test: actual dataset\u0026#34; /\u0026gt; \u0026lt;/Assert\u0026gt; \u0026lt;TestTeardown\u0026gt; \u0026lt;SqlCommand name=\u0026#34;SqlCommand: TRUNCATE stg.Users\u0026#34; connectionRef=\u0026#34;ssisUnitLearningDB\u0026#34; returnsValue=\u0026#34;false\u0026#34;\u0026gt;TRUNCATE TABLE stg.users;\u0026lt;/SqlCommand\u0026gt; \u0026lt;/TestTeardown\u0026gt; \u0026lt;/Test\u0026gt; Save the test and hit the Run button.\nThe test failed\nThe actual result (False) did not match the expected result (True). 3 rows differ between the expected \u0026ldquo;Empty table test: expected dataset\u0026rdquo; and actual \u0026ldquo;Empty table test: actual dataset\u0026rdquo; datasets.\nTuning the test\rThere are few things to have in mind when preparing the datasets:\nthe dataset compare checks the data AND the data types if you don\u0026rsquo;t specify ORDER BY, most of the times you don\u0026rsquo;t get the required order When you check the datatypes of the expected and actual datasets you will see the difference:\nEXEC sp_describe_first_result_set N\u0026#39; SELECT * FROM ( VALUES (\u0026#39;\u0026#39;Name 1\u0026#39;\u0026#39;, \u0026#39;\u0026#39;Login 1\u0026#39;\u0026#39;, 1, 1, 2, 0), (\u0026#39;\u0026#39;Name 2\u0026#39;\u0026#39;, \u0026#39;\u0026#39;Login 2\u0026#39;\u0026#39;, 1, 2, 2, 0), (\u0026#39;\u0026#39;Name 3\u0026#39;\u0026#39;, \u0026#39;\u0026#39;Login 3\u0026#39;\u0026#39;, 0, 3, 2, 0) )x (Name, Login, IsActive, Id, SourceSystemId, IsDeleted) \u0026#39;; EXEC sp_describe_first_result_set N\u0026#39; SELECT Name, Login, IsActive, SourceId, SourceSystemId, IsDeleted FROM dbo.Users; \u0026#39;; Column Type (expected) Type (actual) Name VARCHAR(6) VARCHAR(50) Login VARCHAR(7) CHAR(12) IsActive INT BIT SourceId INT INT SourceSystemId INT TINYINT IsDeleted INT BIT Change the expected dataset to match the data types:\nSELECT * FROM ( VALUES (CAST(\u0026#39;Name 1\u0026#39; AS VARCHAR(50)), CAST(\u0026#39;Login 1\u0026#39; AS CHAR(12)), CAST(1 AS BIT), CAST(1 AS INT), CAST(2 AS TINYINT), CAST(0 AS BIT)), (CAST(\u0026#39;Name 2\u0026#39; AS VARCHAR(50)), CAST(\u0026#39;Login 2\u0026#39; AS CHAR(12)), CAST(1 AS BIT), CAST(2 AS INT), CAST(2 AS TINYINT), CAST(0 AS BIT)), (CAST(\u0026#39;Name 3\u0026#39; AS VARCHAR(50)), CAST(\u0026#39;Login 3\u0026#39; AS CHAR(12)), CAST(0 AS BIT), CAST(3 AS INT), CAST(2 AS TINYINT), CAST(0 AS BIT)) )x (Name, Login, IsActive, Id, SourceSystemId, IsDeleted) ORDER BY Id; And add an ORDER BY clause to the actual dataset:\nSELECT Name, Login, IsActive, SourceId, SourceSystemId, IsDeleted FROM dbo.Users ORDER BY SourceId; Now the test should pass.\nSo, when using datasets:\ncreate the connections to the database prepare expected and actual datasets with some T-SQL code make sure the data types in expected and actual sets match (use CAST and CONVERT if necessary) use ORDER BY make sure that the expected and actual datasets have the same number of rows In the next post, I will show you how to persist the datasets in the test file and use the IsResultStored flag.\n",
"ref": "/2018/04/30/using-connections-and-datasets-in-ssisunit/"
},{
"title": "Testing database connections with ssisUnit",
"date": "",
"description": "",
"body": "Previously we successfully prepared tests for variables and parameters using VariableCommandand and ParameterCommand. Now it\u0026rsquo;s time to communicate with the database, and for that, I will use connection manager defined on the project level. I know from the ssisUnit tutorials it works perfect with package connection managers, so it\u0026rsquo;s time to verify it against the projects. I will test the package 10_ProjectCM.dtsx - it is just getting a single value from the table in a database and storing it in a variable. All the packages and unit tests are on my GitHub.\nThe package contains three SQL Tasks: the first just checks if we can communicate with the database using SELECT 1 statement, the second gets the information from the table, and the third repeats the second on the container level.\nThe database\rI\u0026rsquo;m not using database projects in SSDT on a daily basis. This learning project is an excellent reason to use it more, so I\u0026rsquo;m building the database using the SSDT database project. The details of using them are not part of this series, so if you are not familiar with database projects just use the SQL files to create the required objects. The data is stored in the Scripts subfolder.\nThe tests\rThe package 10_ProjectCM.dtsx uses the meta.SourceSystems table to get SourceSystemId value for the package. After retrieval, it stores the value in the variable. So again I\u0026rsquo;m using VariableCommand in my tests. But this time it\u0026rsquo;s for values from the SQL commands.\nFirst - just verify if a connection to the database engine works correctly. It\u0026rsquo;s enough to do just SELECT one = 1 - if it works it means the communication with the database engine works well.\nTo verify it I\u0026rsquo;m writing a test:\n\u0026lt;Test name=\u0026#34;SQL SELECT 1\u0026#34; package=\u0026#34;10\\_ProjectCM\u0026#34; task=\u0026#34;{6781FCBA-83D7-45E3-B946-A8894B6BF924}\u0026#34; taskResult=\u0026#34;Success\u0026#34;\u0026gt; \u0026lt;Assert name=\u0026#34;Assert: Returns 1\u0026#34; expectedResult=\u0026#34;1\u0026#34; testBefore=\u0026#34;false\u0026#34; expression=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;VariableCommand name=\u0026#34;Value\u0026#34; operation=\u0026#34;Get\u0026#34; value=\u0026#34;\u0026#34;/\u0026gt; \u0026lt;/Assert\u0026gt; \u0026lt;/Test\u0026gt; The expected result is 1 - I\u0026rsquo;m assigning the result to the variable named Value so - as I mentioned earlier - I will use the VariableCommand. The two remaining tests work similar - SQL Task executes the SELECT statement against the database and assigns the results to the variables. I\u0026rsquo;m also testing the statement within a container to check if everything works when I\u0026rsquo;m not at the package level scope. The tests are:\n\u0026lt;Test name=\u0026#34;SQL Get SourceSystemId (Package Scope)\u0026#34; package=\u0026#34;10\\_ProjectCM\u0026#34; task=\u0026#34;{CA3E01B3-A958-4A12-81C9-1CB8F40410AF}\u0026#34; taskResult=\u0026#34;Success\u0026#34;\u0026gt; \u0026lt;Assert name=\u0026#34;Assert: SourceSystemId == 2\u0026#34; expectedResult=\u0026#34;2\u0026#34; testBefore=\u0026#34;false\u0026#34; expression=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;VariableCommand name=\u0026#34;SourceSystemId\\_Package\u0026#34; operation=\u0026#34;Get\u0026#34; value=\u0026#34;\u0026#34;/\u0026gt; \u0026lt;/Assert\u0026gt; \u0026lt;/Test\u0026gt; \u0026lt;Test name=\u0026#34;SQL Get SourceSystemId (Container Scope)\u0026#34; package=\u0026#34;10\\_ProjectCM\u0026#34;task=\u0026#34;{64643f10-70c3-49e3-85b8-7b3ddebbe377}\u0026#34; taskResult=\u0026#34;Success\u0026#34;\u0026gt; \u0026lt;Assert name=\u0026#34;Assert: SourceSystemId = 2\u0026#34; expectedResult=\u0026#34;2\u0026#34; testBefore=\u0026#34;false\u0026#34; expression=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;VariableCommand name=\u0026#34;SourceSystemId\\_ContainerScope\u0026#34; operation=\u0026#34;Get\u0026#34; value=\u0026#34;\u0026#34;/\u0026gt; \u0026lt;/Assert\u0026gt; \u0026lt;/Test\u0026gt; Nothing extraordinary. As you see - verifying if the task returns the expected value for the variable just repeat the step from the previous post.\nSummary\rProject level connection managers work well at the package and container level. Out of the box. You don\u0026rsquo;t have to set up the ConnectionRef elements - they are for different purposes (as you will see in the following posts). You deal with the results the same way as with the other variables.\n",
"ref": "/2018/04/13/testing-database-connections-with-ssisunit/"
},{
"title": "Writing first tests with ssisUnit",
"date": "",
"description": "",
"body": "Previously I wrote about the importance of testing the SSIS packages and introduced you to ssisUnit. In this post, I will show you how to write simple tests for the variables and parameters using Test Suite Builder. As I wrote before: just start slow and small, don\u0026rsquo;t write your first tests for the most complicated part of the package.\nCreate a new SSIS project and use the automatically generated Package.dtsx. Open it and add two parameters:\nfirst, a regular parameter, an Int32 type with value 10 second, an encrypted String parameter, with value 123qwe!@# marked as sensitive Then create two variables with the package scope (by default all variables get the package scope):\nfirst - the String with G10 value second - also the String, but this time we use an expression to get value \u0026ldquo;E\u0026rdquo; and the value of the Int32 package parameter: \u0026quot;E\u0026quot; + (DT_WSTR, 2)@[$Package::pPackageParameter] Now let\u0026rsquo;s write the tests for parameters and variables.\nBut first\rIf you have read the Getting Started and the Product Sample Package from the ssisUnit docs you probably saw, that they use the Package Deployment Model, not the Project Deployment Model we use and love since SQL Server 2012. But it\u0026rsquo;s no problem for us, as ssisUnit also supports the projects. It\u0026rsquo;s not the .dtproj files though, but the .ispac files. So - before we prepare our tests we have to compile the project.\nThe package reference\rWe will start from scratch and won\u0026rsquo;t use the New From Package\u0026hellip; option. After the compilation, open the Test Suite Builder, choose File \u0026gt; New\u0026hellip; (or Ctrl + N, or pick the first icon on the toolbar from the left) and go to the Package List node. To test the packages we have to reference them in the test file, and we do that by _PackageRef_s. Right-click on the node and select Add PackageRef. On the right side, click the ProjectPath line and then click the ellipsis to set the .ispac file.\nFor my ssisUnitLearning project, I have it in the ssisUnitLearning\\bin\\Development subfolder. Now pick the package you created - go to the PackagePath line, click the ellipsis and select the package. Leave the rest of the fields with the default values. The last part, for now, is to give the PackageRef a name. I choose the name of the package. In the end, you should have something similar to the picture below. Save the test.\nThe first test\rNow we can start testing the package. Go to the Tests node, right-click and select Add Test. On the PackageLocation line pick the package to test (the list is populated from the _PackageRef_s). Now go to the Task line, and click the ellipsis. If you did everything with the instructions above you should see an error like this:\nThe error states that it can\u0026rsquo;t find our package within the compiled project. And it\u0026rsquo;s correct. When we want to test the package contained in the project we have to give just a name of the package, not the path to the package on disk. Go to the package reference and delete the path to the package file leaving just the name of the package file. Like this:\nNow go back to the test node, click the ellipsis you should see the window with the package name. Pick it and click OK. Give your test a name (I use the package or task name), and save the test file. On the left side, the node still has a name Test1 and is in red. The test name will refresh after you pick a different node on the tree. And the colour is red because you didn\u0026rsquo;t finish the test.\nThe test node is just the container for the assertions. Let\u0026rsquo;s build first - we will check if the pProjectParameter has the value of 10. Right-click the test name node on the tree, select Add Assert and define the ExpectedResult as 10. To remember that I\u0026rsquo;m writing an assert I add \u0026lsquo;Assert:\u0026rsquo; before each assertion. So I end up with something like below (or in the post\u0026rsquo;s header):\nThe name of the assert refreshes in the tree after you select a different node and it will also be red. We have an assertion (what we expect the test to return) and now we have to run a command to return some kind of the result. We are testing the package parameter, so pick the ParameterCommand right-clicking the assert\u0026rsquo;s name.\nWe will test the pPackageParameter in the package, so select Package from the drop-down list as the ParameterType and write the parameter\u0026rsquo;s name in the ParameterName line. Leave the rest with the default values.\nThe test is now ready. Hit the play button on the bar (or use Ctrl + R or use Test Suite \u0026gt; Run Suite from the menu). Tada! Your first test passes!\nThe result shows 1 test run, 1 test passed, 2 asserts run, 2 asserts passed.\nWait, what? We have prepared only one assert, why does it show two?\nThe second assert is: \u0026ldquo;Task Completed: Actual result (Success) was equal to the expected result (Success).\u0026rdquo;. Great. Where does it come from? Let\u0026rsquo;s find out.\nWe have two places (up to now) that use \u0026ldquo;Task completed\u0026rdquo; setting: the test and the command. I will set TaskResult = Failure for the ParameterCommand, save and run again. The test still passes, so it must be the setting on the test node.\nBut before setting up the test node a quick verification: save the test file and open it again. Go to the ParameterCommand node and check the value of TestResult line. It should be Success again, even when you set it to a different value. If you take a look at the source of the .ssisUnit file, you will see, that TestResult setting isn\u0026rsquo;t stored in the configuration.\n\u0026lt;Test name=\u0026#34;01\\_OnlyParametersAndVariables\u0026#34; package=\u0026#34;01\\_OnlyParametersAndVariables.dtsx\u0026#34; task=\u0026#34;{5A229C3E-AE1F-48D1-AEF4-0EEB5C9E081E}\u0026#34; taskResult=\u0026#34;Success\u0026#34;\u0026gt; \u0026lt;Assertname=\u0026#34;Assert: pPackageParameter should be 10\u0026#34; expectedResult=\u0026#34;10\u0026#34; testBefore=\u0026#34;false\u0026#34; expression=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;ParameterCommandname=\u0026#34;\u0026#34; operation=\u0026#34;Get\u0026#34; parameterName=\u0026#34;pPackageParameter\u0026#34; parameterType=\u0026#34;Package\u0026#34; value=\u0026#34;\u0026#34;/\u0026gt; \u0026lt;/Assert\u0026gt; \u0026lt;/Test\u0026gt; Now change the setting for the test and run the suite. The package test didn\u0026rsquo;t finish with a Failure (it ended with the Success), so the other assert didn\u0026rsquo;t even run.\nAs a homework - check how will the test suite run, when you change TestResult to Cancelled or Completed.\nNext tests\rYou successfully completed the first test. Now write the test for the second parameter that is sensible. Take a moment, think about it and try to write on your own. Then compare to the assert and the command below.\nA note here. By default, I have the EncryptSensitiveWithUserKey security setting for my packages, and I run the tests on the same machine I have prepared the packages. So the UserKey security is not an issue. But what if we would run the tests on another computer? Or with the EncryptAllWithPassword setting for example? I will show you in the following posts.\nNow let\u0026rsquo;s switch to the variables. You know how to test the package parameters, the variables are similar so there should be no problem for you to prepare the tests by yourself. It\u0026rsquo;s also easier because when you use VariableCommand, you don\u0026rsquo;t set the Project/Package scope.\nOne question that may come to your mind when testing the variable calculated with an expression is \u0026ldquo;should I use the Expression line setting when testing the expressions?\u0026rdquo;. The answer is NO. The expression is to evaluate some .NET code expression, and it\u0026rsquo;s something to check in the next blog posts, but for now just leave it always as false.\nIf you want to compare your tests for the variables open the source code of your .ssisUnit test file (or switch to the XML tab on the Assert node) and take a look below.\n\u0026lt;Assert name=\u0026#34;Assert: Global variable == G10\u0026#34; expectedResult=\u0026#34;G10\u0026#34; testBefore=\u0026#34;false\u0026#34; expression=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;VariableCommand name=\u0026#34;GlobalVariable\u0026#34; operation=\u0026#34;Get\u0026#34; value=\u0026#34;\u0026#34; /\u0026gt; \u0026lt;/Assert\u0026gt; \u0026lt;Assert name=\u0026#34;Assert: Global variable with expression\u0026#34; expectedResult=\u0026#34;E10\u0026#34; testBefore=\u0026#34;false\u0026#34; expression=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;VariableCommand name=\u0026#34;GlobalVariableWithExpression\u0026#34; operation=\u0026#34;Get\u0026#34; value=\u0026#34;\u0026#34; /\u0026gt; \u0026lt;/Assert\u0026gt; Congratulations. You now know how o write simple tests for parameters and variables.\nThe scope\rThe last thing to check today is the variable\u0026rsquo;s scope. You have checked the variables with the default scope of the package. Now test the variable with the scope of a container.\nCreate an empty Sequence Container SEQC Some container and add a new variable ContainerVariable. Then change the scope of the variable to the container. To do it click the Move Variable button in the Variables window and pick the SEQC Some container node in the Select New Scope window. Then click OK.\nNow write the test for the ContainerVariable just as you wrote the test for the previous variables. Be sure to add the assert and the command at the SEQC Some container level. You should get a test similar to this one:\nWhat would happen if we tried to test the variable on the package level? After you define the same test but in the previous test tree that is made on the package level (01_OnlyParametersAndVariables from the picture above), you will get an error. The assert command failed with the following exception: Failed to lock variable \u0026ldquo;ContainerVariable\u0026rdquo; for read access with error 0xC0010001 \u0026ldquo;The variable cannot be found. This occurs when an attempt is made to retrieve a variable from the Variables collection on a container during execution of the package, and the variable is not there. The variable name may have changed or the variable is not being created.\nThat\u0026rsquo;s all for now. We started from the very beginning with parameters and variables. We checked the variables scope, saw that we can test the sensitive parameters (in some circumstances) and we are ready to test the project parameters. The last one is your homework: create the project parameter and test it with ssisUnit.\nYou can find the sample package 01_OnlyParametersAndVariables.dtsx and ssisUnit test for it on my GitHub.\n",
"ref": "/2018/03/26/writing-first-tests-with-ssisunit/"
},{
"title": "Testing SSIS Projects with ssisUnit",
"date": "",
"description": "",
"body": "During the upcoming SQLDay 2018 conference (10th edition of SQLDay!) I\u0026rsquo;ll be speaking about testing SSIS packages and projects. From my observations, I see that we don\u0026rsquo;t like testing (I\u0026rsquo;m talking about database and ETL people), but when we start doing it - it becomes a natural part of our work. In my current project, we started slow, with some data quality testing for some parts of the process. Today you can hear \u0026ldquo;let\u0026rsquo;s write a test for it\u0026rdquo;, and it\u0026rsquo;s just a regular part of the process.\nI want to take a testing experience a bit further. We already have data quality testing (and the number of tests grows each day), but how about SSIS testing? How can we do it? This post starts the series about testing SSIS packages and projects (mostly projects) with different tools. The first step in our ETL testing will be asking ourselves some questions about testing, then we start doing technical things getting familiar with the ssisUnit framework by John Welch.\nThe ssisUnit project started a long time ago and was hosted on CodePlex, then moved to GitHub. As John works now for Pragmatic Works, the project is also incorporated in the commercial tools and is being developed mainly for their products (BiXpress, LegiTest). The last version of ssisUnit is compiled for SSIS 2014, but you are welcome to use the source code and make all desired changes that will suit your needs.\nBut why?\rWhy would we even think about SSIS testing? Most of the times we already check our work during development, don\u0026rsquo;t we? We carefully craft our packages, run them many times to see the outcome, sometimes we even disable some Control Flow elements and check how the moving parts work alone. We write some SQL to review the data before and after the process. And it\u0026rsquo;s good!\nIt\u0026rsquo;s nothing wrong in testing the packages this way. If it\u0026rsquo;s done at the beginning of your journey to ETL with SSIS. When you work with it for some time you probably see the main three cons:\nit\u0026rsquo;s a manual job it\u0026rsquo;s a manual job it\u0026rsquo;s a manual job (also error-prone, makes you work harder during debugging and you have a headache when you are forced to work with the package looking like a giant spider of tasks, constraints and data flows).\nAnd probably the most important thing: lots of the times you THINK you are testing the package. If it\u0026rsquo;s a simple package for staging the data it\u0026rsquo;s often hard to do something wrong, but what with the more advanced logic? Are you sure you know the data you work with? Do you have some use cases? Probably not. You have some requirements that you discuss with the team (or analyst), and you put the code using the best knowledge you have. And then it starts - data duplicates on some joins, there are nulls when you want to insert data to NOT NULL column, divide by zero, MERGE tries to update the row twice, primary key violations. Sounds familiar?\nWhy do I test?\rBecause of all the above. And more. Because I write a lot of tests when I\u0026rsquo;m programming with PowerShell or C#. Because I\u0026rsquo;ve made all those mistakes and have seen other team members facing them. Because just data quality testing before and after the package run is not enough for me. I want to know if my package is ready to deal with some problems. I want to be sure, that when I change the package in a future, the test will show me the issues before they hit production testing environment. Because someone could change the table I\u0026rsquo;m using in the package (like, say - adding a NOT NULL column at the end) and my package will stop working. Because writing the tests makes me think more about the data and forces me to understand the requirements.\nLast thing before we start\rTesting SSIS packages is hard. The more complex tasks the package is supposed to do, the more complicated testing is. It can take a lot of time to build the tests for the package (and sometimes we can\u0026rsquo;t afford it).\nBut - as with the programming - thinking \u0026ldquo;how could I test this part?\u0026rdquo; impacts your package design. You make it more modular, you start improving logging, creating the tables that hold temporary data for diagnostic purposes. It gives you the comfort of well-done work. Probably you won\u0026rsquo;t test everything, but you have to begin with something. Start with something simple, test one thing. Then test second, third thing. Don\u0026rsquo;t think \u0026ldquo;it\u0026rsquo;s a hell of a job to write the tests for the whole package\u0026rdquo;. It is, but it\u0026rsquo;s like eating an elephant - one piece at a time.\nAfter you start testing, you will change your mindset, and the tests will become the standard tool in your work.\nLet\u0026rsquo;s start\rWhen I started using ssisUnit, I knew almost nothing about it. I just said to myself \u0026ldquo;I will finally start testing SSIS packages, and I will use that thing I\u0026rsquo;ve read about a while ago - ssisUnit\u0026rdquo;.\nI remembered that it\u0026rsquo;s one of the few tools that help to test the SSIS packages. And that it uses XML to define those tests. Nothing more. There are two simple examples of testing individual packages in the documentation (and they are a good entry point), but I wanted to start with testing the packages in the SSIS projects, not the individual packages. Also - you have only the source code that you have to compile yourself so the entry point is not as easy as you might think. But - it\u0026rsquo;s not that complicated. I will show you how to compile ssisUnit in the next posts. For now - you can download the compiled version for SSIS 2017 here.\nWhen you compile it, you have the ssisUnit library, the test runner (command line) and the Test Suite Builder (that also can run the tests in a GUI). The GUI is simple and helps you get started - pick the File \u0026gt; New From Package\u0026hellip; option, choose the package and its tasks, and you\u0026rsquo;re good to write the tests.\nI started with the simple staging package. I\u0026rsquo;ve analysed the examples, watched the recorded sessions by John during the SQLBits (there and there) and prepared my first test for the SQL Task element. And it passed as expected! Wow me! Then I made a second test. And it didn\u0026rsquo;t pass. The program started to throw errors with connection managers (that worked with the previous test). I wrote the third test. It didn\u0026rsquo;t pass, but also didn\u0026rsquo;t throw the engine errors.\nI got confused. And angry - why it doesn\u0026rsquo;t want to work? I also checked the tests in BiXpress - and it gave me exactly the same errors. So I started the project that would help me learn ssisUnit starting with simple tests. Getting the data from a variable, from a variable with an expression, in a container, with a different scope. Each test gave me more insight into the way the ssisUnit works.\nThe picture above is an example of the basic tests I made to learn how to use ssisUnit. I will tell more about them in the next posts. For now, let\u0026rsquo;s talk a bit about\nThe ssisUnit test structure\rssisUnit follows the convention known from another testing frameworks:\nyou can set up individual test, run it, and clean up after it (teardown) the tests are organised in a test suite, that can also have setup and teardown phases the tests execution is automated and repeatable If you create an empty test in the GUI and then save it you have the following content:\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;utf-8\u0026#34; ?\u0026gt; \u0026lt;TestSuite xmlns=\u0026#34;http://tempuri.org/SsisUnit.xsd\u0026#34;\u0026gt; \u0026lt;ConnectionList /\u0026gt; \u0026lt;PackageList /\u0026gt; \u0026lt;DatasetList /\u0026gt; \u0026lt;TestSuiteSetup /\u0026gt; \u0026lt;Setup /\u0026gt; \u0026lt;Tests /\u0026gt; \u0026lt;Teardown /\u0026gt; \u0026lt;TestSuiteTeardown /\u0026gt; \u0026lt;/TestSuite\u0026gt; \u0026lt;TestSuite\u0026gt; groups all the elements. You can set up and tear down the whole suite with - surprise - \u0026lt;TestSuiteSetup\u0026gt; / \u0026lt;TestSuiteTeardown\u0026gt; elements. It\u0026rsquo;s the place where you run all the preparations for the tests to run and clean up after the job is done. The code is run only once. The tests are stored it the \u0026lt;Tests\u0026gt; element, and you can \u0026lt;Setup\u0026gt; and \u0026lt;Teardown\u0026gt; the code that will be applied before and after each test.\nThere are also helper objects:\n\u0026lt;ConnectionList\u0026gt; will hold all the database connections you can use during the testing, \u0026lt;PackageList\u0026gt; contains references to all the packages used within the test suite, \u0026lt;DatasetList\u0026gt; has all the datasets you need for your data compare tests When you start adding the tests you fill the \u0026lt;Tests\u0026gt; element with \u0026lt;Test\u0026gt; elements (you also do the \u0026lt;PackageList\u0026gt; and the \u0026lt;ConnectionList\u0026gt;, but let\u0026rsquo;s skip it for now). The \u0026lt;Test\u0026gt; element contains \u0026lt;Assert\u0026gt;s (how do we expect the result of the test to be), and each assert contains the \u0026lt;Command\u0026gt; element, where we tell the engine what operation it should it do.\nWell, the \u0026lt;Command\u0026gt; element is my global term to the eight possible commands you can run with ssisUnit, but you get the idea.\nThe outcome of the command is then compared with assertions. If they match - the test passes, if not - the test fails. That simple. Below you find an example how the test with one assertion definition looks like (you can have more than one assertion per test).\n\u0026lt;Test name=\u0026#34;Container test\u0026#34; package=\u0026#34;01\\_OnlyParametersAndVariables\u0026#34; task=\u0026#34;{E4C43E00-EC90-4C0D-92CB-CC3D5CD44236}\u0026#34; taskResult=\u0026#34;Success\u0026#34;\u0026gt; \u0026lt;Assert name=\u0026#34;Assert: Container Variable == 42\u0026#34; expectedResult=\u0026#34;42\u0026#34; testBefore=\u0026#34;false\u0026#34; expression=\u0026#34;false\u0026#34;\u0026gt; \u0026lt;VariableCommand name=\u0026#34;ContainerVariable\u0026#34; operation=\u0026#34;Get\u0026#34; value=\u0026#34;\u0026#34; /\u0026gt; \u0026lt;/Assert\u0026gt; \u0026lt;/Test\u0026gt; You can run the test using the GUI or the console test runner. In both situations, you have the simple report of the test suite outcome.\nThe assertion error you see above is expected to fail as I check the variable out of the test\u0026rsquo;s scope.\nTesting is useful\rWorking more and more with the tests I found them easy to write and my brain started to think about more and more things I could test. And it helped me to correct my testing package. I started not just testing the existing part of the package - I started test-driven development.\nI wrote the test for SQL Task that didn\u0026rsquo;t exist yet, then I prepared that task using Ctrl+C, Ctrl+V form another SQL Task (don\u0026rsquo;t tell me that you don\u0026rsquo;t do it!), edited the parts of it and run the tests. And the test failed. Because it found, that I didn\u0026rsquo;t change the variable name for the outcome of the script.\nThis post is just an introduction to SSIS testing. In the next posts, I will show you how to start with writing the first ssisUnit tests and slowly beginning to do something more complicated.\n",
"ref": "/2018/03/19/testing-ssis-projects-with-ssisunit/"
},{
"title": "Series",
"date": "",
"description": "",
"body": "Some of the posts on this blog are the series about one subject. There are:\nUpgrading to SSIS 2017\rSo, you want to migrate SSIS(DB)? Upgrading SSIS projects, part I Upgrading SSIS projects, part II Upgrading SSIS projects, part III SSIS testing with ssisUnit\rTesting SSIS Projects with ssisUnit Writing first tests with ssisUnit Testing database connections with ssisUnit Using Connections and Datasets in ssisUnit Using cached datasets in ssisUnit Executing ssisUnit tests in MSTest framework Setting package references in ssisUnit Testing the loops in ssisUnit Writing ssisUnit test using API Working with properties in ssisUnit Power BI + SSIS + Graphs in SQL Server 2017\rLearn something new – Power BI + SSIS + SQL Server 2017 Graphs Learning something new: connections in SSIS package Learning something new: getting information from SSIS packages with PowerShell (on hold) ",
"ref": "/series/"
},{
"title": "Upgrading SSIS projects - part III",
"date": "",
"description": "",
"body": "In the first part of the series I mentioned two methods of upgrading SSIS projects (well - packages, for now) - Application.Upgrade() and Application.SaveAndUpdateVersionToXml(). This post is about the latter.\nThe documentation of the method is also a bit sparse at the moment, but is self-explanatory:\npublic void SaveAndUpdateVersionToXml ( string fileName, Microsoft.SqlServer.Dts.Runtime.Package package, Microsoft.SqlServer.Dts.Runtime.DTSTargetServerVersion newVersion, Microsoft.SqlServer.Dts.Runtime.IDTSEvents events ); the name of the target file - that\u0026rsquo;s where we save the outcome of the update operation (fileName) the package we want to convert (package) which SSIS version we have in mind (newVersion) an object for the events that happened during the process (events) To load the package I use the Application.LoadPackage() method. It reads package from the file and converts it to the object. Then set target version with the Application.TargetServerVersion and run Application.SaveAndUpdateVersionToXml(). The last thing is to create an empty class for the events, and that\u0026rsquo;s it.\nusing System.IO; using Microsoft.SqlServer.Dts.Runtime; namespace SaveAndUpdateVersionToXML { class Program { static void Main(string\\[\\] args) { // packages to upgrade System.Collections.ArrayList packages = new System.Collections.ArrayList(); // SSIS project directory (to load packages from) string sourceDirectory = @\u0026#34;C:\\\\Users\\\\Administrator\\\\source\\\\repos\\\\MigrationSample\\\\\u0026#34;; // target directory (to save migrated packages) string targetDirectory = @\u0026#34;C:\\\\Users\\\\Administrator\\\\source\\\\repos\\\\MigrationSample.Migrated\\\\\u0026#34;; // add the packages; it\u0026#39;s an example, so I\u0026#39;m adding manualy packages.Add(\u0026#34;Package.dtsx\u0026#34;); packages.Add(\u0026#34;ScriptMigrationTesting.dtsx\u0026#34;); // the events container MyEvents e = new MyEvents(); // we use the appliction object for migration Application a = new Application(); // load and upgrade packages foreach (string package in packages) { // load the package Package p = a.LoadPackage(Path.Combine(sourceDirectory, package), e); // and save to the target location a.SaveAndUpdateVersionToXml(Path.Combine(targetDirectory, package), p, DTSTargetServerVersion.SQLServer2017, e); } } } class MyEvents : DefaultEvents { } } There is less code than in the Application.Upgrade() example, but let\u0026rsquo;s take a closer look at both. There I used two variables for source and target locations of packages. They could be the counterparts of the StorageInfo objects storeinfoSource and storeinfoDest in the ApplicationUpgrade() example. My packages collection is a less specialised version of the UpgradePackageInfo objects collection.\nIt looks like the main difference is that I don\u0026rsquo;t set up the upgrade options, just tell the target version of the package. Previously, the target version was based on the version of the assembly I used in the project (thing to remember: Application.ComponentStorePath).\nSo it is just another approach to convert the packages to the version we want. But - as in the previous part - it\u0026rsquo;s just the packages. The project file is still the same, and when you open it in the Visual Studio it automatically downgrades the packages. Next part of the series will be about the project file itself.\nThe code is also available on GitHub.\n",
"ref": "/2018/02/28/upgrading-ssis-projects-part-iii/"
},{
"title": "Upgrading SSIS projects, part II",
"date": "",
"description": "",
"body": "The problem I want to solve is automation of the SSIS project upgrade. Previously I wrote about the options to use Application.Upgrade() or Application.SaveAndUpdateVersionToXml() methods. This post is about the first of those options.\nIf you take a look at the documentation link provided above you will see just the information about the function and its parameters, nothing more. Luckily, at the time of writing there is another version of the documentation on MSDN: https://msdn.microsoft.com/en-us/library/microsoft.sqlserver.dts.runtime.application.upgrade.aspx with the example of upgrading the packages stored in the filesystem. If the page disappears, I have a backup copy on my GitHub:\nGreat! So, we just copy the example, change the filenames and folders to reflect our project location, and that\u0026rsquo;s it!\nNo, it isn’t. It doesn\u0026rsquo;t work.\nFailed to backup the old package: The given path\u0026rsquo;s format is not supported.\nThe method works, but we need to tweak it a bit. Let\u0026rsquo;s dig a bit more into the API and the example.\nI will use a sample migration project with one package, that creates a table, generates some numbers and inserts them into the created table and at the end, it drops the table. It uses one project connection manager to the tempdb database. The link to the download is at the end of the post.\nSide note: when I test the following code I set up the breakpoint on the app.Upgrade() line and analyse the output object, mostly looking at the Failures collection.\nAlso – when you set up the reference to the Microsoft.SqlServerManagedDTS.dll (located in C:\\Windows\\Microsoft.NET\\assembly\\GAC_MSIL\\Microsoft.SqlServer.ManagedDTS) be sure to check the version you are planning to use. For example – if you check the version 14.0 (v4.0_14.0.0.0__89845dcd8080cc91 subfolder) you will be able to migrate ONLY to SQL Server 2017. If you want to migrate to SQL Server 2016 use version 13.0 (v4.0_13.0.0.0__89845dcd8080cc91 subfolder). The library version is important as it sets up the Application.ComponentStorePath property – the folder to tasks and components of SSIS (e.g. C:\\Program Files (x86)\\Microsoft SQL Server\\140\\DTS for SQL Server 2017)\nApplication.Upgrade()\rFirst things first: take a look at the Application.Upgrade() at the end of the script. What are the parameters of the Upgrade() method:\npublic Microsoft.SqlServer.Dts.Runtime.UpgradeResult Upgrade ( System.Collections.Generic.IEnumerable\u0026lt;Microsoft.SqlServer.Dts.Runtime.UpgradePackageInfo\u0026gt; packages, Microsoft.SqlServer.Dts.Runtime.StorageInfo source, Microsoft.SqlServer.Dts.Runtime.StorageInfo destination, Microsoft.SqlServer.Dts.Runtime.BatchUpgradeOptions options, Microsoft.SqlServer.Dts.Runtime.IDTSEvents events ); the packages we want to upgrade (packages) location of these packages (source) where we want to save the upgraded packages (destination) some options for the upgrading process (options) an object for the events that happened during the process (events) The method returns the UpgradeResult object with the results of the upgrade for each package. The earlier parts of the script are preparing the Application object and all parameters for the Upgrade() method.\nWhy the example doesn\u0026rsquo;t want to work? The UpgradePackageInfo() method documentation - states that we provide the names and the full paths of the packages. The StorageInfo class documentation - the property RootFolder gets or sets the path to the folder where we back up the packages. When we set the RootFolder property it is prepended to the package path provided in the UpgradePackageInfo() – so we have an invalid path. So either we have to remove either RootFolder property or leave just the names of the packages in UpgradePackageInfo(). When you test both options you know, you have to use RootFolder property, otherwise you get an error:\nFailed to backup the old package: A source root folder is not specified.\nI change then the UpgradePackageInfo() and put just the file name. To make things consistent, I will also use the RootFolder for the target location:\nUpgradePackageInfo packinfo1 = new UpgradePackageInfo(\u0026#34;Package.dtsx\u0026#34;, \u0026#34;Package.dtsx\u0026#34;, null); StorageInfo storeinfoDest = StorageInfo.NewFileStorage(); storeinfoDest.RootFolder = \u0026#34;C:\\\\tmp\\\\MigrationSample\u0026#34;; If you run the code, you still get an error, but this time it’s different. This time the Failures collection contains 5 error messages:\n\u0026ldquo;The package format was migrated from version 6 to version 8. It must be saved to retain migration changes.\\r\\n\u0026rdquo; \u0026ldquo;The connection \\\u0026quot;{CE71E990-4590-4AB1-998B-E7AA9C87DE35}\\\u0026rdquo; is not found. This error is thrown by Connections collection when the specific connection element is not found.\\r\\n\u0026quot; \u0026ldquo;The connection \\\u0026quot;{CE71E990-4590-4AB1-998B-E7AA9C87DE35}\\\u0026rdquo; is not found. This error is thrown by Connections collection when the specific connection element is not found.\\r\\n\u0026quot; \u0026ldquo;Succeeded in upgrading the package.\\r\\n\u0026rdquo; \u0026ldquo;The loading of the package Package.dtsx has failed.\u0026rdquo; It looks like the package was migrated from version 6 to 8 [1], the package was upgraded with success [4], but I had the problem with the connection [2], [3] and had some problems with loading the Package.dtsx (it’s about the source package name, not the target).\nSo – did the package upgrade or not? Not. The source package is still the same – when you check the source of the package you see \u0026lt;DTS:Property DTS:Name=\u0026quot;PackageFormatVersion\u0026quot;\u0026gt;6\u0026lt;/DTS:Property\u0026gt;.\nThe problem with the connection is because it uses connection manager at the project level, not the package level. How can we use the information stored in the project file? The BatchUpgradeOptions class has the ProjectPath property where we can set (and get) the full path to the .dtproj file.\nupgradeOpts.ProjectPath = \u0026#34;C:\\\\tmp\\\\MigrationSample\\\\MigrationSample.dtproj\u0026#34;; Now the example works but gives a warning instead of success. The warning contains four messages:\n\u0026ldquo;Failed to decrypt an encrypted XML node. Verify that the project was created by the same user. Project load will attempt to continue without the encrypted information.\u0026rdquo; \u0026ldquo;Failed to decrypt sensitive data in project with a user key. You may not be the user who encrypted this project, or you are not using the same machine that was used to save the project. If the sensitive data is a parameter value, the value may be required to run the package on the Integration Services server.\u0026rdquo; \u0026ldquo;The package format was migrated from version 6 to version 8. It must be saved to retain migration changes.\\r\\n\u0026rdquo; \u0026ldquo;Succeeded in upgrading the package.\\r\\n\u0026rdquo; The source SSIS project was created with default protection level EncryptSensitiveWithUserKey. The user key is created for the user and the machine that package was created (or edited), so when we try to open the package on another machine, it has the problem with decryption of the sensitive data. This project has no sensitive data, so it’s just a standard behaviour of SSIS and we can ignore the warnings.\nUpgradeOptions\rTo set some options for the upgrade process we used the BatchUpgradeOptions class. That’s why we have an additional SSISBackupFolder in our project’s location (we used BackupOldPackages = true so it creates backup copy of each package in the default subfolder). But we can also set the options for the packages using the PackageUpgradeOptions class.\nThe Application class has the property PackageUpgradeOptions of type – you guessed right: PackageUpgradeOptions. We create new object of that class, set the properties and assign it to the Application.PackageUpgradeOptions:\nPackageUpgradeOptions pkgUpgradeOpts = new PackageUpgradeOptions { RegeneratePackageID = true, UpgradeConnectionProviders = true }; app.PackageUpgradeOptions = pkgUpgradeOpts; All of the above can be set in the SSISUpgrade.exe in the configuration window:\nSummary\rThe Application.Upgrade() method works great. It\u0026rsquo;s the same way that we do the upgrade using SSISUpgrade.exe, but we have set it up a bit different than stated in the documentation.\nOne thing to watch - the package may be migrated with success, but the info about it can be stated in the Warnings section of the output.\nAdditional materials\rAll source files are available on GitHub.\n",
"ref": "/2018/02/04/upgrading-ssis-projects-part-ii/"
},{
"title": "Upgrading SSIS projects, part I",
"date": "",
"description": "",
"body": "In the previous post, I wrote about migrating SSISDB database. When we migrate the database the packages still have the version of the source SSIS catalog. When you start the execution of the migrated package, you get the information like \u0026ldquo;The package was migrated from version 6 to version 8. It must be saved to retain migration changes.\u0026rdquo;\nThis information is written to the log no matter which logging level we choose (also with None).\nThe question is: will it blend should we upgrade the packages (or better – the projects)? And if the answer is \u0026lsquo;yes\u0026rsquo; – why should we do it and what are the options?\nShould we upgrade the packages?\rYes. Read on.\nWhy should we do it?\rLooking at the times of the upgrade (it takes milliseconds) we can live with automatic version migrations during thousands of executions. So – is there any gain if we retain it?\nLet\u0026rsquo;s take a closer look at the SSIS Toolbox. We are migrating to SSIS 2017 from the lower version, let’s say the source is SSIS 2012. Open SQL Server Data Tools (for Visual Studio 2015 or 2017, does not matter for now) and load your project. I will use SSDT for VS 2017 with sample project created for SSIS 2012. See the SSIS Toolbox for the project in version SSIS 2012? There is a Script Task following an FTP Task.\nI will upgrade the SSIS project to the latest version (and write more about it in few lines) and take a look at the SSIS Toolbox now.\nNow we can see additional tasks for Hadoop. Upgrading the project does at least two things that are interesting to us: it uses the latest versions of the tasks and components, but also introduces the new elements to use.\nOK, what are the options?\rBasically, you have two:\nUse \u0026ldquo;Upgrade All Packages\u0026rdquo; option when you right-click the \u0026ldquo;SSIS Packages\u0026rdquo; node in the project – it starts the SSIS Package Upgrade Wizard\nSet \u0026ldquo;Target Server Version\u0026rdquo; to SQL Server 2017 when you use \u0026ldquo;Properties \u0026gt; Configuration properties \u0026gt; General\u0026rdquo; page of the project configuration\nWhat is the difference and what are the potential problems? You can read more in the blog post by Hans Michiels, but from my perspective, it sums up to those situations:\nif you set target server version, you also upgrade the .dtproj file and the connection managers (if needed) if you use target server version you don’t see the upgrade errors – you know them when you try to open the package if you use target server version, you don’t have the automatic backup of the packages (but you are warned before the upgrade) you won’t upgrade the package when there is even one error when you use the SSIS Package Upgrade Wizard Personally, I use the setting the Target Server Version approach as I also want to upgrade the .dtproj file. When you use the wizard, you have to change the project file by hand – if you don’t SSDT will automatically downgrade the packages to the lower version to be in line with the project version.\nThe problem\rGreat, we can migrate the packages using SSDT. But what if you have a lot of projects (say – 50) and there is a chance you will have to do the process few times? There is no external tool to migrate the project from the command line (the wizard has no options to run from the cmd - it just opens the application EDIT 29.12.2017: SSISUpgrade.exe has commandline switches, but only for automatic setting of the wizard options), so I started a small investigation.\nUnder the hood, the two processes are a bit different. For now, it looks like using Target Server Version way uses the same mechanism as automatic migration, and the SSIS Package Upgrade Wizard might use methods from the Application class from the Microsoft.SqlServer.Dts.Runtime namespace. I\u0026rsquo;m not 100% sure about it, but after some time with observations of those two processes, I think this is it. I don\u0026rsquo;t use sophisticated tools, don\u0026rsquo;t know yet how to use WinDbg to observe the Windows processes, I just watched the assemblies loaded using the Process Explorer and read methods of the files in the Visual Studio (added references to assemblies or executables and analysed them with Object Browser).\nIf you observe the libraries loaded to SSDT during the setting the target version, you will see it loads Microsoft.SqlServer.PackageFormatUpdate.dll (of course it uses a lot more, but this one has a significant name) that relies on Microsoft.SqlServer.DTSRuntimeWrap. The wizard does not use this library. It uses (besides other .dll files) Microsoft.SqlServer.ManagedDTS and Microsoft.SqlServer.DTSRuntimeWrap.\nAfter reading the documentation of the Application class, I think there is quite an easy way to upgrade the packages automatically with the Application.Upgrade() or Application.SaveAndUpdateVersionToXml() methods. But this is the subject of the second part of the SSIS migration posts.\n",
"ref": "/2017/12/24/upgrading-ssis-projects-part-i/"
},{
"title": "T-SQL Tuesday #96: Folks Who Have Made a Difference",
"date": "",
"description": "",
"body": "This post is a part of T-SQL Tuesday series started by Adam Machanic in 2009.\nEwald Cress (b | t) asked to give a shout-out to people (well-known or otherwise) who have made a meaningful contribution to your life in the world of data.\nAs I was thinking about the people that had the most impact on my career I have to give the biggest kudos to my friend, Leszek Kwaśniewski (t). About six years ago I attended the SQL Server courses about T-SQL programming and database administration where Leszek was one of the trainers. During one of the breaks, he told about user group meetings about SQL Server and invited everybody to come.\nAt that time I only listened, and because I wasn’t interested, I didn’t come. And then forgot about it.\nFast forward three years. Because of some circumstances in my job I reminded myself of the conversation with Leszek and searched Google to find where are the meetings. I also wrote to Leszek to get information if I can attend and so on. It was February 2014. Since then I missed only one group meeting (because of the clash with SQLBits this year that I attended).\nIt was a huge turnaround for me. I wasn’t aware of the power of the SQL community. It started with just attending the meetings; then I was encouraged (Leszek, again) to present some subject at the group meeting (and I never did a public speaking thing before!). After that, it just started getting faster – getting involved in Polish SQL Server User Group (PLSSUG) community (currently: Data Community Poland), presenting at the conferences, co-organizing SQLDay – the most significant SQL Server conference in Poland (and one of the biggest in Europe), getting involved in international activities (like volunteering on SQL Saturdays) …\nThen I have to mention Grant Fritchey (b | t) (and Sajal Dam, but since then I remember only Grant Fritchey). A long time ago I had access to an online library of IT books (Books24x7), with SQL Server titles among them. One of them was SQL Server 2008 Query Performance Tuning Distilled. And it was like enlightenment. For the first time I saw a book that explained so many magic things in such a simple manner. I started to understand the SQL Server better and wanted to learn more. The book was so great, that I said to myself, that I have to have it in print. It was the first time I ordered a book in print that was in English (waited about a month to arrive from the US to Poland). So it was an exceptional moment for me when I could meet Grant Fritchey in person during SQLDay 2016.\nI still have that book and always forget to take it with me to the conferences and ask Grant Fritchey to sign it for me. Maybe next time.\nThe above were turning points in my career, but I have to write about three people that pushed me forward. The nice coincidence - I met them during SQL Saturdays in Vienna.\nThe first person is Cathrine Wilhelmsen (b | t). I’ve heard of BIML before, but never had a real use case scenario for it or the need to find out more about it. Until I went to Cathrine’s session about BIML basics (SQLSatVienna 2016). For the first time I got inspired to actually start with BIML. A few months later I went to her session (SQLSatOslo 2016) to learn about advanced BIML topics. Today I build a lot of SSIS processing using BIML. Because of Cathrine and her passion.\nThe last, but not least – I can\u0026rsquo;t thank enough Chrissy LeMaire (b | t) and Rob Sewell (b | t). At the beginning of this year, I attended their full-day PowerShell workshop (SQLSatVienna 2017). And it was great – Chrissy and Rob know the stuff, know how to teach this stuff and to inspire people to do more. Before the workshop I used PowerShell for a script or two, but nothing special. After the workshop and talks with Chrissy and Rob I started writing more code, and learn more about PowerShell. After weeks of more intensive use of it, I extended and wrote some tools that automate SSIS deployments in my everyday job. Then I was brave enough to write a function for dbatools that was accepted as a pull request and merged into the core module. I don’t have the words to describe how great I felt at that moment.\nIt\u0026rsquo;s tough to pick just a few names and to skip the other (the latter being really hard). I thank all of the #SQLFamily I met over the last years. Each one of you made smaller or bigger impact on my career.\n",
"ref": "/2017/11/14/t-sql-tuesday-96-folks-who-have-made-a-difference/"
},{
"title": "So, you want to migrate SSIS(DB)?",
"date": "",
"description": "",
"body": "Excellent! I wanted to, and after few trials and errors I finally did! And it\u0026rsquo;s pretty easy (as with all things you know when you learn it). For a start I will warn you a bit - SSISDB isn\u0026rsquo;t the database you just backup on one server and restore on another. There are some more steps to do.\nThe same procedure will work for migrations from version 2012 to 2017 or 2016 to 2017, I didn\u0026rsquo;t check (yet) 2014 to 2017. I also tested the migration of 2012 to 2016 version but had some problems with this when it came to upgrading database part. Will investigate it later (probably some problems with SSMS or SQL2016 installation).\nI won\u0026rsquo;t cover all the scenarios. I found few blog posts explaining migration and official documentation is also good. But it took me some time to distil the things that suit my scenario and to understand better why do I need to do them. At the end of the post, I write why I didn\u0026rsquo;t use Copy-DbaSsisCatalog from dbatools.\nThe setup\rLet\u0026rsquo;s do an example setup. I have Integration Services catalog with one folder (Test) that contains one environment (no projects) with two variables:\nString1 (type: String) = 123 String2 (type: String) = 123 (sensitive, encrypted)\nFirst thing - the backup and restore procedure works fine. You can see the folders and projects in Integration services catalog and SSISDB database. The problem starts when you run the package, start validation or want to check the sensitive variables. I will check the last one using dbatools and Get-DbaSsisEnvironmentVariable command (replace \u0026lt;instancename\u0026gt; with your instance if you\u0026rsquo;re using named instances):\nGet-DbaSsisEnvironmentVariable -SqlInstance \u0026#39;\u0026lt;instancename\u0026gt;\\\\SQL2016\u0026#39; -Environment PROD | Select-Object Name,Type,IsSensitive,Value | Format-Table The result:\nI selected only four attributes presented as a table for better readability. Everything is OK. So - back up the database on source instance (.\\SQL2016), restore on another instance (.\\SQL2017) and run again the Get-DbaSsisEnvironmentVariable.\n-- .\\SQL2016 USE [master] BACKUP DATABASE [SSISDB] TO DISK = N\u0026#39;C:\\tmp\\SSISDB2016.bak\u0026#39; WITH DESCRIPTION = N\u0026#39;SSISDB Migration demo - SQL2016\u0026#39;, NOFORMAT, NOINIT, NAME = N\u0026#39;SSISDB-Full Database Backup\u0026#39;, SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 20, CHECKSUM ; GO -- .\\SQL2017 USE [master] RESTORE DATABASE [SSISDB] FROM DISK = N\u0026#39;C:\\tmp\\SSISDB2016.bak\u0026#39; WITH FILE = 1, MOVE N\u0026#39;data\u0026#39; TO N\u0026#39;C:\\Program Files\\Microsoft SQL Server\\MSSQL14.SQL2017\\MSSQL\\DATA\\SSISDB.mdf\u0026#39;, MOVE N\u0026#39;log\u0026#39; TO N\u0026#39;C:\\Program Files\\Microsoft SQL Server\\MSSQL14.SQL2017\\MSSQL\\DATA\\SSISDB.ldf\u0026#39;, NOUNLOAD, STATS = 25 ; GO The result:\nTo get the error message in PowerShell I used a trick I found on Shane\u0026rsquo;s blog post a while ago and just printed the full exception:\n$Error[0].Exception.ToString() Please create a master key in the database or open the master key in the session before performing this operation. The key MS_Enckey_Env_2 is not open. Please open the key before using it.\nHmm, what should we do?\nDatabase Master Key\rLet\u0026rsquo;s begin with a little reminder - what happens when you create Integration Services Catalog? Besides enabling CLR integration (first checkbox) or enabling running [catalog.startup](https://docs.microsoft.com/en-us/sql/integration-services/system-stored-procedures/catalog-startup) procedure on instance restart to fix running processes status (second checkbox) you create the encryption key. Precisely - Database Master Key. You can take a backup of that key and use it after restoring on another instance, but let\u0026rsquo;s skip this for now.\nWhat is a Database Master Key (DMK)? It\u0026rsquo;s the main key used for encryption in the database. It\u0026rsquo;s a symmetric key, which means it uses one password to encrypt and decrypt the objects. By default it is encrypted using Service Master Key (SMK) and a password - that\u0026rsquo;s why we provide the password. In SSISDB the DMK is used to encrypt certificates used in the projects and the environments.\nWe are encouraged to write down the password somewhere, but sometimes we forget where we put it. Or the person responsible for administration doesn\u0026rsquo;t work in the company any longer? What do we do, when we need to move the database to another server and we have no pasword for the key?\nWe panic.\nOr not. There is a nice thing in the documentation (my emphasis):\nFor SQL Server and Parallel Data Warehouse, the Master Key is typically protected by the Service Master Key and at least one password. In case of the database being physically moved to a different server (log shipping, restoring backup, etc.), the database will contain a copy of the master Key encrypted by the original server Service Master Key (unless this encryption was explicitly removed using ALTER MASTER KEY DDL), and a copy of it encrypted by each password specified during either CREATE MASTER KEY or subsequent ALTER MASTER KEY DDL operations.\nWe can use more than one password to encrypt DMK. It\u0026rsquo;s a great help for us if we don\u0026rsquo;t remember the password - it makes the migration easier. There are more use cases for multiple passwords for DMK like password rotation related to retention policy.\nWe may also remove the encryption with Service Master Key and use only the password, but then we always have to use OPEN MASTER KEY before decrypting the data. Using SMK performs this operation for us in the background.\nBack to the migration\rThe database backup contains also the DMK with all its passwords. And if we didn\u0026rsquo;t change the defaults (I didn\u0026rsquo;t) the DMK is also encrypted with SMK. The SSISDB restore operation is made on another instance with different SMK, so there is a problem with decrypting the DMK - it\u0026rsquo;s encrypted with SMK from original server and we try to decrypt DMK with SMK from new server - it doesn\u0026rsquo;t work. One more time - part of the error:\nPlease create a master key in the database or open the master key in the session before performing this operation.\nIf we can\u0026rsquo;t open the DMK with the SMK, we have to open it manually using OPEN MASTER KEY with the password. Then we can sign the DMK with the SMK from the new server. If you know the password - just use it. If not - alter the DMK on the original server by adding new password (optionally removing the binding from SMK) before taking the backup and use it on new server. The T-SQL code looks like this (let\u0026rsquo;s say I don\u0026rsquo; t remember/know the password for the DMK in SSISDB):\n-- .\\SQL2016 USE SSISDB; GO -- I forgot the password, add a new one ALTER MASTER KEY ADD ENCRYPTION BY PASSWORD = \u0026#39;Migration_Password123!\u0026#39; ; -- backup the database with DMK signed with new password USE [master] BACKUP DATABASE [SSISDB] TO DISK = N\u0026#39;C:\\tmp\\SSISDB2016.bak\u0026#39; WITH DESCRIPTION = N\u0026#39;SSISDB Migration demo - SQL2016\u0026#39;, NOFORMAT, NOINIT, NAME = N\u0026#39;SSISDB-Full Database Backup\u0026#39;, SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 20, CHECKSUM ; GO -- .\\SQL2017 USE [master] -- restore from backup, nothing new RESTORE DATABASE [SSISDB] FROM DISK = N\u0026#39;C:\\tmp\\SSISDB2016.bak\u0026#39; WITH FILE = 1, MOVE N\u0026#39;data\u0026#39; TO N\u0026#39;C:\\Program Files\\Microsoft SQL Server\\MSSQL14.SQL2017\\MSSQL\\DATA\\SSISDB.mdf\u0026#39;, MOVE N\u0026#39;log\u0026#39; TO N\u0026#39;C:\\Program Files\\Microsoft SQL Server\\MSSQL14.SQL2017\\MSSQL\\DATA\\SSISDB.ldf\u0026#39;, NOUNLOAD, STATS = 25 ; GO -- open the DMK using the password I just added on original server OPEN MASTER KEY DECRYPTION BY PASSWORD = \u0026#39;Migration_Password123!\u0026#39; -- encrypt the DMK with SMK of the new server ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY ; -- tidy up CLOSE MASTER KEY ; Now I can read SSIS environment variables on the new server. Congratulations, migration successful.\nGet-DbaSsisEnvironmentVariable -SqlInstance \u0026#39;\\SQL2017\u0026#39; -Environment PROD | Select-Object Name,Type,IsSensitive,Value | Format-Table Last thing - at the end we should drop the password we used for migration. We can do this using command\nALTER MASTER KEY DROP ENCRYPTION BY PASSWORD = \u0026#39;Migration_Password123!\u0026#39; ; This is just a migration of SSISDB. Running the packages, upgrading them, solving problems - it\u0026rsquo;s something to cover in another post.\ndbatools\rThere is a Copy-DbaSsisCatalog command in dbatools. It copies the data from one SSIS catalog to another. The thing is - it reads the objects on source server and recreates them on destination server without some things that are crucial to me:\nit creates new objects in the target server, meaning they (may) have different identifiers than in source SSISDB it does not copy environment references - I have to recreate them manually - and I also have to fix my jobs that use the environment reference id in the definition it does not copy encrypted variables; well it does, but it does not copy the sensitive value (the decrypted value is an empty string) I have noticed problems with copying projects on few occasions, but haven\u0026rsquo;t looked into it So the backup/restore/alter DMK procedure is OK for me.\n",
"ref": "/2017/11/06/so-you-want-to-migrate-ssisdb/"
},{
"title": "Learning something new: getting information from SSIS packages with PowerShell",
"date": "",
"description": "",
"body": "In the series of learning something new, I started with analysing of the SSIS package XML. I know what I want to extract, so let the fun begin. I will use Powershell to get the data from the .dtsx files and save it to the database. The whole script is presented below with comments. For more information scroll down.\n# I will use Out-DbaDataTable and Write-DbaDataTable from dbatools, so import it Import-Module dbatools # find recursively all Executable nodes in .dtsx file function Find-Executables($fileName, $executable) { foreach($item in $executable) { # are we there, yet? (if no - recursion) if($item.Executables) { Find-Executables $fileName $item.Executables.Executable } # if yes - the result $prop = @{ \u0026#39;refId\u0026#39; = $item.refId \u0026#39;creationName\u0026#39; = $item.CreationName \u0026#39;description\u0026#39; = $item.Description \u0026#39;objectName\u0026#39; = $item.ObjectName \u0026#39;executableType\u0026#39; = $item.ExecutableType \u0026#39;objectData\u0026#39; = $item.ObjectData.ExpressionTask.Expression } $prop } } # get all Precedence Constraints; simpler than Executables, because all of them are in single node function Find-PrecedenceConstraints($fileName, $precedenceConstraints) { foreach($item in $precedenceConstraints) { $prop = @{ \u0026#39;refId\u0026#39; = $item.refId \u0026#39;from\u0026#39; = $item.From \u0026#39;to\u0026#39; = $item.To \u0026#39;logicalAnd\u0026#39; = [boolean]$item.LogicalAnd \u0026#39;evalOp\u0026#39; = [int]$item.EvalOp \u0026#39;objectName\u0026#39; = $item.ObjectName \u0026#39;expression\u0026#39; = $item.Expression \u0026#39;value\u0026#39; = [int]$item.Value } $prop } } # the data collectors $allPackagesInfo = @() $allExecutables = @() $allPrecedenceConstraints = @() # loop through every .dtsx file in folder; all my packages\u0026#39; names start with 0, so there is an example how to filter it foreach($file in (Get-ChildItem C:\\Users\\brata_000\\Source\\Repos\\SSIS_Graph\\SSIS_Graph\\SSIS_Graph\\* -Include 0*.dtsx)) { # read .dtsx into XML variable [xml] $pkg = Get-Content $file # create hash table with package information $pkgInfo = @{ \u0026#39;Name\u0026#39; = $file.Name \u0026#39;Executables\u0026#39; = Find-Executables $file.Name $pkg.Executable \u0026#39;PrecedenceConstraints\u0026#39; = Find-PrecedenceConstraints $file.Name $pkg.Executable.PrecedenceConstraints.PrecedenceConstraint } # add the table as PSobject to variable with all the package information $allPackagesInfo += New-Object psobject -Property $pkgInfo } # I don\u0026#39;t want to confirm TRUNCATE TABLE for Write-DbaDataTable (when I\u0026#39;m running it few times) $ConfirmPreference = \u0026#39;none\u0026#39; # Having all the information in one place - save it in the database; loop through all the packages (see the filter, again?) $allPackagesInfo | Where-Object -Property Name -Like \u0026#39;0*\u0026#39; | ForEach-Object { $pkgName = $_.Name # all the Executables in the package $_.Executables | ForEach-Object { $d = @{ \u0026#39;pkgName\u0026#39; = $pkgName \u0026#39;refId\u0026#39; = $_.refId \u0026#39;creationName\u0026#39; = $_.creationName \u0026#39;description\u0026#39; = $_.description \u0026#39;objectName\u0026#39; = $_.objectName \u0026#39;executableType\u0026#39; = $_.executableType \u0026#39;objectData\u0026#39; = $_.objectData } $allExecutables += New-Object psobject -Property $d } # all the Precedence Constraints in the package; casting to proper types will automaticaly create # columns of types other than nvarchar(max) when using -AutoCreateTable on Write-DbaDataTable $_.PrecedenceConstraints | ForEach-Object { $d = @{ \u0026#39;pkgName\u0026#39; = $pkgName \u0026#39;refId\u0026#39; = $_.refId \u0026#39;from\u0026#39; = $_.from \u0026#39;to\u0026#39; = $_.to \u0026#39;logicalAnd\u0026#39; = [boolean]$_.logicalAnd \u0026#39;evalOp\u0026#39; = [int\\]$_.evalOp \u0026#39;objectName\u0026#39; = $_.objectName \u0026#39;expression\u0026#39; = $_.expression \u0026#39;value\u0026#39; = [int]$_.value } $allPrecedenceConstraints += New-Object psobject -Property $d } } # I\u0026#39;m using SQL Server for Linux, connecting to the VM, so I use SQL authentication (for now) - why not use \u0026#39;sa\u0026#39; then? $cred = Get-Credential sa #save all Executables to the database $allExecutables | Out-DbaDataTable | Write-DbaDataTable -SqlInstance 127.0.0.1:14333 ` -SqlCredential $cred -Database SSISGraph -Schema dbo -Table Executables -AutoCreateTable -Truncate # save all Precedence Constraints to the database $allPrecedenceConstraints | Out-DbaDataTable | Write-DbaDataTable -SqlInstance 127.0.0.1:14333 ` -SqlCredential $cred -Database SSISGraph -Schema dbo -Table PrecedenceConstraints -AutoCreateTable -Truncate To get all the information I loop through all the files. In Control Flow the interesting data are Executable elements and Precedence Constraints. Because one Executable can contain another Executables (think: Sequence/For/ForEach Containers) I use recursion. It\u0026rsquo;s easier with Precedence Constraints because all of them are located under one XML node. Both functions get $fileName as the first parameter that is not used later. It\u0026rsquo;s because it\u0026rsquo;s one of the versions of the script and I didn\u0026rsquo;t want to remove it as I\u0026rsquo;m doing more tests for later.\nEach information is collected into an array $allPackagesInfo. It looks a bit overcomplicated, but it works. When data is collected I prepare two arrays with objects containing Executables and Precedence Constraints. I cast some of the values to the proper data types - they will be used when preparing data for the database.\nThat gets me to the database layer. To ease the whole process I use dbatools. It contains a lot of great functions, but for now I will only use these two: Out-DbaDataTable and Write-DbaDataTable. The first one prepares data in format that is understood by the second that writes data to the database.\nTo ease the data inserting process I use -AutoCreateTable switch for Write-DbaDataTable. It creates table in database using object created with Out-DbaDataTable. It uses first data row to guess data types and when not found (or it\u0026rsquo;s a string) it creates NVARCHAR(MAX) columns - hence I provide extra info for fields other than string. The (MAX) doesn\u0026rsquo;t bother me too much for now as I build the prototype to be expanded later. Also - I may repeat the loading process few times, so I clear the tables each time with a -Truncate parameter. It will ask me to confirm the data removal, so I set the $ConfirmPreference to none as I don\u0026rsquo;t want to do it.\nThere is one thing I don\u0026rsquo;t like. When using -AutoCreateTable one of the parameters you have to provide is schema of the table to be created. There is a bug that sometimes prevents Write-DbaDataTable from finding the schema in target database. So for now I use dbo schema.\nOK. When I run the script I get populated tables dbo.Executables and dbo.PrecedenceConstraints with fresh data.\nIf you want to go deeper into the .dtsx internals watch André Kamman\u0026rsquo;s (b | t) PASS Summit 2015 session \u0026ldquo;Analyzing your ETL Solution with PowerShell\u0026rdquo; (available to you for free on the PASS site).\nThe next step is to transform the gathered SSIS data to the graph format.\n",
"ref": "/2017/07/26/learning-something-new-getting-information-from-ssis-packages-with-powershell/"
},{
"title": "Connect to SQL Server on Ubuntu Linux VirtualBox machine",
"date": "",
"description": "",
"body": "For my everyday tests with SQL Server I use VirtualBox. SQL Server 2017 is/will be a huge thing - mostly because it will be available on Linux. If so - I should get comfortable with using it on Linux.\nI start with Ubuntu Server (because of the name - I used Ubuntu Desktop in the past), Installation of VM on VirtualBox comes down to adding ISO image as the CD-ROM (DVD-ROM?) and selecting almost only the default options. I don\u0026rsquo;t want to build a cluster or do sophisticated things - I just want the Linux Server to be up and running.\nIt takes just few minutes to install Ubuntu Linux Server (16.04) . The next step is to install SQL Server. Following the official documentation I run just 5 commands to set up and one to verify if the service works well:\ncurl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add - curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server.list | sudo tee /etc/apt/sources.list.d/mssql-server.list sudo apt-get update sudo apt-get install -y mssql-server sudo /opt/mssql/bin/mssql-conf setup systemctl status mssql-server OK, looks easy (and it is!). It really takes few minutes to have SQL Server on Linux experience. Now: I want to connect to SQL Server on VM from my Host machine (Windows). I have my SSMS there, with SQLPrompt (so you understand why I want to use it) - I want to connect to the VM.\nBut.\nMy internal network is 192.168.x.x and Ubuntu Linux creates 10.0.2.15 IP address for the machine. I don\u0026rsquo;t use Linux that often (also not that often on VirtualBox) and I don\u0026rsquo;t want to try allthis things with iptables, ifconfig setup and so on. Is there some magic switch I could use?\nThe internet says: yes, there is - Port Forwarding.\nOpen the settings for the virtual machine, switch to Network tab and expand Advanced part. There is a Port Forwarding button. Don\u0026rsquo;t be shy, just click it.\nOn the Port Forwarding Rules setup window set things as on the picture: Host IP == 127.0.0.1, Host port - one of your choice - this is the port you will connect from Host VM. Guest IP means virtual machine IP and Guest Port is the port of service on VM. SQL Server runs on port 1433 on the Ubuntu Linux with address 10.0.2.15 and I forward it to my local machine to port 14333 (because I want to). The rule name is MSSQL to be clear what is it about, you can give any name you want.\nBest with VM turned off as I\u0026rsquo;ve read to restart the machine after the change (and didn\u0026rsquo;t check it too much, just followed the instructions).\nAnd that\u0026rsquo;s it! Now I use port 14333 (remember to use a comma!) in SSMS and I\u0026rsquo;m connected to VM with SQL Server on Linux.\n",
"ref": "/2017/06/28/connect-to-sql-server-on-ubuntu-linux-virtualbox-machine/"
},{
"title": "Learning something new: connections in SSIS package",
"date": "",
"description": "",
"body": "Starting to learn something new - first step. Let\u0026rsquo;s analyse the code of SSIS package. How does it store the information about the element connections? How can I get that data as graph\u0026rsquo;s edges and nodes? Step by step - building the packages from empty one to more complex I will find how they are stored.\nTo achieve this I will prepare the new SSIS Project and call it SSIS_Graph. It will get new packages each time I will want to check something new. For start I create an empty package, then package with one Control Flow element - I will use empty Data Flow Task and the Sequence Container, then I will create two empty Data Flow Tasks and connect them with one precedence constraint (success). And so on - more precedence constraints, constraints with expressions. I won\u0026rsquo;t paste the code for all the sample packages. It\u0026rsquo;s just to help me get more familiar with DTSX XML and find the patterns. At the beginning, I will concentrate on Control Flow elements. The main goal is to visualise packages dependencies within a project and then other - dependencies between projects or dependencies between package\u0026rsquo;s elements.\nWhen you take a look at the code of two empty Data Flow Tasks connected with precedence constraint you find those elements of interest:\nDTS:Executables - they contain - well - Executables DTS:Executable - the task on the Control Flow area DTS:PrecedenceConstraints - guess? DTS:PrecedenceConstraint - you guessed right! Each element has plenty of attributes, but we mostly care about:\nDTS:ObjectName - exactly what it says on the tin DTS:refId - the path to the element (task) DTS:From (for precedence constraints - starting point) DTS:To (for precedence constraints - finishing point) DTS:LogicalAnd (for precedence constraints - appears when it\u0026rsquo;s AND constraint; does not appear for OR constraints) DTS:value (for precedence constraints - appears for constraints other than Success) DTS:Expresion (when precedence constraint uses expression) DTS:EvalOp (Evaluation Operation? - used when precedence constraint uses an expression) For the graph data, I will take more of the attributes. For now, I just write down the most important of them. It\u0026rsquo;s enough to get me started. The next step: create graph tables.\n",
"ref": "/2017/06/25/learning-something-new-connections-in-ssis-package/"
},{
"title": "Learn something new - Power BI + SSIS + SQL Server 2017 Graphs",
"date": "",
"description": "",
"body": "Recently I attended the AppDev PASS Virtual Group webinar about graphs in SQL Server 2017. When the demo about car manufacturing structure appeared in Power BI (49th minute of the recording - using Force-directed graph plugin) the idea struck: how about visualising SSIS packages\u0026rsquo; relations using graphs and Power BI?\nMaybe it won\u0026rsquo;t work, maybe there are limitations I\u0026rsquo;m not aware right now, but I have to try. This post will serve as the gateway to the series of post where I will write about my findings on graphs in SQL Server, their visualisations, querying, structure defining, Power BI embracement and - of course - SSIS.\nFor a time of writing I know there are some limitations about graph querying for hierarchical data - but only until next CTP, so I will start slow - with defining the sample project and a graph itself. The rough roadmap below:\ncreating sample SSIS project with Sequence Containers, Execute Package Tasks, different precedence constraints internal SSIS package structure analysis - how to turn that data into graphs graph structure modelling and transforming the SSIS package data into graph data visualising it all in Power BI Looks surprisingly easy. Waiting for the first hurdle.\nSeries parts:\nLearn something new – Power BI + SSIS + SQL Server 2017 Graphs Learning something new: connections in SSIS package Learning something new: getting information from SSIS packages with PowerShell (on hold) ",
"ref": "/2017/06/14/learn-something-new-power-bi--ssis--sql-server-2017-graphs/"
},{
"title": "Get the passwords from SSIS Environment variables",
"date": "",
"description": "",
"body": "SSIS has a neat feature - it encodes all the sensitive data when you set it as - well - sensitive. It\u0026rsquo;s a great thing for environment variables - you write once, use it to configure the project and then you can forget the password. Or write it down for later use. In the future. Some day. Maybe.\nBut what if you need to do the configuration on another server? Or you\u0026rsquo;re just curious how to decode encrypted data? SSIS has to do it somehow, right?\nIf you\u0026rsquo;re not curious what happens under the hood then scroll down to see that you can get all the information in few lines of T-SQL code (max 3 if you know the environment\u0026rsquo;s ID and compress SELECT statement). If you want to know more - let\u0026rsquo;s start with investigating environment creation with SQL Profiler. I will create environment ENV1 in folder DWH_ETL.\nSSIS makes few standard calls before each operation (take a look at deployment details), but the most important thing is this part:\nEXEC [SSISDB].[catalog].[create_environment] @environment_name = N\u0026#39;ENV1\u0026#39;, @environment_description = N\u0026#39;\u0026#39;, @folder_name = N\u0026#39;DWH_ETL\u0026#39; Nothing surprising. But have you seen the catalog.create_environment\u0026rsquo;s body? Yeah, me too. So - we have to go deeper. Scrolling down 120 lines of code (user validation, environment existence check) we get to the point. It starts slowly:\nINSERT INTO [internal].[environments] VALUES (@environment_name, @folder_id, @environment_description, @caller_sid, @caller_name, SYSDATETIMEOFFSET()) SET @environment_id = SCOPE_IDENTITY() But then gets interesting:\nSET @encryption_algorithm = (SELECT [internal].[get_encryption_algorithm]()) -- ... skip some code ... SET @key_name = \u0026#39;MS_Enckey_Env_\u0026#39;+CONVERT(varchar,@environment_id) SET @certificate_name = \u0026#39;MS_Cert_Env_\u0026#39;+CONVERT(varchar,@environment_id) SET @sqlString = \u0026#39;CREATE CERTIFICATE \u0026#39; + @certificate_name + \u0026#39; WITH SUBJECT = \u0026#39;\u0026#39;ISServerCertificate\u0026#39;\u0026#39;\u0026#39; IF NOT EXISTS (SELECT [name] FROM [sys].[certificates] WHERE [name] = @certificate_name) EXECUTE sp_executesql @sqlString SET @sqlString = \u0026#39;CREATE SYMMETRIC KEY \u0026#39; + @key_name +\u0026#39; WITH ALGORITHM = \u0026#39; + @encryption_algorithm + \u0026#39; ENCRYPTION BY CERTIFICATE \u0026#39; + @certificate_name IF NOT EXISTS (SELECT [name] FROM [sys].[symmetric_keys] WHERE [name] = @key_name) EXECUTE sp_executesql @sqlString Aha! So, it\u0026rsquo;s creating the certificate and the symmetric key for the environment. Ergo - all encryption within the environment uses dedicated symmetric key encrypted using dedicated certificate.\nGreat - the second step - add a sensitive variable to this environment (using SSMS, let\u0026rsquo;s skip the screenshot). I\u0026rsquo;ll call it VAR1, type - string, value: 123. Dot excluded. SQL Profiler still running - we get this piece of code (of course there\u0026rsquo;s more, but I cut down to the interesting part):\nDECLARE @var sql_variant = N\u0026#39;123\u0026#39; EXEC [SSISDB].[catalog].[create_environment_variable] @variable_name=N\u0026#39;VAR1\u0026#39;, @sensitive=True, @description=N\u0026#39;\u0026#39;, @environment_name=N\u0026#39;ENV1\u0026#39;, @folder_name=N\u0026#39;DWH_ETL\u0026#39;, @value=@var, @data_type=N\u0026#39;String\u0026#39; First conclusion - the value provided has no strong type. It\u0026rsquo;s sql_variant. The String type is just an attribute to this value. It has its consequences later, but for now, it\u0026rsquo;s not that important. As earlier - we have to go deeper - how does catalog.create_environment_variable work?\nAgain - when we take a look inside of the procedure we see user validation, data type validation, environment and folder validation, permission checking and so on. And then we get to this piece of code for encrypted values (EncryptByKey, OPEN SYMMETRIC KEY, CLOSE SYMMETRIC KEY):\nIF (@sensitive = 1) BEGIN SET @key_name = \u0026#39;MS_Enckey_Env_\u0026#39;+CONVERT(varchar,@environment_id) SET @certificate_name = \u0026#39;MS_Cert_Env_\u0026#39;+CONVERT(varchar,@environment_id) SET @sqlString = \u0026#39;OPEN SYMMETRIC KEY \u0026#39; + @key_name + \u0026#39; DECRYPTION BY CERTIFICATE \u0026#39; + @certificate_name EXECUTE sp_executesql @sqlString IF @data_type = \u0026#39;datetime\u0026#39; BEGIN SET @binary_value = EncryptByKey(KEY_GUID(@key_name),CONVERT(varbinary(4000),CONVERT(datetime2,@value))) END ELSE IF @data_type = \u0026#39;single\u0026#39; OR @data_type = \u0026#39;double\u0026#39; OR @data_type = \u0026#39;decimal\u0026#39; BEGIN SET @binary_value = EncryptByKey(KEY_GUID(@key_name),CONVERT(varbinary(4000),CONVERT(decimal(38,18),@value))) END ELSE BEGIN SET @binary_value = EncryptByKey(KEY_GUID(@key_name),CONVERT(varbinary(4000),@value)) END SET @sqlString = \u0026#39;CLOSE SYMMETRIC KEY \u0026#39;+ @key_name EXECUTE sp_executesql @sqlString INSERT INTO [internal].[environment_variables] ([environment_id], [name], [description], [type], [sensitive], [value], [sensitive_value], [base_data_type]) VALUES (@environment_id, @variable_name, @description, @data_type, @sensitive, null, @binary_value, @variable_type) END ELSE BEGIN INSERT INTO [internal].[environment_variables] ([environment_id], [name], [description], [type], [sensitive], [value], [sensitive_value], [base_data_type]) VALUES (@environment_id, @variable_name, @description, @data_type, @sensitive, @value, null, @variable_type) END The sensitive variable is encrypted by symmetric key created during the environment creation. The value is converted to varbinary(4000) and encrypted using the EncryptByKey() function, but with three different paths, according to the type:\ndatetime - converts the value to datetime2, and then to varbinary(4000) number - converts the value to decimal(38, 18), and then to varbinary(4000) other - converts straight to varbinary(4000) At the end, the key is closed and the value is inserted into the internal.environment_variables table to the sensitive_variable column.\nA side note: SMO returns variables for environment from the catalog.environment_variables view, not the internal.environment_variables table. The view does not provide sensitive data.\nLooks easy - to decrypt the variable we need to do almost the same thing as with encryption, but instead of the EncryptByKey() function we have to use DecryptByKey(). Also - as with encryption - we have to consider different paths for data decryption - we\u0026rsquo;ll see why in a moment.\nTo simplify the code I\u0026rsquo;m assuming that my environment_id = 10. Then my encryption key and certificate have _10 suffix. First try: convert all to NVARCHAR(1000)\nOPEN SYMMETRIC KEY MS_Enckey_Env_10 DECRYPTION BY CERTIFICATE MS_Cert_Env_10; SELECT *, val = CONVERT(NVARCHAR(1000), DECRYPTBYKEY(sensitive_value)) FROM SSISDB.internal.environment_variables WHERE environment_id = 10 CLOSE SYMMETRIC KEY MS_Enckey_Env_10; Easy. But - it\u0026rsquo;s only one simple string variable. What happens when we have sensitive data of type - let\u0026rsquo;s say - int? Or float? Because - why not? Sensitive is an attribute of all types of data. Then we have to convert to the proper type. Let\u0026rsquo;s say we have an environment with lots of encrypted variables:\nIf we try to do something like this:\nOPEN SYMMETRIC KEY MS_Enckey_Env_10 DECRYPTION BY CERTIFICATE MS_Cert_Env_10; SELECT *, value = CASE base_data_type WHEN \u0026#39;nvarchar\u0026#39; THEN CONVERT(NVARCHAR(MAX), DECRYPTBYKEY(sensitive_value)) WHEN \u0026#39;bit\u0026#39; THEN CONVERT(bit, DECRYPTBYKEY(sensitive_value)) WHEN \u0026#39;datetime\u0026#39; THEN CONVERT(datetime2(0), DECRYPTBYKEY(sensitive_value)) -- some data types skipped END FROM SSISDB.internal.environment_variables WHERE environment_id = 10 CLOSE SYMMETRIC KEY MS_Enckey_Env_10; we get the errors of operand clash:\nMsg 206, Level 16, State 2, Line 2 Operand type clash: bit is incompatible with datetime2 It\u0026rsquo;s because we try to put more than one type in one column (value). We have to do one more conversion to common type. Let\u0026rsquo;s use NVARCHAR(MAX):\nOPEN SYMMETRIC KEY MS_Enckey_Env_10 DECRYPTION BY CERTIFICATE MS_Cert_Env_10; SELECT *, value = CASE base_data_type WHEN \u0026#39;nvarchar\u0026#39; THEN CONVERT(NVARCHAR(MAX), DECRYPTBYKEY(sensitive_value)) WHEN \u0026#39;bit\u0026#39; THEN CONVERT(NVARCHAR(MAX), CONVERT(bit, DECRYPTBYKEY(sensitive_value))) WHEN \u0026#39;datetime\u0026#39; THEN CONVERT(NVARCHAR(MAX), CONVERT(datetime2(0), DECRYPTBYKEY(sensitive_value))) WHEN \u0026#39;single\u0026#39; THEN CONVERT(NVARCHAR(MAX), CONVERT(DECIMAL(38, 18), DECRYPTBYKEY(sensitive_value))) WHEN \u0026#39;float\u0026#39; THEN CONVERT(NVARCHAR(MAX), CONVERT(DECIMAL(38, 18), DECRYPTBYKEY(sensitive_value))) WHEN \u0026#39;decimal\u0026#39; THEN CONVERT(NVARCHAR(MAX), CONVERT(DECIMAL(38, 18), DECRYPTBYKEY(sensitive_value))) WHEN \u0026#39;tinyint\u0026#39; THEN CONVERT(NVARCHAR(MAX), CONVERT(tinyint, DECRYPTBYKEY(sensitive_value))) WHEN \u0026#39;smallint\u0026#39; THEN CONVERT(NVARCHAR(MAX), CONVERT(smallint, DECRYPTBYKEY(sensitive_value))) WHEN \u0026#39;int\u0026#39; THEN CONVERT(NVARCHAR(MAX), CONVERT(INT, DECRYPTBYKEY(sensitive_value))) WHEN \u0026#39;bigint\u0026#39; THEN CONVERT(NVARCHAR(MAX), CONVERT(bigint, DECRYPTBYKEY(sensitive_value))) END, FROM SSISDB.internal.environment_variables WHERE environment_id = 10 CLOSE SYMMETRIC KEY MS_Enckey_Env_10; I used only 10 types of variables, but we all know there is more of the types. But - when I checked the base data type for each variable I used (and every type was used there) I got only these 10 types. So I\u0026rsquo;ll stick with them.\nBottomline: to decrypt environment variable you need to:\nknow your environment\u0026rsquo;s Id open symmetric key decrypted by a certificate - for this environment use DecryptByKey() function and cast variable to proper type close symmetric key (tidy up) The last thing: if you are sysadmin you can do it all without thinking about the permissions. But if you\u0026rsquo;re not - these are minimal privileges to decrypt data (tested on SQL Server 2014):\nVIEW DEFINITION on SYMMETRIC KEY CONTROL on CERTIFICATE SELECT on internal.environment_variables ",
"ref": "/2017/06/11/get-the-passwords-from-ssis-environment-variables/"
},{
"title": "What happens during SSIS deployments?",
"date": "",
"description": "",
"body": "When you deploy SSIS project basically you have two options - right click on project name and standalone tool (let\u0026rsquo;s skip SMO and stuff). Both mean the same: IsDeploymentWizard.exe. I was curious what happens during deployment and why mode/Silent finishes deployment very quickly, so I started digging.\nFirst I prepared sample SSIS project. Nothing extraordinary - just 5 packages, 6 project parameters and no connection managers (who needs them anyway?). Each package contained empty Data Flow Task - so you see that all just to compile something more than a single package. Later during tests, I added connection managers, but they were treated the same as project parameters, so I skipped them (the same with package parameters). To observe the environment I used good old SQL Profiler and Process Explorer.\nFirst - the SQL Profiler. I prepared only two operations to capture:\nRPC:Completed SQL:BatchCompleted Didn\u0026rsquo;t set up the columns, just accepted the defaults and hit Run. On a busy machine I could use a filter on Application Name column (Like: SSIS% - two applications are involved). As the result I got 27 lines of T-SQL commands\nWhat do we have here? First ten lines are five commands repeated two times. They check for the server version and settings. At first they are called when deployment wizard starts and connects to the SSISDB to check if everything is OK. The second time it starts the whole deployment procedure. The lines are:\n-- ========= 1 ========= -- Check for server environment -- ===================== DECLARE @edition sysname; SET @edition = cast(SERVERPROPERTY(N\u0026#39;EDITION\u0026#39;) as sysname); select case when @edition = N\u0026#39;SQL Azure\u0026#39; then 2 else 1 end as \u0026#39;DatabaseEngineType\u0026#39;; SELECT SERVERPROPERTY(\u0026#39;EngineEdition\u0026#39;) AS DatabaseEngineEdition if exists (select 1 from sys.all_objects where name = \u0026#39;dm_os_host_info\u0026#39; and type = \u0026#39;V\u0026#39; and is_ms_shipped = 1) begin select host_platform from sys.dm_os_host_info end else select N\u0026#39;Windows\u0026#39; as host_platform go -- ========= 2 ========= -- Get server name -- ===================== select SERVERPROPERTY(N\u0026#39;servername\u0026#39;) go -- ========= 3 ========= -- Get SSIS Catalog information -- ===================== exec sp_executesql N\u0026#39; --Preparing to access the Catalog object DECLARE @t_catalogs TABLE ( Name sysname COLLATE SQL_Latin1_General_CP1_CI_AS, EncryptionAlgorithm nvarchar(256) COLLATE SQL_Latin1_General_CP1_CI_AS, SchemaVersion int, SchemaBuild nvarchar(256) COLLATE SQL_Latin1_General_CP1_CI_AS, OperationLogRetentionTime int, MaxProjectVersions int, OperationCleanupEnabled bit, VersionCleanupEnabled bit, ServerLoggingLevel int, ServerCustomizedLoggingLevel nvarchar(128)) IF DB_ID(\u0026#39;\u0026#39;SSISDB\u0026#39;\u0026#39;) IS NOT NULL BEGIN INSERT INTO @t_catalogs VALUES( \u0026#39;\u0026#39;SSISDB\u0026#39;\u0026#39;, (SELECT [property_value] FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;ENCRYPTION_ALGORITHM\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SCHEMA_VERSION\u0026#39;\u0026#39;), (SELECT [property_value] FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SCHEMA_BUILD\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;RETENTION_WINDOW\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;MAX_PROJECT_VERSIONS\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS BIT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;OPERATION_CLEANUP_ENABLED\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS BIT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;VERSION_CLEANUP_ENABLED\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SERVER_LOGGING_LEVEL\u0026#39;\u0026#39;), (SELECT [property_value] FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SERVER_CUSTOMIZED_LOGGING_LEVEL\u0026#39;\u0026#39;) ) END SELECT \u0026#39;\u0026#39;IntegrationServices[@Name=\u0026#39;\u0026#39; + quotename(CAST(SERVERPROPERTY(N\u0026#39;\u0026#39;Servername\u0026#39;\u0026#39;) AS sysname),\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;) + \u0026#39;\u0026#39;]\u0026#39;\u0026#39; + \u0026#39;\u0026#39;/Catalog[@Name=\u0026#39;\u0026#39; + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + REPLACE((SELECT Name from @t_catalogs), \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;, \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;) + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + \u0026#39;\u0026#39;]\u0026#39;\u0026#39; AS [Urn], (SELECT Name from @t_catalogs) AS [Name], (SELECT EncryptionAlgorithm from @t_catalogs) AS [EncryptionAlgorithm], (SELECT SchemaVersion from @t_catalogs) AS [SchemaVersion], (SELECT SchemaBuild from @t_catalogs) AS [SchemaBuild], (SELECT OperationLogRetentionTime from @t_catalogs) AS [OperationLogRetentionTime], (SELECT MaxProjectVersions from @t_catalogs) AS [MaxProjectVersions], (SELECT OperationCleanupEnabled from @t_catalogs) AS [OperationCleanupEnabled], (SELECT VersionCleanupEnabled from @t_catalogs) AS [VersionCleanupEnabled], (SELECT ServerLoggingLevel from @t_catalogs) AS [ServerLoggingLevel], (SELECT ServerCustomizedLoggingLevel from @t_catalogs) AS [ServerCustomizedLoggingLevel] WHERE (CAST(SERVERPROPERTY(N\u0026#39;\u0026#39;Servername\u0026#39;\u0026#39;) AS sysname)=@_msparam_0)\u0026#39;,N\u0026#39;@_msparam_0 nvarchar(4000)\u0026#39;,@_msparam_0=N\u0026#39;WIN-LFVELR2F095\u0026#39; -- ========== 4 ========== -- Get terget folder information -- ======================= exec sp_executesql N\u0026#39; --Preparing to access the Catalog object DECLARE @t_catalogs TABLE ( Name sysname COLLATE SQL_Latin1_General_CP1_CI_AS, EncryptionAlgorithm nvarchar(256) COLLATE SQL_Latin1_General_CP1_CI_AS, SchemaVersion int, SchemaBuild nvarchar(256) COLLATE SQL_Latin1_General_CP1_CI_AS, OperationLogRetentionTime int, MaxProjectVersions int, OperationCleanupEnabled bit, VersionCleanupEnabled bit, ServerLoggingLevel int, ServerCustomizedLoggingLevel nvarchar(128)) IF DB_ID(\u0026#39;\u0026#39;SSISDB\u0026#39;\u0026#39;) IS NOT NULL BEGIN INSERT INTO @t_catalogs VALUES( \u0026#39;\u0026#39;SSISDB\u0026#39;\u0026#39;, (SELECT [property_value] FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;ENCRYPTION_ALGORITHM\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SCHEMA_VERSION\u0026#39;\u0026#39;), (SELECT [property_value] FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SCHEMA_BUILD\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;RETENTION_WINDOW\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;MAX_PROJECT_VERSIONS\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS BIT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;OPERATION_CLEANUP_ENABLED\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS BIT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;VERSION_CLEANUP_ENABLED\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SERVER_LOGGING_LEVEL\u0026#39;\u0026#39;), (SELECT [property_value] FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SERVER_CUSTOMIZED_LOGGING_LEVEL\u0026#39;\u0026#39;) ) END SELECT \u0026#39;\u0026#39;IntegrationServices[@Name=\u0026#39;\u0026#39; + quotename(CAST(SERVERPROPERTY(N\u0026#39;\u0026#39;Servername\u0026#39;\u0026#39;) AS sysname),\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;) + \u0026#39;\u0026#39;]\u0026#39;\u0026#39; + \u0026#39;\u0026#39;/Catalog[@Name=\u0026#39;\u0026#39; + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + REPLACE((SELECT Name from @t_catalogs), \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;, \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;) + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + \u0026#39;\u0026#39;]\u0026#39;\u0026#39; + \u0026#39;\u0026#39;/CatalogFolder[@Name=\u0026#39;\u0026#39; + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + REPLACE(folders.[name], \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;, \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;) + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + \u0026#39;\u0026#39;]\u0026#39;\u0026#39; AS [Urn], folders.[folder_id] AS [FolderId], folders.[name] AS [Name], folders.[description] AS [Description], folders.[created_by_sid] AS [CreatedBySid], folders.[created_by_name] AS [CreatedByName], CAST (folders.[created_time] AS datetime) AS [CreatedDate] FROM [SSISDB].[catalog].[folders] AS folders WHERE ((SELECT Name from @t_catalogs)=@_msparam_0)and((CAST(SERVERPROPERTY(N\u0026#39;\u0026#39;Servername\u0026#39;\u0026#39;) AS sysname)=@_msparam_1))\u0026#39;,N\u0026#39;@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000)\u0026#39;,@_msparam_0=N\u0026#39;SSISDB\u0026#39;,@_msparam_1=N\u0026#39;WIN-LFVELR2F095\u0026#39; -- ========== 5 ========== -- Get target project information -- ======================= exec sp_executesql N\u0026#39; --Preparing to access the Catalog object DECLARE @t_catalogs TABLE ( Name sysname COLLATE SQL_Latin1_General_CP1_CI_AS, EncryptionAlgorithm nvarchar(256) COLLATE SQL_Latin1_General_CP1_CI_AS, SchemaVersion int, SchemaBuild nvarchar(256) COLLATE SQL_Latin1_General_CP1_CI_AS, OperationLogRetentionTime int, MaxProjectVersions int, OperationCleanupEnabled bit, VersionCleanupEnabled bit, ServerLoggingLevel int, ServerCustomizedLoggingLevel nvarchar(128)) IF DB_ID(\u0026#39;\u0026#39;SSISDB\u0026#39;\u0026#39;) IS NOT NULL BEGIN INSERT INTO @t_catalogs VALUES( \u0026#39;\u0026#39;SSISDB\u0026#39;\u0026#39;, (SELECT [property_value] FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;ENCRYPTION_ALGORITHM\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SCHEMA_VERSION\u0026#39;\u0026#39;), (SELECT [property_value] FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SCHEMA_BUILD\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;RETENTION_WINDOW\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;MAX_PROJECT_VERSIONS\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS BIT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;OPERATION_CLEANUP_ENABLED\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS BIT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;VERSION_CLEANUP_ENABLED\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SERVER_LOGGING_LEVEL\u0026#39;\u0026#39;), (SELECT [property_value] FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SERVER_CUSTOMIZED_LOGGING_LEVEL\u0026#39;\u0026#39;) ) END SELECT \u0026#39;\u0026#39;IntegrationServices[@Name=\u0026#39;\u0026#39; + quotename(CAST(SERVERPROPERTY(N\u0026#39;\u0026#39;Servername\u0026#39;\u0026#39;) AS sysname),\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;) + \u0026#39;\u0026#39;]\u0026#39;\u0026#39; + \u0026#39;\u0026#39;/Catalog[@Name=\u0026#39;\u0026#39; + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + REPLACE((SELECT Name from @t_catalogs), \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;, \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;) + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + \u0026#39;\u0026#39;]\u0026#39;\u0026#39; + \u0026#39;\u0026#39;/CatalogFolder[@Name=\u0026#39;\u0026#39; + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + REPLACE(folders.[name], \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;, \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;) + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + \u0026#39;\u0026#39;]\u0026#39;\u0026#39; + \u0026#39;\u0026#39;/ProjectInfo[@Name=\u0026#39;\u0026#39; + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + REPLACE(projects.[name], \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;, \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;) + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + \u0026#39;\u0026#39;]\u0026#39;\u0026#39; AS [Urn], projects.[project_id] AS [ProjectId], projects.[folder_id] AS [FolderId], projects.[name] AS [Name], projects.[description] AS [Description], projects.[project_format_version] AS [ProjectFormatVersion], projects.[deployed_by_sid] AS [DeployedBySid], projects.[deployed_by_name] AS [DeployedByName], CAST (projects.[last_deployed_time] AS datetime) AS [LastDeployedTime], CAST (projects.[created_time] AS datetime) AS [CreatedTime], projects.[object_version_lsn] AS [ObjectVersionLsn], projects.[validation_status] AS [ValidationStatus], CAST (projects.[last_validation_time] AS datetime) AS [LastValidationTime] FROM [SSISDB].[catalog].[folders] AS folders INNER JOIN [SSISDB].[catalog].[projects] AS projects ON projects.[folder_id]=folders.[folder_id] WHERE (folders.[name]=@_msparam_0)and(((SELECT Name from @t_catalogs)=@_msparam_1)and((CAST(SERVERPROPERTY(N\u0026#39;\u0026#39;Servername\u0026#39;\u0026#39;) AS sysname)=@_msparam_2)))\u0026#39;,N\u0026#39;@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000),@_msparam_2 nvarchar(4000)\u0026#39;,@_msparam_0=N\u0026#39;Test\u0026#39;,@_msparam_1=N\u0026#39;SSISDB\u0026#39;,@_msparam_2=N\u0026#39;WIN-LFVELR2F095\u0026#39; When I run the steps 3, 4, 5 I got those results (pivoted for better readability):\nStep 3 Urn IntegrationServices[@Name=\u0026lsquo;WIN-LFVELR2F095\u0026rsquo;]/Catalog[@Name=\u0026lsquo;SSISDB\u0026rsquo;] Name SSISDB EncryptionAlgorithm AES_256 SchemaVersion 5 SchemaBuild 14.0.500.272 OperationLogRetentionTime 365 MaxProjectVersions 10 OperationCleanupEnabled 1 VersionCleanupEnabled 1 ServerLoggingLevel 1 ServerCustomizedLoggingLevel Step 4 Urn IntegrationServices[@Name=\u0026lsquo;WIN-LFVELR2F095\u0026rsquo;]/Catalog[@Name=\u0026lsquo;SSISDB\u0026rsquo;]/CatalogFolder[@Name=\u0026lsquo;Test\u0026rsquo;] FolderId 1 Name Test Description CreatedBySid 0x0105000000000005150000003418BD4E479DAA9DB7F7C36DF4010000 CreatedByName WIN-LFVELR2F095\\Administrator CreatedDate 2017-04-23 12:31:23.030 Step 5 Urn IntegrationServices[@Name=\u0026lsquo;WIN-LFVELR2F095\u0026rsquo;]/Catalog[@Name=\u0026lsquo;SSISDB\u0026rsquo;]/CatalogFolder[@Name=\u0026lsquo;Test\u0026rsquo;]/ProjectInfo[@Name=\u0026lsquo;TestSSIS\u0026rsquo;] ProjectId 1 FolderId 1 Name TestSSIS Description ProjectFormatVersion 1 DeployedBySid 0x0105000000000005150000003418BD4E479DAA9DB7F7C36DF4010000 DeployedByName WIN-LFVELR2F095\\Administrator LastDeployedTime 30.04.2017 01:30 CreatedTime 23.04.2017 12:32 ObjectVersionLsn 14 ValidationStatus N LastValidationTime NULL Excellent. What next? Until now the commands were issued by SSIS Deployment Wizard application. Next few lines come from SSIS ISServerExec. The first one is pretty simple, yet I have to think more about it:\nexec sp_executesql N\u0026#39;SELECT [operation_id] FROM [catalog].[operations] WHERE [operation_id] = @operation_id AND [status] = @status\u0026#39;,N\u0026#39;@operation_id bigint,@status int\u0026#39;,@operation_id=5,@status=2 The question is: how does the SSIS ISServerExec proces know the value of @operation_id? @status = 2 means it checks if process is running.\nTo get the answer we have to take a closer look at the profiler output. The entire deployment finishes in 7 seconds, so it\u0026rsquo;s hard to spot, but when I did I asked myself why I didn\u0026rsquo;t see it before?\nTo tell you the truth I saw the operations times just after analysing catalog.deploy_project procedure (see the line starting with declare @p4). It has a WHILE loop inside waiting for some output. But output coming from what?\nThe SSIS ISServerExec data in the profiler window (3 - 20:49:30) is shown before the deployment process start (2 - 20:49:28). Suddenly all becomes clear - catalog.deploy_project waits for ISServerExec to finish its job. It checks for deployment status each second and finishes when the status is set as not running (@status \u0026lt;\u0026gt; 2). So, to analyse the process I have to start with the line (2). @project_stream is cut for readability.\ndeclare @p4 bigint set @p4=default exec [SSISDB].[catalog].[deploy_project] @folder_name=N\u0026#39;Test\u0026#39;,@project_name=N\u0026#39;TestSSIS\u0026#39;,@project_stream=`0x504B03041400000008002E`\u0026lt;...\u0026gt;,@operation_id=@p4 output select @p4 Afterwards I noted that if I used RPC:Starting event I would spot it earlier - the starting event is before start of ISServerExec.\nThe main part of the deployment has started and now we are on SSIS ISServerExec side. The next line gets project binary data provided by catalog.deploy_project .\nexec [internal].[get_project_internal] @project_version_lsn=2,@project_id=1,@project_name=N\u0026#39;TestSSIS\u0026#39; As a result, we get our project stream (0x504B03041400000008002E...) What\u0026rsquo;s interesting - this command is run by another user - the NTUserName column in Profiler shows SID S-1-9-3... We can check what user is this using simple conversion, but I was lazy and just used the script I found on the internet. The SID points to AllSchemaUser user in SSISDB database. And when we take a look into internal.get_project_internal procedure we see, that it\u0026rsquo;s run in a context of AllSchemaUser.\nThe next commands insert package data into internal.packages table using internal.append_packages procedure and custom table type internal.PackageTableType. The information comes from the project stream. As the .ispac is just a zip file with different extension the SSIS ISServerExec unpacks it and gets the content.\ndeclare @p3 internal.PackageTableType insert into @p3 values(N\u0026#39;Package01.dtsx\u0026#39;,\u0026#39;EB3054D0-31D0-4476-A922-EC9CE30789DB\u0026#39;,N\u0026#39;\u0026#39;,8,1,0,2,N\u0026#39;\u0026#39;,\u0026#39;33FD504B-57AB-4659-A4D5-F1336537A3A6\u0026#39;,1,N\u0026#39;N\u0026#39;,NULL,NULL) insert into @p3 values(N\u0026#39;Package05.dtsx\u0026#39;,\u0026#39;412802DC-3D72-43E3-9D7D-BF731685676A\u0026#39;,N\u0026#39;\u0026#39;,8,1,0,3,N\u0026#39;\u0026#39;,\u0026#39;29869950-CA90-4D54-BA4E-1AFD2DE6794E\u0026#39;,1,N\u0026#39;N\u0026#39;,NULL,NULL) insert into @p3 values(N\u0026#39;Package04.dtsx\u0026#39;,\u0026#39;C57A813A-32CB-485D-9B34-8AFA5797B825\u0026#39;,N\u0026#39;\u0026#39;,8,1,0,3,N\u0026#39;\u0026#39;,\u0026#39;7B4935B4-C59E-4F89-80C1-8D9C17C3E5E3\u0026#39;,1,N\u0026#39;N\u0026#39;,NULL,NULL) insert into @p3 values(N\u0026#39;Package03.dtsx\u0026#39;,\u0026#39;149A4EFF-05A6-4F3E-AB85-F7C07D271B37\u0026#39;,N\u0026#39;\u0026#39;,8,1,0,3,N\u0026#39;\u0026#39;,\u0026#39;2880FA80-E65A-4153-8EE3-BDB71A35FF60\u0026#39;,1,N\u0026#39;N\u0026#39;,NULL,NULL) insert into @p3 values(N\u0026#39;Package02.dtsx\u0026#39;,\u0026#39;A2D88A81-51B7-433B-9B11-696478AC0594\u0026#39;,N\u0026#39;\u0026#39;,8,1,0,3,N\u0026#39;\u0026#39;,\u0026#39;4B68DACB-EE94-4580-95EF-4FF62BFCF121\u0026#39;,1,N\u0026#39;N\u0026#39;,NULL,NULL) exec [internal].[append_packages] @project_id=1,@object_version_lsn=2,@packages_data=@p3 Then we add all parameters with internal.append_parameter procedure. We have six project parameters (and zero connection managers, zero package parameters) so we call it six times. Each procedure execution is made in another database call, so if we have a lot of parameters then we do a lot of single database queries. If we had connection managers and package parameters they would be also added with internal.append_parameter. With a small remark - each part of the connection string is treated as a different parameter.\nAll the information go to the internal.object_parameters table.\nexec [internal].[append_parameter] @project_id=1,@object_version_lsn=2,@object_name=N\u0026#39;TestSSIS\u0026#39;,@object_type=20,@parameter_name=N\u0026#39;Parameter1\u0026#39;,@parameter_data_type=N\u0026#39;Int32\u0026#39;,@required=0,@sensitive=0,@description=N\u0026#39;\u0026#39;,@design_default_value=-1,@value_set=0 go exec [internal].[append_parameter] @project_id=1,@object_version_lsn=2,@object_name=N\u0026#39;TestSSIS\u0026#39;,@object_type=20,@parameter_name=N\u0026#39;Parameter2\u0026#39;,@parameter_data_type=N\u0026#39;String\u0026#39;,@required=0,@sensitive=0,@description=N\u0026#39;\u0026#39;,@design_default_value=N\u0026#39;\u0026#39;,@value_set=0 go exec [internal].[append_parameter] @project_id=1,@object_version_lsn=2,@object_name=N\u0026#39;TestSSIS\u0026#39;,@object_type=20,@parameter_name=N\u0026#39;Parameter3\u0026#39;,@parameter_data_type=N\u0026#39;UInt32\u0026#39;,@required=0,@sensitive=0,@description=N\u0026#39;\u0026#39;,@design_default_value=333,@value_set=0 go exec [internal].[append_parameter] @project_id=1,@object_version_lsn=2,@object_name=N\u0026#39;TestSSIS\u0026#39;,@object_type=20,@parameter_name=N\u0026#39;Parameter4\u0026#39;,@parameter_data_type=N\u0026#39;Boolean\u0026#39;,@required=0,@sensitive=0,@description=N\u0026#39;\u0026#39;,@design_default_value=0,@value_set=0 go exec [internal].[append_parameter] @project_id=1,@object_version_lsn=2,@object_name=N\u0026#39;TestSSIS\u0026#39;,@object_type=20,@parameter_name=N\u0026#39;Parameter5\u0026#39;,@parameter_data_type=N\u0026#39;Single\u0026#39;,@required=0,@sensitive=0,@description=N\u0026#39;\u0026#39;,@design_default_value=5,@value_set=0 go exec [internal].[append_parameter] @project_id=1,@object_version_lsn=2,@object_name=N\u0026#39;TestSSIS\u0026#39;,@object_type=20,@parameter_name=N\u0026#39;Parameter6\u0026#39;,@parameter_data_type=N\u0026#39;Int64\u0026#39;,@required=0,@sensitive=0,@description=N\u0026#39;\u0026#39;,@design_default_value=666666,@value_set=0 go Aside sample of sample ADO.NET connection manager parametrization:\nOK. Parameters set up. Now a piece of code I don\u0026rsquo;t fully understand. It runs a synchronisation of parameters between latest project version and currently deployed version:\nexec [internal].[sync_parameter_versions] @project_id=1,@object_version_lsn=2 What I don\u0026rsquo;t get is why do we sync current version with the latest version, when the current version is already the latest? I don\u0026rsquo;t think it has something with concurrent deployments as the procedures start transactions in SERIALIZABLE isolation level. Will have to investigate it further.\nAlmost the end. The last command run as SSIS ISServerExec application is updating the deployment status as a great success (@status = 7). Looks like nothing fancy, just simple update, but there\u0026rsquo;s a bit of logic there, including cleanup in case of failed deployment. This procedure is also run as AllSchemaUser.\nexec [internal].[update_project_deployment_status] @status=7,@end_time=\u0026#39;2017-04-26 20:49:30.8732399 +02:00\u0026#39;,@operation_id=5,@project_version_lsn=2,@description=N\u0026#39;\u0026#39;,@project_format_version=1 When project deployment finishes SSIS Deployment Wizard takes control back. It fires four dynamic SQL statements. First two are identical - they check for deployment operation information (I don;t know why the same code run twice), the third checks for projects in a folder and then iterates through each project to get its data. I have one project in my folder, so it is just one SQL statement. If I had more projects I would get a separate statement for each of them (checked with another project).\n-- ======================= -- Get operation details -- ======================= exec sp_executesql N\u0026#39; --Preparing to access the Catalog object DECLARE @t_catalogs TABLE ( Name sysname COLLATE SQL_Latin1_General_CP1_CI_AS, EncryptionAlgorithm nvarchar(256) COLLATE SQL_Latin1_General_CP1_CI_AS, SchemaVersion int, SchemaBuild nvarchar(256) COLLATE SQL_Latin1_General_CP1_CI_AS, OperationLogRetentionTime int, MaxProjectVersions int, OperationCleanupEnabled bit, VersionCleanupEnabled bit, ServerLoggingLevel int, ServerCustomizedLoggingLevel nvarchar(128)) IF DB_ID(\u0026#39;\u0026#39;SSISDB\u0026#39;\u0026#39;) IS NOT NULL BEGIN INSERT INTO @t_catalogs VALUES( \u0026#39;\u0026#39;SSISDB\u0026#39;\u0026#39;, (SELECT [property_value] FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;ENCRYPTION_ALGORITHM\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SCHEMA_VERSION\u0026#39;\u0026#39;), (SELECT [property_value] FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SCHEMA_BUILD\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;RETENTION_WINDOW\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;MAX_PROJECT_VERSIONS\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS BIT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;OPERATION_CLEANUP_ENABLED\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS BIT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;VERSION_CLEANUP_ENABLED\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SERVER_LOGGING_LEVEL\u0026#39;\u0026#39;), (SELECT [property_value] FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SERVER_CUSTOMIZED_LOGGING_LEVEL\u0026#39;\u0026#39;) ) END SELECT \u0026#39;\u0026#39;IntegrationServices[@Name=\u0026#39;\u0026#39; + quotename(CAST(SERVERPROPERTY(N\u0026#39;\u0026#39;Servername\u0026#39;\u0026#39;) AS sysname),\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;) + \u0026#39;\u0026#39;]\u0026#39;\u0026#39; + \u0026#39;\u0026#39;/Catalog[@Name=\u0026#39;\u0026#39; + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + REPLACE((SELECT Name from @t_catalogs), \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;, \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;) + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + \u0026#39;\u0026#39;]\u0026#39;\u0026#39; + \u0026#39;\u0026#39;/Operation[@Id=\u0026#39;\u0026#39; + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + REPLACE(ops.[operation_id], \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;, \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;) + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + \u0026#39;\u0026#39;]\u0026#39;\u0026#39; AS [Urn], ops.[operation_id] AS [Id], ops.[operation_type] AS [OperationType], CAST (ops.[created_time] AS datetime) AS [CreatedTime], ops.[object_type] AS [ObjectType], ops.[object_id] AS [ObjectId], ops.[object_name] AS [ObjectName], ops.[status] AS [Status], CAST (ops.[start_time] AS datetime) AS [StartTime], CAST (ops.[end_time] AS datetime) AS [EndTime], ops.[caller_sid] AS [CallerSid], ops.[caller_name] AS [CallerName], ops.[process_id] AS [ProcessId], ops.[stopped_by_sid] AS [StoppedBySid], ops.[stopped_by_name] AS [StoppedByName] FROM [SSISDB].[catalog].[operations] AS ops WHERE (ops.[operation_id]=@_msparam_0)and(((SELECT Name from @t_catalogs)=@_msparam_1)and((CAST(SERVERPROPERTY(N\u0026#39;\u0026#39;Servername\u0026#39;\u0026#39;) AS sysname)=@_msparam_2)))\u0026#39;,N\u0026#39;@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000),@_msparam_2 nvarchar(4000)\u0026#39;,@_msparam_0=N\u0026#39;5\u0026#39;,@_msparam_1=N\u0026#39;SSISDB\u0026#39;,@_msparam_2=N\u0026#39;WIN-LFVELR2F095\u0026#39; -- ======================= -- Get operation details (again) -- ======================= exec sp_executesql N\u0026#39; --Preparing to access the Catalog object DECLARE @t_catalogs TABLE ( Name sysname COLLATE SQL_Latin1_General_CP1_CI_AS, EncryptionAlgorithm nvarchar(256) COLLATE SQL_Latin1_General_CP1_CI_AS, SchemaVersion int, SchemaBuild nvarchar(256) COLLATE SQL_Latin1_General_CP1_CI_AS, OperationLogRetentionTime int, MaxProjectVersions int, OperationCleanupEnabled bit, VersionCleanupEnabled bit, ServerLoggingLevel int, ServerCustomizedLoggingLevel nvarchar(128)) IF DB_ID(\u0026#39;\u0026#39;SSISDB\u0026#39;\u0026#39;) IS NOT NULL BEGIN INSERT INTO @t_catalogs VALUES( \u0026#39;\u0026#39;SSISDB\u0026#39;\u0026#39;, (SELECT [property_value] FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;ENCRYPTION_ALGORITHM\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SCHEMA_VERSION\u0026#39;\u0026#39;), (SELECT [property_value] FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SCHEMA_BUILD\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;RETENTION_WINDOW\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;MAX_PROJECT_VERSIONS\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS BIT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;OPERATION_CLEANUP_ENABLED\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS BIT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;VERSION_CLEANUP_ENABLED\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SERVER_LOGGING_LEVEL\u0026#39;\u0026#39;), (SELECT [property_value] FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SERVER_CUSTOMIZED_LOGGING_LEVEL\u0026#39;\u0026#39;) ) END SELECT \u0026#39;\u0026#39;IntegrationServices[@Name=\u0026#39;\u0026#39; + quotename(CAST(SERVERPROPERTY(N\u0026#39;\u0026#39;Servername\u0026#39;\u0026#39;) AS sysname),\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;) + \u0026#39;\u0026#39;]\u0026#39;\u0026#39; + \u0026#39;\u0026#39;/Catalog[@Name=\u0026#39;\u0026#39; + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + REPLACE((SELECT Name from @t_catalogs), \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;, \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;) + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + \u0026#39;\u0026#39;]\u0026#39;\u0026#39; + \u0026#39;\u0026#39;/Operation[@Id=\u0026#39;\u0026#39; + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + REPLACE(ops.[operation_id], \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;, \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;) + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + \u0026#39;\u0026#39;]\u0026#39;\u0026#39; AS [Urn], ops.[operation_id] AS [Id], ops.[operation_type] AS [OperationType], CAST (ops.[created_time] AS datetime) AS [CreatedTime], ops.[object_type] AS [ObjectType], ops.[object_id] AS [ObjectId], ops.[object_name] AS [ObjectName], ops.[status] AS [Status], CAST (ops.[start_time] AS datetime) AS [StartTime], CAST (ops.[end_time] AS datetime) AS [EndTime], ops.[caller_sid] AS [CallerSid], ops.[caller_name] AS [CallerName], ops.[process_id] AS [ProcessId], ops.[stopped_by_sid] AS [StoppedBySid], ops.[stopped_by_name] AS [StoppedByName] FROM [SSISDB].[catalog].[operations] AS ops WHERE (ops.[operation_id]=@_msparam_0)and(((SELECT Name from @t_catalogs)=@_msparam_1)and((CAST(SERVERPROPERTY(N\u0026#39;\u0026#39;Servername\u0026#39;\u0026#39;) AS sysname)=@_msparam_2)))\u0026#39;,N\u0026#39;@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000),@_msparam_2 nvarchar(4000)\u0026#39;,@_msparam_0=N\u0026#39;5\u0026#39;,@_msparam_1=N\u0026#39;SSISDB\u0026#39;,@_msparam_2=N\u0026#39;WIN-LFVELR2F095\u0026#39; -- ======================= -- Get all projects from the folder -- ======================= exec sp_executesql N\u0026#39; --Preparing to access the Catalog object DECLARE @t_catalogs TABLE ( Name sysname COLLATE SQL_Latin1_General_CP1_CI_AS, EncryptionAlgorithm nvarchar(256) COLLATE SQL_Latin1_General_CP1_CI_AS, SchemaVersion int, SchemaBuild nvarchar(256) COLLATE SQL_Latin1_General_CP1_CI_AS, OperationLogRetentionTime int, MaxProjectVersions int, OperationCleanupEnabled bit, VersionCleanupEnabled bit, ServerLoggingLevel int, ServerCustomizedLoggingLevel nvarchar(128)) IF DB_ID(\u0026#39;\u0026#39;SSISDB\u0026#39;\u0026#39;) IS NOT NULL BEGIN INSERT INTO @t_catalogs VALUES( \u0026#39;\u0026#39;SSISDB\u0026#39;\u0026#39;, (SELECT [property_value] FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;ENCRYPTION_ALGORITHM\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SCHEMA_VERSION\u0026#39;\u0026#39;), (SELECT [property_value] FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SCHEMA_BUILD\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;RETENTION_WINDOW\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;MAX_PROJECT_VERSIONS\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS BIT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;OPERATION_CLEANUP_ENABLED\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS BIT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;VERSION_CLEANUP_ENABLED\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SERVER_LOGGING_LEVEL\u0026#39;\u0026#39;), (SELECT [property_value] FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SERVER_CUSTOMIZED_LOGGING_LEVEL\u0026#39;\u0026#39;) ) END SELECT \u0026#39;\u0026#39;IntegrationServices[@Name=\u0026#39;\u0026#39; + quotename(CAST(SERVERPROPERTY(N\u0026#39;\u0026#39;Servername\u0026#39;\u0026#39;) AS sysname),\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;) + \u0026#39;\u0026#39;]\u0026#39;\u0026#39; + \u0026#39;\u0026#39;/Catalog[@Name=\u0026#39;\u0026#39; + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + REPLACE((SELECT Name from @t_catalogs), \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;, \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;) + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + \u0026#39;\u0026#39;]\u0026#39;\u0026#39; + \u0026#39;\u0026#39;/CatalogFolder[@Name=\u0026#39;\u0026#39; + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + REPLACE(folders.[name], \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;, \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;) + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + \u0026#39;\u0026#39;]\u0026#39;\u0026#39; + \u0026#39;\u0026#39;/ProjectInfo[@Name=\u0026#39;\u0026#39; + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + REPLACE(projects.[name], \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;, \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;) + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + \u0026#39;\u0026#39;]\u0026#39;\u0026#39; AS [Urn], projects.[project_id] AS [ProjectId], projects.[folder_id] AS [FolderId], projects.[name] AS [Name], projects.[description] AS [Description], projects.[project_format_version] AS [ProjectFormatVersion], projects.[deployed_by_sid] AS [DeployedBySid], projects.[deployed_by_name] AS [DeployedByName], CAST (projects.[last_deployed_time] AS datetime) AS [LastDeployedTime], CAST (projects.[created_time] AS datetime) AS [CreatedTime], projects.[object_version_lsn] AS [ObjectVersionLsn], projects.[validation_status] AS [ValidationStatus], CAST (projects.[last_validation_time] AS datetime) AS [LastValidationTime] FROM [SSISDB].[catalog].[folders] AS folders INNER JOIN [SSISDB].[catalog].[projects] AS projects ON projects.[folder_id]=folders.[folder_id] WHERE (folders.[name]=@_msparam_0)and(((SELECT Name from @t_catalogs)=@_msparam_1)and((CAST(SERVERPROPERTY(N\u0026#39;\u0026#39;Servername\u0026#39;\u0026#39;) AS sysname)=@_msparam_2)))\u0026#39;,N\u0026#39;@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000),@_msparam_2 nvarchar(4000)\u0026#39;,@_msparam_0=N\u0026#39;Test\u0026#39;,@_msparam_1=N\u0026#39;SSISDB\u0026#39;,@_msparam_2=N\u0026#39;WIN-LFVELR2F095\u0026#39; -- ======================= -- Get project details -- ======================= exec sp_executesql N\u0026#39; --Preparing to access the Catalog object DECLARE @t_catalogs TABLE ( Name sysname COLLATE SQL_Latin1_General_CP1_CI_AS, EncryptionAlgorithm nvarchar(256) COLLATE SQL_Latin1_General_CP1_CI_AS, SchemaVersion int, SchemaBuild nvarchar(256) COLLATE SQL_Latin1_General_CP1_CI_AS, OperationLogRetentionTime int, MaxProjectVersions int, OperationCleanupEnabled bit, VersionCleanupEnabled bit, ServerLoggingLevel int, ServerCustomizedLoggingLevel nvarchar(128)) IF DB_ID(\u0026#39;\u0026#39;SSISDB\u0026#39;\u0026#39;) IS NOT NULL BEGIN INSERT INTO @t_catalogs VALUES( \u0026#39;\u0026#39;SSISDB\u0026#39;\u0026#39;, (SELECT [property_value] FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;ENCRYPTION_ALGORITHM\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SCHEMA_VERSION\u0026#39;\u0026#39;), (SELECT [property_value] FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SCHEMA_BUILD\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;RETENTION_WINDOW\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;MAX_PROJECT_VERSIONS\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS BIT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;OPERATION_CLEANUP_ENABLED\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS BIT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;VERSION_CLEANUP_ENABLED\u0026#39;\u0026#39;), (SELECT CAST([property_value] AS INT) FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SERVER_LOGGING_LEVEL\u0026#39;\u0026#39;), (SELECT [property_value] FROM [SSISDB].[catalog].[catalog_properties] WHERE [property_name] = N\u0026#39;\u0026#39;SERVER_CUSTOMIZED_LOGGING_LEVEL\u0026#39;\u0026#39;) ) END SELECT \u0026#39;\u0026#39;IntegrationServices[@Name=\u0026#39;\u0026#39; + quotename(CAST(SERVERPROPERTY(N\u0026#39;\u0026#39;Servername\u0026#39;\u0026#39;) AS sysname),\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;) + \u0026#39;\u0026#39;]\u0026#39;\u0026#39; + \u0026#39;\u0026#39;/Catalog[@Name=\u0026#39;\u0026#39; + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + REPLACE((SELECT Name from @t_catalogs), \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;, \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;) + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + \u0026#39;\u0026#39;]\u0026#39;\u0026#39; + \u0026#39;\u0026#39;/CatalogFolder[@Name=\u0026#39;\u0026#39; + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + REPLACE(folders.[name], \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;, \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;) + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + \u0026#39;\u0026#39;]\u0026#39;\u0026#39; + \u0026#39;\u0026#39;/ProjectInfo[@Name=\u0026#39;\u0026#39; + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + REPLACE(projects.[name], \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;, \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;) + \u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39;\u0026#39; + \u0026#39;\u0026#39;]\u0026#39;\u0026#39; AS [Urn], projects.[project_id] AS [ProjectId], projects.[folder_id] AS [FolderId], projects.[name] AS [Name], projects.[description] AS [Description], projects.[project_format_version] AS [ProjectFormatVersion], projects.[deployed_by_sid] AS [DeployedBySid], projects.[deployed_by_name] AS [DeployedByName], CAST (projects.[last_deployed_time] AS datetime) AS [LastDeployedTime], CAST (projects.[created_time] AS datetime) AS [CreatedTime], projects.[object_version_lsn] AS [ObjectVersionLsn], projects.[validation_status] AS [ValidationStatus], CAST (projects.[last_validation_time] AS datetime) AS [LastValidationTime] FROM [SSISDB].[catalog].[folders] AS folders INNER JOIN [SSISDB].[catalog].[projects] AS projects ON projects.[folder_id]=folders.[folder_id] WHERE (projects.[name]=@_msparam_0)and((folders.[name]=@_msparam_1)and(((SELECT Name from @t_catalogs)=@_msparam_2)and((CAST(SERVERPROPERTY(N\u0026#39;\u0026#39;Servername\u0026#39;\u0026#39;) AS sysname)=@_msparam_3))))\u0026#39;,N\u0026#39;@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000),@_msparam_2 nvarchar(4000),@_msparam_3 nvarchar(4000)\u0026#39;,@_msparam_0=N\u0026#39;TestSSIS\u0026#39;,@_msparam_1=N\u0026#39;Test\u0026#39;,@_msparam_2=N\u0026#39;SSISDB\u0026#39;,@_msparam_3=N\u0026#39;WIN-LFVELR2F095\u0026#39; And the results are:\nGet operation details Urn IntegrationServices[@Name=\u0026lsquo;WIN-LFVELR2F095\u0026rsquo;]/Catalog[@Name=\u0026lsquo;SSISDB\u0026rsquo;]/Operation[@Id=\u0026lsquo;5\u0026rsquo;] Id 5 OperationType 101 CreatedTime 26.04.2017 20:49 ObjectType 20 ObjectId 1 ObjectName TestSSIS Status 7 StartTime 26.04.2017 20:49 EndTime 26.04.2017 20:49 CallerSid 0x0105000000000005150000003418BD4E479DAA9DB7F7C36DF4010000 CallerName WIN-LFVELR2F095\\Administrator ProcessId 7144 StoppedBySid NULL StoppedByName NULL Get operation details (again) Urn IntegrationServices[@Name=\u0026lsquo;WIN-LFVELR2F095\u0026rsquo;]/Catalog[@Name=\u0026lsquo;SSISDB\u0026rsquo;]/Operation[@Id=\u0026lsquo;5\u0026rsquo;] Id 5 OperationType 101 CreatedTime 26.04.2017 20:49 ObjectType 20 ObjectId 1 ObjectName TestSSIS Status 7 StartTime 26.04.2017 20:49 EndTime 26.04.2017 20:49 CallerSid 0x0105000000000005150000003418BD4E479DAA9DB7F7C36DF4010000 CallerName WIN-LFVELR2F095\\Administrator ProcessId 7144 StoppedBySid NULL StoppedByName NULL Get all projects from the folder Urn IntegrationServices[@Name=\u0026lsquo;WIN-LFVELR2F095\u0026rsquo;]/Catalog[@Name=\u0026lsquo;SSISDB\u0026rsquo;]/CatalogFolder[@Name=\u0026lsquo;Test\u0026rsquo;]/ProjectInfo[@Name=\u0026lsquo;TestSSIS\u0026rsquo;] ProjectId 1 FolderId 1 Name TestSSIS Description ProjectFormatVersion 1 DeployedBySid 0x0105000000000005150000003418BD4E479DAA9DB7F7C36DF4010000 DeployedByName WIN-LFVELR2F095\\Administrator LastDeployedTime 03.05.2017 09:32 CreatedTime 23.04.2017 12:32 ObjectVersionLsn 25 ValidationStatus N LastValidationTime NULL Get project details Urn IntegrationServices[@Name=\u0026lsquo;WIN-LFVELR2F095\u0026rsquo;]/Catalog[@Name=\u0026lsquo;SSISDB\u0026rsquo;]/CatalogFolder[@Name=\u0026lsquo;Test\u0026rsquo;]/ProjectInfo[@Name=\u0026lsquo;TestSSIS\u0026rsquo;] ProjectId 1 FolderId 1 Name TestSSIS Description ProjectFormatVersion 1 DeployedBySid 0x0105000000000005150000003418BD4E479DAA9DB7F7C36DF4010000 DeployedByName WIN-LFVELR2F095\\Administrator LastDeployedTime 03.05.2017 09:32 CreatedTime 23.04.2017 12:32 ObjectVersionLsn 25 ValidationStatus N LastValidationTime NULL The last operation in deployment process is run by SSIS ISServerExec Crash Handler. It\u0026rsquo;s almost the same command as the previous by SSIS ISServerExec:\nexec [internal].[update_project_deployment_status] @status=4,@end_time=\u0026#39;2017-04-26 20:49:32.7510757 +02:00\u0026#39;,@operation_id=5,@project_version_lsn=2,@description=N\u0026#39;\u0026#39; The difference is it doesn\u0026rsquo;t contain @project_format_version parameter, sets different @status (4 = failed) and - of course - @end_time. And - it does nothing.\nWell, it would set deployment status as failed if there was something wrong with the process, but the internal.update_project_deployment_status has a condition - it runs only:\nIF EXISTS (SELECT [operation_id] FROM [internal].[operations] WHERE ([status] = 5 OR [status] = 2 OR [status] = 4) AND [operation_id] = @operation_id AND [operation_type] = 101) So if the process finished properly (@status = 7) nothing happens. I made the tests few times and sometimes te last step of ISServerExec process finished before the end of the catalog.deploy_project procedure.\nThe last thing to check is why IsDeploymentWizard finishes deployment so quickly and what happens behind the scenes.\nTo answer for the latter - it does the same steps as manual deployment. And how it does it so quickly? It just doesn\u0026rsquo;t wait for the outcome before returning to console. But it still runs it the background. Look at the cmd.exe process on the top and sqlservr.exe process at the bottom. Click the picture to see the details.\nAnd that\u0026rsquo;s all. So:\n/Silent mode just returns to the console right after the process start deployment involves two processes: IsDeploymentWizard and ISServerExec the IsDeploymentWizard waits until ISServerExec finishes its work checking the status within the WHILE loop on 1 second intervals if the proces finished with success the last operation by ISServerExec Crash handler does nothing. The communication between SSISDB and ISServerExec (and ISServerExec itself) is a great candidate for another post in near future.\n",
"ref": "/2017/05/03/what-happens-during-ssis-deployments/"
},{
"title": "About",
"date": "",
"description": "",
"body": " photo: Rodney Kidd, SQL Saturday Oslo 2019\nHi! My name is Bartosz and I\u0026rsquo;m from Poland. I was blogging from time to time in polish, but during SQLBits 2017 I got so much inspiration, met a lot of new friends (and stopped being afraid to speak my terrible english) that I started writing an english-language-based blog. The main goal was to level up my english laguage, the second to show what I have to offer to a broader audience.\nSince then I was brave enough to attend more conferences in english and even become a speaker!\nIf you want to contact me, tell how bad my english is, get in touch after conferences (or group meetings) or would like to be my proofreader - I will gladly accept all feedback: b (dot) ratajczyk (at) gmail (dot) com. Or just ping me on twitter.\nBack then, it was just a start. As Yoda said: \u0026ldquo;Do, or do not. There is no try\u0026rdquo;.\nDo.\n",
"ref": "/about/"
},{
"title": "SQLBits 2017",
"date": "",
"description": "",
"body": "This year I attended SQLBits for the first time. I wanted to go to the event last year, but didn\u0026rsquo;t manage, so this time it was a must. And I have to tell you - great event in all aspects.\nFirst - the content\rI wanted to learn something new, or look at the things from different angle. Touch something, break and repair. So I went for the workshop by Mark Broadbent (b|t) about high availability with clusters and Always On Availability Groups. I did play with clusters before (using VirtualBox), this time we used Hyper-V and Windows Core servers what was new for me. Building from the ground up, configuring, fighting with internet connection, rebuilding machines (well, this was not in the workshop plan - I managed to destroy the virtual machine myself) - it was what I expected.\nOn the second day I went to Matt Masson\u0026rsquo;s all-day presentation about SSIS. I\u0026rsquo;m working a lot with ETL packages so I was curious if there is a room to improve. And I was glad to find out we are using best practices already, but also to learn some new interesting patterns to use. One of them - cascade use of OLEDB Destinations with additional File Destination for error reporting or double use of Lookup transformations - one with Full cache, one with Partial cache. And you know what was surprising? The session had no demos at all. Only slides. And I was awake all the time!\nThe next two days were full day presentations and I changed my mind few times before finally decided which session to go. Tough choice! Each time I\u0026rsquo;ve learned something new and got some inspiration. Once I took the 400 level sessions that extended my knowledge of the subject, got to know about new things (like new query optimizer features) and sometimes I went for the things I know almost nothing about - like R language - or not interested that much - like new things in reporting. And I was not disappointed.\nIf I had to select one session, that made a biggest impact on me it would be Normalization beyond the third normal form by Hugo Kornellis (b|t). Very deep session with great examples about data design. I never thought about data the way that Hugo presented and it was awesome. I encourage everyone to watch it once it will be published. Not the easiest material, but worth your time, I promise.\nSecond - the people\rNext thing that gets me to the conferences is the people. I want to see again the folks I\u0026rsquo;ve met on previous conferences, get to know new friends, colleagues. All of them are great, willing to help if you have a problem. You want to talk about the issue you are currently facing in your job? Great, lets do it. You want more details about the session? You\u0026rsquo;re welcome. Party together? No problem! Want some beer? Here you go. Talks about presentations, work matters, problems solved for clients - everything is so enriching. Have you seen this hashtag #sqlfamily? It really is a big, extraordinary family.\nThird - the party\rIt\u0026rsquo;s not the main reason to go for the conference, but an excelent excuse to get to know new people by a glass of beer or something stronger. But it was not just a meeting with alcohol - pub games for people gathered by the round tables was \u0026ldquo;bulls eye\u0026rdquo; for me. Sometimes I didn\u0026rsquo;t understand the questions, some of them were too specific, but almost already someone clarified what was it about. And I was amazed by the things the people knew. Maiden name of Marge Simpson? Name all the girls from \u0026ldquo;Sex ant the city\u0026rdquo;? Know the name of the disco hit and the performer after just few seconds? Come on! The next day - party with \u0026lsquo;70 - \u0026lsquo;80 disco theme. Well, although I have not seen many people dancing - the party was very relaxing. And Redgate\u0026rsquo;s photo booth was under the siege.\nSQLDBA with a bear(d)\nFourth - the sponsors\rI always visit the sponsors. Of course, their gadgets are nice, but this time I had plenty of time to see their products. On another conferences I had about 15 minutes between sessions, so not that much time to get familiar with the product, This time was different. Longer breaks, regular sessions mixed with extended and suddenly I could see full demos and have a conversation about the products I was interested in. Also for the first time I took the folders to actually read it!\nFifth - the venue\rThe only thing that didn\u0026rsquo;t convince me were the Domes. Sometimes you could hear what is happening in another one. But the place in general was great Big rooms for the workshops, a lot of place for networking - what one could want more?\nThe takeaway\rJohann van den Brink asked me what got me inspired during SQLBits. Few things. First - the tools that sponsors shown me are really useful. I will have to try SQLClone (by Redgate) and compare it to containers approach with Docker for testing purposes. Also LegiTest (by Pragmatic Works) looks promising for testing SSIS packages. Second - definitely will start using more Powershell tools. Andre Kamman (b|t) demonstrated how easy is to build full test lab using few lines of code with Autolab. It will be a great simplification of the proces that we went thoroughly with Mark Broadbent during the workshop. It\u0026rsquo;s great to know how to build your cluster from scratch, but it\u0026rsquo;s also great to simplify the proces. Also some of the SSIS design patterns I saw on Matt\u0026rsquo;s presentation will start to use in development. What else? Some time ago I started to work more closely with testing, Hugo Kornelis inspired me to look closer at the design. So I would say it\u0026rsquo;s like back to basics, revisited.\nTo sum things up. I really ejoyed the whole conference. If possible, I will go again next year. I\u0026rsquo;m already sure I won\u0026rsquo;t be disappointed.\n",
"ref": "/2017/04/16/sqlbits-2017/"
},{
"title": "Speaking",
"date": "",
"description": "",
"body": "(work in progress)\rEvent Date Session 24.01.2020. First steps with SQL Server on Docker 28.09.2019. How does the recursive CTE work? 14.09.2019. First steps with SQL Server on Docker 31.08.2019. First steps with SQL Server on Docker 14.05.2019. SQL Server + Docker – pierwsze kroki 09.02.2019. Testuj swoje pakiety SSIS z ssisUnit 18.01.2019. Start testing your SSIS packages with ssisUnit 16.10.2018. Start testing your SSIS packages with ssisUnit 13.10.2018. Start testing your SSIS packages with ssisUnit 06.10.2018. Start testing your SSIS packages with ssisUnit 01.09.2018. Start testing your SSIS packages with ssisUnit 16.05.2018. Zacznij wreszcie testować swoje pakiety SSIS 07.10.2017. Automate your SSIS deployment process 16.05.2017. Zautomatyzuj swój proces wdrażania projektów SSIS Stickers\r",
"ref": "/speaking/"
}]