<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no">
<title>Have Your Cake and Synthesize it Too</title>
<link rel="stylesheet" href="dist/reset.css">
<link rel="stylesheet" href="dist/reveal.css">
<link rel="stylesheet" href="dist/theme/white.css">
<!-- Theme used for syntax highlighted code -->
<link rel="stylesheet" href="plugin/highlight/monokai.css">
<style>
.reveal .footer {
position: absolute;
bottom: 1em;
left: 1em;
font-size:0.5em;
}
.reveal .slides {
width: 65vw !important;
}
.reveal .controls {
font-size: 10px !important;
}
</style>
</head>
<body>
<div class="reveal">
<div class="footer">
<img src="img/conveningFooter.svg" alt="Footer banner for SDP standard slide deck">
</div>
<div class="slides">
<section>
<div style="margin-left: .5%; width: 20vw !important; margin-top: -5%; margin-bottom: 1px;">
<img src="img/CEPRlogo.svg" alt="Center for Education Policy Research Logo" style="margin-top: 5%; margin-bottom: 1px;">
</div>
<div style="width: 67.5vw; margin-top: -2%; margin-left: -9%;">
<img src="img/banner.jpeg" alt="Convening standard banner for intro slide" style="width: 100%; height: 150px; margin-left: 7.5%; margin-bottom: 0px;">
</div>
<h3>Have Your Cake & Synthesize it Too</h3>
<h4><a href="https://github.com/wbuchanan">Billy Buchanan, Ph.D.</a></h4>
<h5>Senior Research Scientist, <a href="https://www.sagcorp.com">SAG Corporation</a></h5>
<div style="margin-top: -2.5%; margin-bottom: -1.5%;">
<img src="img/saglogo.png" alt="SAG Corporation Logo" height="75px">
<img src="img/sdvosblogo.png" alt="Service Disabled Veteran Owned Small Business Emblem" height="75px">
</div>
<h5><a href="https://wbuchanan.github.io/sdpConvening2023" target="_blank">https://wbuchanan.github.io/sdpConvening2023</a></h5>
<aside class="notes">
<p>Disclaimer: This talk does not constitute an endorsement or position of SAG Corporation. It is solely and entirely the opinion of the speaker.</p>
</aside>
</section>
<!--
Use the next slide to explain the drawing on index cards
-->
<section>
<h3>Get Ready for a Game</h3>
<ul>
<li>Get a blank index card and something to draw with</li>
<li>Draw your best example of the object listed on the index card in the middle of the table (or in your group)</li>
<li>Once everyone has created their example, collect all the examples and put the topic index card on top</li>
</ul>
</section>
<section>
<section data-autoslide="2500">
<h2>Key Objectives</h2>
</section>
<section>
<ol>
<li>Distinguish between user- and model-based synthesis</li>
<li>Gain familiarity with some current synthesis methods</li>
<li>Understand the applications of synthesis methods</li>
</ol>
<aside class="notes">
<p><b>Won't get to discuss model verification/testing/checking, but tools exist in all the software mentioned/recommended.</b></p>
<ul>
<li>We'll only discuss a couple types of models that you can think of in two groups: shallow and deep learning methods</li>
<li>On the shallow side mention CART for general synthesis and SMOTE for small cell size imputation</li>
<li>On the deep side mention CPAR and CTGAN</li>
</ul>
</aside>
</section>
</section>
<section>
<section data-autoslide="3500">
<h2>But first, some questions about the problems we face...</h2>
</section>
<section data-autoslide="300000">
<h2>What are the biggest barriers you face to sharing data with external stakeholders?</h2>
<aside class="notes">
<p><b>Prompt for Privacy Issues</b></p>
<ul>
<li>Remember to include a prompt about privacy-related challenges/FERPA</li>
<li>Could also prompt about uncertainty around the analyses of external partners</li>
<li>What about cell size suppression from the analyses?</li>
</ul>
</aside>
</section>
<section data-autoslide="300000">
<h2>How do you currently analyze data for your smallest demographic groups?</h2>
<aside class="notes">
<p><b>Prompt for small cell size and/or standard error issues</b></p>
<ul>
<li>Prompt for answers related to methods</li>
<li>Follow up could be how confident are you in analyses related to smaller demographic groups</li>
</ul>
</aside>
</section>
</section>
<section>
<section>
<h2>Synthetic Data: The Origin Stories</h2>
<aside class="notes">
<p><b>Imputation and Computer Vision in order</b></p>
<ul>
<li>Two different points of origin: imputation and computer vision</li>
<li>We'll talk about imputation first and comp vision second</li>
</ul>
</aside>
</section>
<section data-autoslide="2500">
<h2>In the beginning, there was imputation...</h2>
</section>
<section>
<ul>
<li>Statistical agencies want to share data broadly while protecting privacy</li>
<li>First references to synthetic data come from Statistical Disclosure Control</li>
<li>Initially, synthetic data are a form of multiply imputed data</li>
<li>Brought to you by the makers of Multiple Imputation: Donald Rubin</li>
</ul>
<aside class="notes">
<p><b>Need to balance accessibility with privacy concerns</b></p>
<ul>
<li>Problem: statistical agencies collect tons of data that can and should be used for research/analysis, but also need to protect the privacy of respondents.</li>
<li>Ask for a show of hands: who has heard of or used multiple imputation or other forms of imputation?</li>
<li>Imputation started as a way to solve missing data problems</li>
<li>Rubin also figured that you could use imputations to create synthetic records that preserve the properties of the data as well</li>
</ul>
</aside>
</section>
<section data-autoslide="2500">
<h2>and then computer scientists came along...</h2>
</section>
<section>
<ul>
<li>Computer vision researchers find they need more image data</li>
<li>Early methods are based on lossy compression techniques</li>
<li>In 2014, Goodfellow and his colleagues develop Generative Adversarial Networks (GAN)</li>
<li>Now, in addition to images, GANs are synthesizing tabular data</li>
</ul>
<aside class="notes">
<p><b>Models need lots of data, but lots of data is costly/difficult</b></p>
<ul>
<li>Problem: initially computer vision researchers used simple transformations of images, but found they didn't provide enough variance.</li>
<li>Sourcing real images is costly and time consuming, so they started to look for ways to mimic the properties of the underlying distribution of the data.</li>
<li>Lossy compression is a data compression technique where some information is lost, but the majority of the information is retained.</li>
<li>Variational Autoencoders were a first attempt, but would often inject artifacts into the images that made them easily detectable.</li>
<li>Generative Adversarial Networks quickly changed the game and made it possible to generate very realistic images.</li>
</ul>
</aside>
</section>
<section data-background-iframe="https://www.whichfaceisreal.com/index.php">
<aside class="notes">
<p><b>A quick game of identify the fake, will be similar to game later.</b></p>
<ul>
<li>Baked into each of these problems is a trade-off between utility and similarity, or privacy.</li>
<li>As the utility of the synthetic data is maximized, the similarity to the source data also increases. You'll hear about this trade-off more as well.</li>
<li>Despite the two branches trying to solve different problems initially, there is now greater understanding of how data synthesis can solve both problems simultaneously.</li>
</ul>
</aside>
</section>
</section>
<section>
<section data-autoslide="3500">
<h2>What is synthetic data?</h2>
</section>
<section>
<ul>
<li>Data created by a computer</li>
<li>Can be created by: </li>
<ul>
<li>User instructions/specifications</li>
<li>Models fitted to observed data</li>
</ul>
<li>It is <b>not</b> synthetic control</li>
</ul>
<aside class="notes">
<p><b>Define and example of User Defined and Model Defined approaches.</b></p>
<ul>
<li>Call user instructions the "User Defined" approach</li>
<li>Can reference some of the simulation tools in OpenSDP as examples and explain that it isn't the focus of this discussion</li>
<li>Call models the "Model Defined" approach</li>
<li>Synthetic control is related to reweighting more than data synthesis.</li>
</ul>
</aside>
</section>
<section>
<h3>Model-Defined Approaches</h3>
<ul>
<li>Imputation</li>
<li><a href="https://www.synthpop.org.uk/get-started.html" target="_blank">Shallow Learning</a></li>
<li><a href="https://arxiv.org/pdf/1106.1813.pdf" target="_blank">SMOTE - Synthetic Minority Oversampling TEchnique</a></li>
<li><a href="https://docs.sdv.dev/sdv/" target="_blank">Deep Learning</a></li>
</ul>
<aside class="notes">
<p><b>Focus on Shallow/Deep Learning, Imputation Difficult/Costly, SMOTE alternatives</b></p>
<ul>
<li>Imputation is still used to generate synthetic data, particularly by the Census Bureau. It is fairly labor intensive as it requires specifying a model for each variable that will be synthesized. It also requires end-users to have some familiarity with analyzing multiply imputed data. Importantly, it can also include ML modeling techniques.</li>
<li>Shallow learning methods have become the de facto standard in the statistical disclosure control world. It is similar to imputation because models are used for all variables, but requires less work because the models are determined algorithmically. The code example I included in the GitHub repository uses this approach and highlights some things to consider.</li>
<li>I gave SMOTE its own callout because it has more of a niche application. SMOTE is really useful for increasing the sample size of small cell groups and interpolates from the sample space of the observed data to fill in the gaps, so to speak.</li>
<li>Deep learning approaches provide some additional flexibility that doesn't directly exist in the other methods, albeit at the cost of increased computational time. While it is possible to use DL approaches with CPU only, the time it will take is likely to be substantially longer compared to using a GPU/TPU.</li>
<li>We'll talk a bit more about the shallow and deep learning methods. Some deep learning methods can also be used to solve the same problem SMOTE attempts to solve.</li>
</ul>
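The SMOTE idea mentioned above can be sketched in a few lines: pick a minority-class point, pick one of its nearest minority-class neighbors, and interpolate a new point on the segment between them. This is a minimal numpy sketch for intuition, not the reference imbalanced-learn implementation; the array names are illustrative.

```python
import numpy as np

def smote_sample(minority, k=3, n_new=5, seed=0):
    """Interpolate synthetic points between minority-class
    observations and their nearest minority-class neighbors."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        x = minority[i]
        # Distances from x to every other minority point
        d = np.linalg.norm(minority - x, axis=1)
        d[i] = np.inf
        nbrs = np.argsort(d)[:k]           # k nearest minority neighbors
        nb = minority[rng.choice(nbrs)]
        lam = rng.random()                 # interpolation weight in [0, 1)
        out.append(x + lam * (nb - x))     # point on the segment x -> nb
    return np.array(out)

minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
synth = smote_sample(minority)
```

Because each synthetic record is a convex combination of two observed records, the new points stay inside the region spanned by the observed minority sample.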
</aside>
</section>
<section>
<h3><a href="https://www.synthpop.org.uk/get-started.html" target="_blank">Shallow Learning</a></h3>
<ul>
<li>Most common/popular - Classification & Regression Trees (CART)</li>
<li>First variable is sampled from the observed data; the remaining variables are predicted from models</li>
<li>CART is prone to overfitting</li>
<li>As the number of variables increases, performance can be worse</li>
<li>Requires reshaping longitudinal data to wide format</li>
</ul>
<aside class="notes">
<p><b>Good for cross-sectional data, not so great for longitudinal or specific groups.</b></p>
<ul>
<li>The heading on this slide is a link to the most popular software package for this type of data synthesis</li>
<li>The first variable in the visit sequence is sampled from the data, so make sure the first variable is not a student identifier, and/or specify the visit sequence to ensure the first variable isn't going to be one that will release PII.</li>
<li>Once the first variable is sampled, the model building begins. A model is built predicting the second variable to be synthesized from the first variable. The first "synthetic" variable is then passed to the model to predict the values of the second synthetic variable. This continues until you get to the last variable in the data set.</li>
<li>Make sure you specify model hyperparameters for the minimum cell size to stop at to help prevent overfitting.</li>
<li>May not perform well for longitudinal data due to a combination of error propagation and/or the process of modeling a single variable at a time.</li>
</ul>
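The sequential process described above can be illustrated with a toy stand-in: the first variable is bootstrap-sampled from the observed data, and each later variable is drawn conditional on the columns already synthesized. Here quantile bins play the role of CART leaves, a deliberate simplification; real packages such as synthpop fit actual trees and then resample observed values within each leaf.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy observed data: x1 continuous, x2 depends on x1
n = 500
x1 = rng.normal(0, 1, n)
x2 = 2 * x1 + rng.normal(0, 0.5, n)

# Step 1: the first variable in the visit sequence is
# bootstrap-sampled directly from the observed data.
syn1 = rng.choice(x1, size=n, replace=True)

# Step 2: each later variable is drawn from a model fitted on the
# observed data, conditioned on the already-synthesized columns.
# Coarse decile bins stand in for CART leaves: within a "leaf",
# synthesis resamples the observed values of x2.
edges = np.quantile(x1, np.linspace(0, 1, 11))
obs_bin = np.clip(np.searchsorted(edges, x1) - 1, 0, 9)
syn_bin = np.clip(np.searchsorted(edges, syn1) - 1, 0, 9)
syn2 = np.array([rng.choice(x2[obs_bin == b]) for b in syn_bin])
```

Because `syn2` is drawn from observed `x2` values in the same "leaf" as each synthetic `x1`, the synthetic columns preserve the observed relationship without copying whole records, which is the core of the CART-based approach.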
</aside>
</section>
<section>
<img src="img/mlProcess.svg" alt="Diagram to illustrate how the shallow learning methods generate synthetic data.">
<aside class="notes">
<ul>
<li>Walk the audience through the diagram</li>
</ul>
</aside>
</section>
<section>
<h3><a href="https://docs.sdv.dev/sdv/" target="_blank">Deep Learning</a></h3>
<ul>
<li>Two primary - <a href="https://docs.sdv.dev/sdv/single-table-data/modeling/synthesizers/ctgansynthesizer" target="_blank">Conditional Tabular GAN (CTGAN)</a> and <a href="https://docs.sdv.dev/sdv/sequential-data/modeling/parsynthesizer" target="_blank">Conditional Probabilistic AutoRegressive (CPAR)</a></li>
<li>CTGAN can be used like SMOTE</li>
<li>CPAR is designed for longitudinal data</li>
<li>Both are slow on CPU and benefit from GPU or TPU</li>
<li>No observed data is sampled into the synthetic data</li>
</ul>
<aside class="notes">
<p><b>Can provide alternative to SMOTE and better for longitudinal, but can take a while/use a lot of resources</b></p>
<ul>
<li>The heading on this slide is a link to the software package I would recommend for this type of data synthesis</li>
<li>All output from these models is completely synthetic.</li>
<li>CTGAN is conditional because instead of starting from completely random noise it starts with random noise and a feature/variable that also appears in the real data. So you might have an indicator for biological sex in the data and that would be passed to the generator to help it "learn" how to generate samples of males and females. The conditioning can be on multiple variables as well. This is why it can work as a replacement for SMOTE, since you can generate more samples for specific groups to increase sample size and balance in the dataset.</li>
<li>CTGAN allows you to provide information about the underlying architecture in terms of the number of layers and nodes in each layer. Giving the generator more capacity than the discriminator will help with convergence. Additionally, as you use more layers you may want to consider decreasing the number of nodes. CTGAN will be good for cross-sectional data.</li>
<li>CPAR is impressive to say the least. DL typically does not do well with missingness, and DL for time series does not do well with irregular time intervals or unbalanced panels. CPAR is fully capable of synthesizing both balanced and unbalanced panel data. However, the computations under the hood are much slower due to the autoregressive nature of the architecture.</li>
</ul>
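The "conditional" part of CTGAN described above can be pictured at the generator's input: random noise concatenated with a one-hot encoding of the conditioning variable, so the network learns to produce samples for a requested group. This is a minimal sketch of that input construction, with illustrative shapes and names, not the SDV internals.

```python
import numpy as np

rng = np.random.default_rng(0)

noise_dim, n_groups, batch = 16, 2, 8   # e.g. two demographic groups

# Random noise, one row per sample to generate
z = rng.normal(size=(batch, noise_dim))

# Requested group for each sample, encoded as a one-hot condition vector
groups = rng.integers(0, n_groups, size=batch)
cond = np.eye(n_groups)[groups]

# The generator sees noise and condition together; requesting extra
# rows for a rare group is how CTGAN can stand in for SMOTE.
gen_input = np.concatenate([z, cond], axis=1)
```

Conditioning on multiple variables just means concatenating more one-hot (or otherwise encoded) condition vectors onto the noise.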
</aside>
</section>
<section>
<h3>Considerations</h3>
<ul>
<li>Privacy Protection vs Data Utility</li>
<li>Cross-Sectional vs Longitudinal Data</li>
<li>Time Sensitivity/Computational Resources</li>
<li>Validation Server</li>
</ul>
<aside class="notes">
<p><b>Address each bullet individually and provide recommendations.</b></p>
<ul>
<li>In short, the more utility the synthetic data retains, the lower the privacy protection. To see why, consider that if you fully maximized the utility of data synthesized from a source, it would reproduce the original source. Additionally, both synthpop and the Synthetic Data Vault include tools to assess the utility of your synthetic data and have varying capabilities for addressing privacy metrics.</li>
<li>For cross-sectional data synthesis, I would definitely recommend using the shallow learning approach since it is highly computationally efficient and does a good job. If you are trying to improve a predictive model that isn't working so well for small demographic groups, you could use SMOTE or CTGAN to generate synthetic records for those groups which should improve the predictive accuracy. For longitudinal data I would recommend using CPAR when possible. It will synthesize unbalanced panels and balanced panels, while accomplishing the same with other approaches is significantly more challenging and time consuming.</li>
<li>If you need to generate synthetic data quickly, shallow learning methods are likely to be the only viable solution. The deep learning methods can compete in clock time, but to do so requires substantial hardware. If the synthesis doesn't need to be performed as quickly as possible, you can leverage the power/flexibility of deep learning approaches.</li>
<li>If providing synthetic data to outside organizations, you may want to consider setting up some type of validation server. It might just be analysts running code that was developed by external partners and then screening results for privacy policy requirements. While blending synthetic data with actual observations helps predictive models become more robust, you shouldn't rely on inference from synthetic data (particularly since you know it will differ from the protected data in some way).</li>
</ul>
</aside>
</section>
<section>
<h3 style="margin-top: -5%;">Who is using synthetic data?</h3>
<table style="font-size: 1.75rem;">
<thead>
<tr>
<th>Agency</th><th>Synthetic Product</th>
</tr>
</thead>
<tbody>
<tr>
<td>Census Bureau</td><td><a href="https://www.census.gov/programs-surveys/sipp/guidance/sipp-synthetic-beta-data-product.html" target="_blank">SIPP Synthetic Beta</a></td>
</tr>
<tr>
<td>Census Bureau</td><td><a href="https://www.census.gov/programs-surveys/ces/data/public-use-data/synthetic-longitudinal-business-database.html" target="_blank">Synthetic Longitudinal Business Database</a></td>
</tr>
<tr>
<td>CMS</td><td><a href="https://www.cms.gov/Research-Statistics-Data-and-Systems/Downloadable-Public-Use-Files/SynPUFs#:~:text=Medicare%20Claims%20Synthetic%20Public%20Use%20Files%20(SynPUFs)%20were%20created%20to,a%20smaller%20number%20of%20variables." target="_blank">Medicare Claims SynPUF</a></td>
</tr>
<tr>
<td>IRS</td><td><a href="https://www.urban.org/research/publication/synthetic-supplemental-public-use-file-low-income-information-return-data-methodology-utility-and-privacy-implications" target="_blank">Low-Income Info Returns SynPUF</a></td>
</tr>
<tr>
<td>IRS</td><td><a href="https://link.springer.com/chapter/10.1007/978-3-031-13945-1_14" target="_blank">Individual Tax Payer SynPUF</a></td>
</tr>
<tr>
<td>Veterans' Affairs</td><td><a href="https://github.com/department-of-veterans-affairs/PseudoVet" target="_blank">PseudoVet</a></td>
</tr>
<tr>
<td>Veterans' Health Administration</td><td><a href="https://www.mdclone.com/case-study-veterans-health-administration" target="_blank">MDClone</a></td>
</tr>
<tr>
<td>Maryland SLDS</td><td>See <a href="https://eric.ed.gov/?id=EJ1236531" target="_blank">Bonnéry et al. (2019)</a></td>
</tr>
</tbody>
</table>
<aside class="notes">
<p><b>If synthesis is allowed with more restricted data, it shouldn't be a problem for less restricted data.</b></p>
<ul>
<li>In addition to these, there have also been some experiments in the education sector as well.</li>
<li>The importance of Census and IRS, however, is that unlike FERPA, the laws governing privacy at the Census Bureau (Title 13 of the US Code) and the IRS (Title 26 of the US Code) carry criminal and financial penalties for disclosure. If synthesis is good enough for these agencies to satisfy the higher standards of privacy protection they implement, it is good enough to satisfy the requirements of FERPA.</li>
<li>Similarly, the project implemented by the Veterans' Health Administration needs to satisfy HIPAA requirements for privacy which are also more stringent than FERPA.</li>
<li>While some of the projects were intended for public use/consumption, others are intended solely for internal or researcher use, at least currently.</li>
</ul>
</aside>
</section>
</section>
<!-- Slides for GAN Game here -->
<section>
<section>
<h2>Want to play a game?</h2>
<aside class="notes">
<p><b>Option for Code examples or Game. There are instructions on your tables and we'll walk through things too.</b></p>
<ul>
<li>To help you understand how the deep learning methods work, I've come up with a game where your tables/groups will become your own living AI.</li>
<li>Things will move a little fast in the game, but that's because humans are way better at one-shot learning compared to machines.</li>
<li>It will also move fast to better simulate some of what happens under the hood in deep learning models</li>
</ul>
</aside>
</section>
<section data-autoslide="30000">
<h3>The phases of the game</h3>
<ol>
<li>Draw phase</li>
<li>Evaluate phase</li>
<li>Telephone phase</li>
</ol>
<aside class="notes">
<p><b>Explain how the game works</b></p>
<ul>
<li>In the draw phase, the critic needs to keep their eyes closed. The rest of the group will take turns contributing to drawing a synthetic example of the image from your prompt.</li>
<li>Each "artist" will only have a short amount of time to draw before passing things on to the next artist.</li>
<li>Once the last "artist" in the group has added their drawing they will place the team's example and an example from the other group in front of the critic</li>
<li>During the evaluate phase, the critic's job is to determine which drawing is the real example (from the other team/group) and which is the synthetic drawing created by their table/group.</li>
<li>During the telephone phase, the critic whispers to the last artist their decision and explains why they made that decision. That artist then whispers the info to the artist that contributed before them, until it gets back to the first artist.</li>
<li>Once the telephone phase is complete, we start over again.</li>
</ul>
</aside>
</section>
<section>
<h3 style="">How GANs work</h3>
<img src="img/dlProcess.svg" style="margin-top: -2.5%; width: 50vw; height: 25vw;" alt="Image showing process of data flow for Generative Adversarial Networks">
<aside class="notes">
<p><b>Explain how the architecture works while walking through the diagram</b></p>
<ul>
<li>Two neural nets: a Generator and a Discriminator</li>
<li>Samples are synthesized by the generator</li>
<li>The discriminator evaluates which samples are real/fake</li>
<li>The goal of the discriminator is to get better at identifying synthetic data</li>
<li>The goal of the generator is to produce synthetic data that appears real</li>
<li>Each layer of the generator contributes a bit to the synthesis of the data. The final layer of the generator produces a sample with the same structure as the real data, but the properties are going to be bad at first.</li>
<li>The discriminator is basically a logistic regression model that is trying to correctly predict whether the samples are real/synthetic. The goal is for the generator to create samples that are indistinguishable from real data, while the discriminator is trying to get better and better at identifying the synthetic samples.</li>
<li>After the discriminator does its work, the error from the model is pushed back through the layers in reverse order to improve the quality of the synthetic samples.</li>
</ul>
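The loop described above can be boiled down to a toy one-dimensional GAN: an affine generator transforms noise, a logistic-regression discriminator scores real versus synthetic samples, and the discriminator's error is pushed back to update both networks. This is a minimal numpy sketch for intuition only, not a practical GAN; all parameter choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda t: 1.0 / (1.0 + np.exp(-t))   # logistic function

# Generator G(z) = a*z + c; Discriminator D(x) = sigmoid(w*x + b)
a, c = 1.0, 0.0            # generator parameters
w, b = 0.1, 0.0            # discriminator parameters
lr, batch = 0.02, 128

for _ in range(3000):
    real = rng.normal(3.0, 1.0, batch)     # "real" data ~ N(3, 1)
    z = rng.normal(0.0, 1.0, batch)        # noise fed to the generator
    fake = a * z + c

    # Discriminator step: learn to label real as 1 and fake as 0
    p_r, p_f = sig(w * real + b), sig(w * fake + b)
    w -= lr * (np.mean((p_r - 1) * real) + np.mean(p_f * fake))
    b -= lr * (np.mean(p_r - 1) + np.mean(p_f))

    # Generator step: the discriminator's error flows back through its
    # weights, moving the fakes toward where D assigns high probability
    p_f = sig(w * (a * z + c) + b)
    a -= lr * np.mean((p_f - 1) * w * z)
    c -= lr * np.mean((p_f - 1) * w)

# After training, the generator's offset c drifts toward the real mean
```

The adversarial dynamic is visible in the two update blocks: the discriminator's gradient sharpens its real/fake boundary, while the generator's gradient (driven by the same discriminator output) drags the synthetic samples across that boundary.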
</aside>
</section>
</section>
<section>
<section data-autoslide="3500">
<h1>Code Examples</h1>
</section>
<section data-autoslide="3500">
<h1>CART-Based Synthesis</h1>
</section>
<section>
<iframe src="mlSynthesisExample.html" style="width: 100vw; height: 500px;"></iframe>
<aside class="notes">
<p><b>Walk through the code</b></p>
</aside>
</section>
<section data-autoslide="3500">
<h1>CTGAN-Based Synthesis</h1>
</section>
<section>
<iframe src="ctganSynthesisExample.html" style="width: 75%; height: 500px;"></iframe>
<aside class="notes">
<p><b>Walk through the code</b></p>
</aside>
</section>
<section data-autoslide="3500">
<h1>CPAR-Based Synthesis</h1>
</section>
<section>
<iframe src="cparSynthesisExample.html" style="width: 75%; height: 500px;"></iframe>
<aside class="notes">
<p><b>Walk through the code</b></p>
</aside>
</section>
</section>
<section>
<h2>You've Got Questions,</h2>
<h2>I've <span style="font-size: 0.75rem;">(hopefully)</span> Got The Answers</h2>
</section>
<section>
<h1 style="color: #A51C30;">Thank You</h1><br>
<h2><a href="mailto:[email protected]?subject=Follow up From SDP 2023 Convening" style="color: black;">[email protected]</a></h2>
<p>Slides & Materials Available at: <br><a href="https://github.com/wbuchanan/sdpConvening2023" target="_blank">github.com/wbuchanan/sdpConvening2023</a></p>
<aside class="notes">
<p><b>Speaker notes available by pressing the s key when you view the slide deck</b></p>
</aside>
</section>
<section data-background-size="cover" data-background-image="https://sdp.cepr.harvard.edu/sites/hwpi.harvard.edu/files/styles/os_files_xxlarge/public/sdp/files/databuttons.jpg?m=1518027009&itok=onS4brki">
<h1 style="color: white; -webkit-text-stroke: 1px black;">Thank You</h1>
</section>
</div>
</div>
<script src="dist/reveal.js"></script>
<script src="plugin/notes/notes.js"></script>
<script src="plugin/markdown/markdown.js"></script>
<script src="plugin/highlight/highlight.js"></script>
<script>
// More info about initialization & config:
// - https://revealjs.com/initialization/
// - https://revealjs.com/config/
Reveal.initialize({
hash: true,
controls: true,
progress: true,
slideNumber: "h/v",
preloadIframes: true,
autoSlideStoppable: false,
transitionSpeed: 'slow',
pdfSeparateFragments: false,
// Learn about plugins: https://revealjs.com/plugins/
plugins: [ RevealMarkdown, RevealHighlight, RevealNotes ]
});
</script>
</body>
</html>