---
jupyter:
jupytext:
encoding: '# -*- coding: utf-8 -*-'
text_representation:
extension: .md
format_name: markdown
format_version: '1.3'
---
<div style="width: 100%;display: flex; align-items: top;">
<div style="float:left;width: 80%;text-align:left;position:relative">
<h1>Part 2: Privacy-aware data structure - Introduction to HyperLogLog</h1>
<p><strong>Workshop: Social Media, Data Analysis, & Cartography, WS 2023/24</strong></p>
<p><em><a href="mailto:alexander.dunkel@tu-dresden.de">Alexander Dunkel</a>
<br> Leibniz Institute of Ecological Urban and Regional Development,
Transformative Capacities & Research Data Centre & Technische Universität Dresden,
Institute of Cartography</em></p>
<p><img src="https://kartographie.geo.tu-dresden.de/ad/jupyter_python_datascience/version.svg" style="float:left"></p>
</div>
<div style="float: right;">
<div style="width:300px">
<img src="https://kartographie.geo.tu-dresden.de/ad/jupyter_python_datascience/FDZ-Logo_DE_RGB-blk_bg-tra_mgn-full_h200px_web.svg" style="position:relative;width:256px;margin-top:0px;margin-right:10px;clear: both;"/>
<img src="https://kartographie.geo.tu-dresden.de/ad/jupyter_python_datascience/TU_Dresden_Logo_blau_HKS41.svg" style="position:relative;width:256px;margin-top:0px;margin-right:10px;clear: both;"/>
</div>
</div>
</div>
This is the second notebook in a series of four notebooks:
1. Introduction to **Social Media data, jupyter and python spatial visualizations**
2. Introduction to **privacy issues** with Social Media data **and possible solutions** for cartographers
3. Specific visualization techniques example: **TagMaps clustering**
4. Specific data analysis: **Topic Classification**
Open these notebooks through the file explorer on the left side.
<div class="alert alert-warning" role="alert" style="color: black;">
<ul>
<li>For this notebook, please make sure that <code>02_hll_env</code> is shown on the
<strong>top-right corner</strong>. If not, click & select.</li>
</ul>
<details style="margin-left: 1em;"><summary style="cursor: pointer;"><strong>Link the environment for this notebook, if not already done.</strong></summary>Use this command in a notebook cell:
<pre><code>
!/projects/p_lv_mobicart_2324/hll_env/bin/python \
-m ipykernel install \
--user \
--name hll_env \
--display-name="02_hll_env"
</code></pre>
</details>
</div>
<div class="alert alert-info" role="alert" style="color: black;">
<details><summary><strong>Steep learning curve ahead</strong></summary>
<div style="width:500px">
<ul>
<li>Some of the code used in this notebook is more advanced, compared to the first notebook</li>
<li>We do not expect that you read / understand every step fully</li>
<li>Rather, we think it is critical to introduce a real-world analytics workflow,
covering current challenges and opportunities in cartographic data science
</li>
</ul>
</div>
</details>
</div>
<!-- #region -->
## Introduction: Privacy & Social Media
<br>
<div style="width:500px">
<strong>HLL in summary</strong>
<ul>
<li>HyperLogLog is used for estimation of the number of distinct items in a <code>set</code> (this is called cardinality estimation)</li>
<li>By providing only approximate counts (with 3 to 5% inaccuracy), the overall data footprint and computing costs can be reduced significantly, which benefits both privacy and performance</li>
<li>A set with one billion elements takes up only about 1.5 kilobytes of memory</li>
<li>HyperLogLog sets offer functionality similar to regular sets, such as:</li>
<ul>
<li>lossless union</li>
<li>intersection</li>
<li>exclusion</li>
</ul>
</ul>
</div>
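As a rough intuition for how HLL can count without storing items, consider the following stdlib-only sketch (not the actual algorithm, and the hash function differs from the one used later in this notebook): among `n` uniformly hashed items, the maximum number of leading zero bits observed across the hashes grows like log2(n), so remembering just that maximum already hints at the count.

```python
import hashlib

def leading_zero_bits(digest: bytes, bits: int = 32) -> int:
    """Count leading zero bits in the first 4 bytes of a digest."""
    n = int.from_bytes(digest[:4], "big")
    return bits - n.bit_length()

# 1024 distinct items: the maximum observed number of leading zero
# bits should be near log2(1024) = 10
max_zeros = max(
    leading_zero_bits(hashlib.sha256(f"user{i}".encode()).digest())
    for i in range(1024))
print(max_zeros)
```

HLL refines this idea by keeping many such maxima (one per register) and averaging, which is what brings the error down to a few percent.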
<details><summary><strong>Background about HLL Research</strong></summary>
<div style="width:500px">
In recent years, user privacy has become an increasingly important consideration. Particularly when working
with VGI and Social Media data, analysts need to compromise between flexibility of analyses and increasing
vulnerability of collected (raw) data.
<br><br>
There exist many possible solutions to this problem. One approach is <strong>data minimization</strong>.
In a paper, we have specifically looked at options
to prevent collection of original data <i>at all</i>, in the context of spatial data, using a data abstraction
format called <a href="https://en.wikipedia.org/wiki/HyperLogLog">HyperLogLog</a>.
</div>
> **Dunkel, A., Löchner, M., & Burghardt, D. (2020).**
_Privacy-aware visualization of volunteered geographic information (VGI) to analyze spatial activity:
A benchmark implementation._ ISPRS International Journal of Geo-Information. [DOI][DOI-paper] / [PDF][PDF-paper]
<div style="width:500px">
Beyond privacy, HyperLogLog (HLL) is a modern and fast algorithm with many advantages, which is why it is used by (e.g.) <a href="https://research.google/pubs/pub40671/">Google</a>, <a href="https://engineering.fb.com/2018/12/13/data-infrastructure/hyperloglog/">Facebook</a> and <a href="https://de.slideshare.net/b0ris_1/audience-counting-at-scale">Apple</a> to make sense of increasing data collections.
<br><br>
</div>
[DOI-paper]: https://doi.org/10.3390/ijgi9100607
[PDF-paper]: https://www.mdpi.com/2220-9964/9/10/607/pdf
<!-- #endregion -->
## Basics
<details><summary><strong>Python-hll</strong></summary>
<div style="width:500px">
<ul>
<li>Many different HLL implementations exist </li>
<li>There is a <a href="https://github.com/AdRoll/python-hll">python library</a> available </li>
<li>The library is quite slow in comparison to the <a href="https://github.com/citusdata/postgresql-hll">Postgres HLL implementation</a> </li>
<li>We're using python-hll for demonstration purposes here </li>
<li>The website <a href="https://lbsn.vgiscience.org/">lbsn.vgiscience.org</a> contains more examples showing how to use Postgres for HLL calculation in Python. </li>
</ul>
</div>
</details>
### Introduction to HLL sets
<details><summary><strong>HyperLogLog Details</strong></summary>
<div style="width:500px">
<ul>
<li>A HyperLogLog (HLL) Set is used for counting distinct elements in the set.</li>
<li>For HLL to work, it is necessary to first <a href="https://en.wikipedia.org/wiki/Hash_function">hash</a> items</li>
<li>here, we are using <a href="https://en.wikipedia.org/wiki/MurmurHash">MurmurHash3</a></li>
<li>the hash function guarantees a predictable distribution of characters in the string,</li>
<li>which is required for the probabilistic estimation of the number of items</li>
</ul>
</div>
</details>
Let's first look at the regular approach of creating a set in Python
and counting the unique items in the set:
**Regular set approach in python**
```python
user1 = 'foo'
user2 = 'bar'
# note the duplicate entries for user2
users = {user1, user2, user2, user2}
usercount = len(users)
print(usercount)
```
**HLL approach**
```python
from python_hll.hll import HLL
import mmh3
user1_hash = mmh3.hash(user1)
user2_hash = mmh3.hash(user2)
hll = HLL(11, 5) # log2m=11, regwidth=5
hll.add_raw(user1_hash)
hll.add_raw(user2_hash)
hll.add_raw(user2_hash)
hll.add_raw(user2_hash)
usercount = hll.cardinality()
print(usercount)
```
<details><summary><strong>log2m=11, regwidth=5 ?</strong></summary>
These values define some of the characteristics of the HLL set, which affect
(e.g.) how accurate the HLL set will be. The default register width of 5 (regwidth=5),
combined with a log2m of 11, allows adding a maximum of
\begin{align}1.6 \times 10^{12} = 1\,600\,000\,000\,000\end{align}
items to a single set (with a cardinality error margin of ±2.30%)
</details>
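The ±2.30% figure follows from the standard HLL error estimate, approximately 1.04/√m, where m = 2^log2m is the number of registers:

```python
import math

log2m = 11
m = 2 ** log2m             # number of registers (2048)
rel_error = 1.04 / math.sqrt(m)
print(f"{rel_error:.2%}")  # ≈ 2.30%
```

Increasing log2m therefore trades memory for accuracy: each additional bit doubles the register count and reduces the error by a factor of √2.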
<!-- #region -->
HLL has two [modes of operation](https://github.com/citusdata/postgresql-hll/blob/master/REFERENCE.md#metadata-functions) that increase accuracy for small sets:
- Explicit
- and Sparse
<div class="alert alert-info" role="alert" style="color: black;">
<details><summary><strong>Turn off explicit mode</strong></summary>
<br>
<div style="width:500px">
<p>Because explicit mode stores hashes fully,
it cannot provide any benefit for privacy,
which is why it should be disabled.</p>
</div>
</details>
</div>
Repeat the process above with explicit mode turned off:
<!-- #endregion -->
```python
hll = HLL(11, 5, 0, 1)  # log2m=11, regwidth=5, explicit=off, sparse=auto
hll.add_raw(user1_hash)
hll.add_raw(user2_hash)
hll.add_raw(user2_hash)
hll.add_raw(user2_hash)
usercount = hll.cardinality()
print(usercount)
```
**Union of two sets**
At any point, we can update an HLL set with new items
(which is why HLL works well in streaming contexts):
```python
user3 = 'baz'
user3_hash = mmh3.hash(user3)
hll.add_raw(user3_hash)
usercount = hll.cardinality()
print(usercount)
```
.. but separate HLL sets may also be created independently,
to be merged only at the end, for the final cardinality estimation:
```python
hll_params = (11, 5, 0, 1)
hll1 = HLL(*hll_params)
hll2 = HLL(*hll_params)
hll3 = HLL(*hll_params)
hll1.add_raw(mmh3.hash('foo'))
hll2.add_raw(mmh3.hash('bar'))
hll3.add_raw(mmh3.hash('baz'))
hll1.union(hll2) # modifies hll1 to contain the union
hll1.union(hll3)
usercount = hll1.cardinality()
print(usercount)
```
<div class="alert alert-info" role="alert" style="color: black;">
<details><summary><strong>Parallelized computation</strong></summary>
<br>
<div style="width:500px">
<ul>
<li>The lossless union of HLL sets allows parallelized computation</li>
<li>The inability to parallelize computation is one of the main limitations of regular sets; it is typically referred to as
the <a href="https://en.wikipedia.org/wiki/Count-distinct_problem">count-distinct problem</a></li>
</ul>
</div>
</details>
</div>
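Why the union is lossless can be sketched in a few lines: internally, an HLL set is essentially an array of registers, and the union of two HLL sets is the element-wise maximum of their registers. Since `max` is commutative and associative, data can be partitioned, processed in parallel, and merged in any order (the register values below are toy values for illustration):

```python
regs_a = [0, 3, 1, 2]  # registers of HLL set A
regs_b = [1, 2, 4, 2]  # registers of HLL set B

def union_registers(a, b):
    """Element-wise maximum: the HLL union operation."""
    return [max(x, y) for x, y in zip(a, b)]

merged = union_registers(regs_a, regs_b)
print(merged)  # [1, 3, 4, 2]
# order of merging does not matter
assert merged == union_registers(regs_b, regs_a)
```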
## Counting Examples: 2-Components
<div style="width:500px">
What is counted entirely depends on the application context.
Typically, this will result in a <strong>2-component setup</strong> with
<ul>
<li>the <strong>first component</strong> as a reference <i>for the count context, e.g.:</i></li>
<ul><li>coordinates, areas etc. (lat, lng)</li>
<li>terms</li>
<li>dates or times</li>
<li>groups/origins (e.g. different social networks)</li>
</ul>
<li>the <strong>second component</strong> as the HLL set, for counting different metrics, e.g.</li>
<ul><li>Post Count (PC)</li>
<li>User Count (UC)</li>
<li>User Days (UD)</li>
</ul>
</ul>
</div>
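A minimal in-memory sketch of this 2-component structure, with a plain Python set standing in for the HLL overlay (the coordinates and names below are made up for illustration):

```python
from collections import defaultdict

# first component (base): a coordinate pair
# second component (overlay): a set used for counting user days
metrics = defaultdict(set)

posts = [
    ((51.05, 13.74), "userA", "2012-04-14"),
    ((51.05, 13.74), "userA", "2012-04-15"),
    ((51.05, 13.74), "userB", "2012-04-14"),
    ((48.85, 2.35), "userA", "2012-04-14"),
]
for coord, user, day in posts:
    metrics[coord].add(user + day)  # user days per coordinate

print(len(metrics[(51.05, 13.74)]))  # 3 distinct user days
```

In the privacy-aware variant, each overlay set is replaced by an HLL set, so only approximate counts, not individual users or days, can be recovered.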
<div class="alert alert-info" role="alert" style="color: black;">
<details><summary><strong>Further information</strong></summary>
<br>
<div style="width:500px"><ul>
<li>The above 'convention' for privacy-aware visual analytics has been published in <a href="https://www.mdpi.com/2220-9964/9/10/607">the paper referenced at the beginning of the notebook</a></li>
<li>for demonstration purposes, different examples of this 2-component structure are implemented <a href="https://gitlab.vgiscience.de/lbsn/structure/hlldb/-/blob/master/structure/98-create-tables.sql">in a Postgres database</a></li>
<li>more complex examples, such as composite metrics, allow for a large variety of visualizations</li>
<li>Adapting existing visualization techniques to the privacy-aware structure requires effort; <i>most</i>, but not all, techniques are compatible</li>
</ul>
</div>
</details>
</div>
## YFCC100M Example: Monitoring of Worldwide User Days
<!-- #region -->
A **User Day** is a common metric in visual analytics:
each user is counted once per day of activity.
This is commonly done by concatenating a unique user identifier
and the unique day of activity, e.g.:
```python
userdays_set = set()
userday_sample = "96117893@N05" + "2012-04-14"
userdays_set.add(userday_sample)
print(len(userdays_set))
> 1
```
<!-- #endregion -->
We have created an example processing pipeline for counting user days worldwide, using the [Flickr YFCC100M dataset](http://projects.dfki.uni-kl.de/yfcc100m/),
which contains about 50 million georeferenced photos uploaded by Flickr users under a Creative Commons license.
The full processing pipeline can be viewed in [a separate collection of notebooks](https://gitlab.vgiscience.de/ad/yfcc_gridagg).
In the following, we will use the HLL data to replicate these visuals.
We'll use python methods stored and loaded from modules.
### Data collection granularity
There's a difference between collecting and visualizing data.
During data collection, information can be stored with a higher
information granularity, to allow _some_ flexibility for
tuning visualizations.
In the YFCC100M Example, we "collect" data at a GeoHash granularity of 5
(about 3 km "snapping distance" for coordinates).
During data visualization, these coordinates and HLL sets are aggregated
further to a worldwide grid of 100x100 km bins.
Have a look at the data structure at data collection time.
```python
from pathlib import Path
OUTPUT = Path.cwd() / "out"
OUTPUT.mkdir(exist_ok=True)
TMP = Path.cwd() / "tmp"
TMP.mkdir(exist_ok=True)
```
```python
%load_ext autoreload
%autoreload 2
```
```python
import sys
module_path = str(Path.cwd().parents[0] / "py")
if module_path not in sys.path:
    sys.path.append(module_path)
from modules import tools
```
Load the full benchmark dataset.
```python
filename = "yfcc_latlng.csv"
yfcc_input_csv_path = TMP / filename
if not yfcc_input_csv_path.exists():
    sample_url = tools.get_sample_url()
    yfcc_csv_url = f'{sample_url}/download?path=%2F&files=(unknown)'
    tools.get_stream_file(url=yfcc_csv_url, path=yfcc_input_csv_path)
```
Load csv data to pandas dataframe.
```python
%%time
import pandas as pd
dtypes = {'latitude': float, 'longitude': float}
df = pd.read_csv(
    yfcc_input_csv_path, dtype=dtypes, encoding='utf-8')
print(len(df))
```
The dataset contains a total of 451,949 distinct coordinates,
at a GeoHash precision of 5 (~2,500 meters snapping distance).
```python
df.head()
```
**Calculate a single HLL cardinality (first row):**
```python
sample_hll_set = df.loc[0, "date_hll"]
```
```python
from python_hll.util import NumberUtil
hex_string = sample_hll_set[2:]
print(sample_hll_set[2:])
hll = HLL.from_bytes(NumberUtil.from_hex(hex_string, 0, len(hex_string)))
```
```python
hll.cardinality()
```
The two components of the structure are highlighted below.
```python
tools.display_header_stats(
    df.head(),
    base_cols=["latitude", "longitude"],
    metric_cols=["date_hll"])
```
<p>The colors refer to the two components:</p>
<div style="text-align:center;width:100%;">
<div style="background:#8FBC8F;width:220px;display:inline-block;vertical-align: top;*display:inline;padding:5px;">
<b>1</b> - The (spatial) <i>context</i> for HLL sets (called the 'base' in <a href="https://lbsn.vgiscience.org/concept-structure/">lbsn structure</a>)
</div>
<div style="background:#FFF8DC;width:220px;display:inline-block;vertical-align: top;*display:inline;padding:5px;">
<b>2</b> - The HLL set (called the 'overlay')
</div>
</div>
<div class="alert alert-info" role="alert" style="color: black;">
<details><summary><strong>Compare RAW data</strong></summary>
<div style="width:500px"><ul>
<li>In the data above, user ids and distinct dates are only stored in aggregate form, inside the HLL sets (date_hll)</li>
<li>With RAW data, storing the user-id and date to count userdays would also mean<br>
that each user could be tracked across different locations and times</li>
<li>HLL helps prevent such misuse of the data.</li>
</ul>
</div>
</details>
</div>
### Data visualization granularity
- There are many ways to visualize data
- Typically, visualizations present information at a granularity that is suited for the specific application context
- To aggregate information from HLL data,
individual HLL sets need to be merged
(a union operation)
- For the YFCC100M Example, the process
to union HLL sets is shown [here](https://ad.vgiscience.org/yfcc_gridagg/03_yfcc_gridagg_hll.html)
- We're going to load and visualize this
aggregate data below
```python
from modules import yfcc
```
```python
filename = "yfcc_all_est_benchmark.csv"
yfcc_benchmark_csv_path = TMP / filename
if not yfcc_benchmark_csv_path.exists():
    yfcc_csv_url = f'{sample_url}/download?path=%2F&files=(unknown)'
    tools.get_stream_file(
        url=yfcc_csv_url, path=yfcc_benchmark_csv_path)
```
```python
grid = yfcc.grid_agg_fromcsv(
    yfcc_benchmark_csv_path,
    columns=["xbin", "ybin", "userdays_hll"])
```
```python
grid[grid["userdays_hll"].notna()].head()
```
```python
tools.display_header_stats(
    grid[grid["userdays_hll"].notna()].head(),
    base_cols=["geometry"],
    metric_cols=["userdays_hll"])
```
<div class="alert alert-info" role="alert" style="color: black;">
<details><summary><strong>Description of columns</strong></summary>
<div style="width:500px"><ul>
<li><strong>geometry</strong>: A WKT-Polygon for the area (100x100km bin)</li>
<li><strong>userdays_hll</strong>: The HLL set, containing all userdays measured for the respective area</li>
<li><strong>xbin/ybin</strong>: The DataFrame (multi-) index, each 100x100km bin has a unique x and y number.</li>
</ul>
</div>
</details>
</div>
**Calculate the cardinality for all bins and store in extra column:**
```python
def hll_from_byte(hll_set: str):
    """Return HLL set from binary representation"""
    hex_string = hll_set[2:]
    return HLL.from_bytes(
        NumberUtil.from_hex(
            hex_string, 0, len(hex_string)))
```
```python
def cardinality_from_hll(hll_set, total, ix=[0]):
    """Turn binary hll into HLL set and return cardinality"""
    ix[0] += 1
    loaded = ix[0]
    hll = hll_from_byte(hll_set)
    if (loaded % 100 == 0) or (total == loaded):
        tools.stream_progress_basic(
            total, loaded)
    return hll.cardinality() - 1
```
<div class="alert alert-info" role="alert" style="color: black;">
<details><summary><strong>Progress reporting in Jupyter</strong></summary>
<div style="width:500px">
<ul>
<li><code>tools.stream_progress_basic()</code>: For long-running processes, progress should be reported.</li>
<li>Have a look at the function above, defined in <code>/py/modules/tools.py</code></li>
<li><code>ix=[0]</code>? Defines a mutable default argument, which is allocated once for the function and is then used to keep track of the progress</li>
<li><code>loaded % 100 == 0</code>? The % is the modulo operator, used here to limit update frequency to every 100th step (where the modulo evaluates to 0)</li>
</ul>
</div>
</details>
</div>
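The mutable-default-argument pattern used in `cardinality_from_hll()`, in isolation: the list is created once, when the function is defined, so its state persists across calls:

```python
def progress_counter(ix=[0]):
    """Increment and return a counter kept in a mutable default argument."""
    ix[0] += 1
    return ix[0]

first, second = progress_counter(), progress_counter()
print(first, second)  # 1 2
```

Note that this is usually flagged as a pitfall in general-purpose code; here it is used deliberately as a lightweight call counter.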
Calculate cardinality for all bins.
<div class="alert alert-info" role="alert" style="color: black;">
This process will take some time (about 3-5 Minutes),
due to using the slow(er) python-hll implementation.
</div>
```python
%%time
grid_cached = Path(TMP / "grid.pkl")
if grid_cached.exists():
    grid = pd.read_pickle(grid_cached)
else:
    mask = grid["userdays_hll"].notna()
    grid["userdays_est"] = 0
    total = len(grid[mask].index)
    grid.loc[mask, 'userdays_est'] = grid[mask].apply(
        lambda x: cardinality_from_hll(
            x["userdays_hll"], total),
        axis=1)
```
<div class="alert alert-info" role="alert" style="color: black;"><details><summary><strong>RuntimeWarning?</strong> </summary>
<div style="width:500px"><ul>
<li><a href="https://github.com/AdRoll/python-hll">python-hll library</a> is in a very early stage of development</li>
<li>it is not fully compatible with the <a href="https://github.com/citusdata/postgresql-hll">citus hll implementation</a> in postgres</li>
<li>The shown <a href="https://tech.nextroll.com/blog/dev/2019/10/01/hll-in-python.html">RuntimeWarning (Overflow)</a> is one of the issues that need to be resolved in the future</li>
<li>If you run this notebook locally, it is recommended to use <a href="https://gitlab.vgiscience.de/lbsn/databases/pg-hll-empty">pg-hll-empty</a> for
any hll calculations, as is shown (e.g.) in the original <a href="https://gitlab.vgiscience.de/ad/yfcc_gridagg">YFCC100M notebooks</a>.</li>
</ul>
</div>
</details>
</div>
<div class="alert alert-info" role="alert" style="color: black;"><details><summary><code>grid[mask].apply()</code>?</summary>
<div style="width:500px"><ul>
<li>This is another example of boolean masking with pandas</li>
<li><code>grid["userdays_hll"].notna()</code> creates a list (a <code>pd.Series</code>) of True/False values</li>
<li><code>grid.loc[mask, 'userdays_est']</code> uses the mask to select rows, and the column 'userdays_est' to assign values</li>
</ul>
</div>
</details></div>
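Boolean masking in a minimal, self-contained form (with made-up column names):

```python
import pandas as pd

df = pd.DataFrame({"v": [1.0, None, 3.0]})
mask = df["v"].notna()  # pd.Series of True/False values
df["v2"] = 0.0          # default value for all rows
# assign only where the mask is True; alignment is by index
df.loc[mask, "v2"] = df.loc[mask, "v"] * 2
print(df["v2"].tolist())  # [2.0, 0.0, 6.0]
```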
From now on, disable warnings:
```python
import warnings
warnings.filterwarnings('ignore')
```
Write a `pickle` of the dataframe, to cache for repeated use:
```python
if not grid_cached.exists():
    grid.to_pickle(grid_cached)
```
**Have a look at the cardinality below.**
```python
grid[grid["userdays_hll"].notna()].head()
```
#### Visualize the grid, using prepared methods
Temporary fix to prevent proj-path warning:
```python
import sys, os
os.environ["PROJ_LIB"] = str(Path(sys.executable).parents[1] / 'share' / 'proj')
```
Activate the bokeh holoviews extension.
```python
from modules import grid as yfcc_grid
import holoviews as hv
hv.notebook_extension('bokeh')
```
.. visualize the grid, **as an interactive map**, shown in the notebook:
```python
gv_layers = yfcc_grid.plot_interactive(
    grid, title=f'YFCC User Days (estimated) per 100 km grid',
    metric="userdays_est")
```
```python
gv_layers
```
.. or, **store as an external HTML file**, to be viewed separately (note the `output=OUTPUT` parameter that enables HTML export):
```python
yfcc_grid.plot_interactive(
    grid, title=f'YFCC User Days (estimated) per 100 km grid', metric="userdays_est",
    store_html="yfcc_userdays_est", output=OUTPUT)
```
<div class="alert alert-warning" role="alert" style="color: black;">
<details><summary><strong>Open HTML</strong></summary>
<ul>
<li>go to notebooks/out</li>
<li>.. and open yfcc_userdays_est.html with <br><strong>Right-Click > Open In New Browser-Tab</strong></li>
</ul>
</details>
</div>
## Working with HLL data: Intersection Example
HLL data is not purely statistical.
There is _some_ flexibility to explore HLL sets further,
using the **union** and **intersection** functionality.
We're going to explore this functionality below.
The task is to union all HLL sets for userdays for:
- Germany
- France
- UK
.. and to finally visualize total **user counts** for these countries,
as well as the subset of users who have visited two or all three of these
countries.
<div class="alert alert-info" role="alert" style="color: black;">
<details><summary><strong>Python GIS Operations</strong></summary>
<div style="width:500px"><ul>
<li>The code below is not any more complex than working with RAW data</li>
<li>We'll learn how to use some common GIS operations in python below</li>
</ul>
</div>
</details>
</div>
<div class="alert alert-info" role="alert" style="color: black;">
<details><summary><strong>Why user counts, and not user days?</strong></summary>
<div style="width:500px"><ul>
<li>Userdays (e.g. in the form of <code>user-id||date</code>) are not suited to study intersections of visitation between countries.</li>
<li>In other words, it is unlikely that one user has visited more than one country on a single day.</li>
<li>Little or no intersection would be found using user days.</li>
<li>Using hashed user ids (converted to HLL) instead allows counting the users who have visited two or more countries</li>
</ul>
</div>
</details>
</div>
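The effect can be illustrated with plain sets (made-up user ids and dates): the same user produces different user-day elements in each country, while the bare user ids do intersect.

```python
# user days: user id concatenated with a date
de_userdays = {"u1|2012-04-14", "u2|2012-04-15"}
fr_userdays = {"u1|2012-05-01", "u3|2012-05-02"}
# u1 visited both countries, but on different days:
assert not (de_userdays & fr_userdays)

# plain user ids, by contrast, do intersect
de_users = {d.split("|")[0] for d in de_userdays}
fr_users = {d.split("|")[0] for d in fr_userdays}
print(de_users & fr_users)  # {'u1'}
```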
Load user hll sets:
```python
grid = yfcc.grid_agg_fromcsv(
    TMP / "yfcc_all_est_benchmark.csv",
    columns=["xbin", "ybin", "usercount_hll"])
```
Preview:
```python
grid[grid["usercount_hll"].notna()].head()
```
### Union hll sets for Countries UK, DE and FR
#### Selection of grid cells based on country geometry
Load country geometry:
```python
import geopandas as gp
world = gp.read_file(
    gp.datasets.get_path('naturalearth_lowres'),
    crs=yfcc.CRS_WGS)
world = world.to_crs(
    yfcc.CRS_PROJ)
```
<div class="alert alert-info" role="alert" style="color: black;"><details><summary><strong><code>gp.datasets.get_path()</code>?</strong></summary>
<ul>
<li>Some data is provided by Geopandas</li>
<li>One of these datasets is the <a href="https://www.naturalearthdata.com/downloads/">natural earth lowres countries</a> shapefile</li>
<li>.. but you can load any Shapefile or GIS data here.</li>
</ul>
</details></div>
Select geometry for DE, FR and UK
```python
de = world[world['name'] == "Germany"]
uk = world[world['name'] == "United Kingdom"]
fr = world[world['name'] == "France"]
```
<div class="alert alert-warning" role="alert" style="color: black;">
<details><summary><strong>Select different countries</strong></summary>
<ul>
<li>Optionally: Modify the list of countries to adapt visualizations below.</li>
</ul>
</details>
</div>
Drop French territory of French Guiana:
```python
fr = fr.explode().iloc[1:].dissolve(by='name')
fr.plot()
```
**Preview selection.**
Note that the territory of France includes Corsica,
which is acceptable for the example use case.
```python
import matplotlib.pyplot as plt
fig, (ax1, ax2, ax3) = plt.subplots(1, 3)
fig.suptitle(
    'Areas to test for common visitors in the hll benchmark dataset')
for ax in (ax1, ax2, ax3):
    ax.set_axis_off()
ax1.title.set_text('DE')
ax2.title.set_text('UK')
ax3.title.set_text('FR')
de.plot(ax=ax1)
uk.plot(ax=ax2)
fr.plot(ax=ax3)
```
#### Intersection with grid
Since the grid size is 100 km,
directly intersecting grid cells with country geometries would introduce errors (an instance of the _[MAUP](https://en.wikipedia.org/wiki/Modifiable_areal_unit_problem)_).
Use the centroids of grid cells to select bins based on country geometry.
Get centroids as a GeoSeries and turn them into a GeoDataFrame:
```python
centroid_grid = grid.centroid.reset_index()
centroid_grid.set_index(["xbin", "ybin"], inplace=True)
```
```python
grid.centroid
```
Define a function for the intersection, using geopandas [sjoin (spatial join)](https://geopandas.org/reference/geopandas.sjoin.html):
```python
from geopandas.tools import sjoin

def intersect_grid_centroids(
        grid: gp.GeoDataFrame,
        intersect_gdf: gp.GeoDataFrame):
    """Return grid centroids from grid that
    intersect with intersect_gdf
    """
    centroid_grid = gp.GeoDataFrame(
        grid.centroid)
    centroid_grid.rename(
        columns={0: 'geometry'},
        inplace=True)
    centroid_grid.set_geometry(
        'geometry', crs=grid.crs,
        inplace=True)
    grid_intersect = sjoin(
        centroid_grid, intersect_gdf,
        how='right')
    grid_intersect.set_index(
        ["index_left0", "index_left1"],
        inplace=True)
    grid_intersect.index.names = ['xbin', 'ybin']
    return grid.loc[grid_intersect.index]
```
Run intersection for countries:
```python
grid_de = intersect_grid_centroids(
    grid=grid, intersect_gdf=de)
grid_de.plot(edgecolor='white')
```
```python
grid_fr = intersect_grid_centroids(
    grid=grid, intersect_gdf=fr)
grid_fr.plot(edgecolor='white')
```
```python
grid_uk = intersect_grid_centroids(
    grid=grid, intersect_gdf=uk)
grid_uk.plot(edgecolor='white')
```
#### Plot preview of selected grid cells (bins)
Define colors:
```python
color_de = "#fc4f30"
color_fr = "#008fd5"
color_uk = "#6d904f"
```
Define map boundary:
```python
bbox_europe = (
    -9.580078, 41.571384,
    16.611328, 59.714117)
minx, miny = yfcc.PROJ_TRANSFORMER.transform(
    bbox_europe[0], bbox_europe[1])
maxx, maxy = yfcc.PROJ_TRANSFORMER.transform(
    bbox_europe[2], bbox_europe[3])
buf = 100000
```
```python
from typing import List, Optional

def plot_map(
        grid: gp.GeoDataFrame, sel_grids: List[gp.GeoDataFrame],
        sel_colors: List[str],
        title: Optional[str] = None, save_fig: Optional[str] = None,
        ax=None, output: Optional[Path] = OUTPUT):
    """Plot GeoDataFrame with matplotlib backend, optionally export as png"""
    if not ax:
        fig, ax = plt.subplots(1, 1, figsize=(5, 6))
    ax.set_xlim(minx-buf, maxx+buf)
    ax.set_ylim(miny-buf, maxy+buf)
    if title:
        ax.set_title(title, fontsize=12)
    for ix, sel_grid in enumerate(sel_grids):
        sel_grid.plot(
            ax=ax,
            color=sel_colors[ix],
            edgecolor='white',
            alpha=0.9)
    grid.boundary.plot(
        ax=ax,
        edgecolor='black',
        linewidth=0.1,
        alpha=0.9)
    # combine with world geometry
    world.plot(
        ax=ax, color='none', edgecolor='black', linewidth=0.3)
    # turn axis off
    ax.set_axis_off()
    if not save_fig:
        return
    fig.savefig(output / save_fig, dpi=300, format='PNG',
                bbox_inches='tight', pad_inches=1)
```
```python
sel_grids = [grid_de, grid_uk, grid_fr]
sel_colors = [color_de, color_uk, color_fr]
plot_map(
    grid=grid, sel_grids=sel_grids,
    sel_colors=sel_colors,
    title='Grid selection for DE, FR and UK',
    save_fig='grid_selection_countries.png')
```
### Union of hll sets
```python
def union_hll(hll: HLL, hll2: HLL):
    """Union of two HLL sets. The first HLL set will be modified in-place."""
    hll.union(hll2)

def union_all_hll(
        hll_series: pd.Series, cardinality: bool = True):
    """HLL union and (optional) cardinality estimation from a series of hll sets

    Args:
        hll_series: Indexed series (bins) of hll sets.
        cardinality: If True, returns the cardinality (count). Otherwise,
            the unioned hll set will be returned.
    """
    hll_set = None
    for hll_set_str in hll_series.values.tolist():
        if hll_set is None:
            # set first hll set
            hll_set = hll_from_byte(hll_set_str)
            continue
        hll_set2 = hll_from_byte(hll_set_str)
        union_hll(hll_set, hll_set2)
    if cardinality:
        return hll_set.cardinality()
    return hll_set
```
Calculate distinct users per country:
```python
grid_sel = {
    "de": grid_de,
    "uk": grid_uk,
    "fr": grid_fr,
}
distinct_users_total = {}
for country, sel_grid in grid_sel.items():
    # drop bins with no values
    cardinality_total = union_all_hll(
        sel_grid["usercount_hll"].dropna())
    distinct_users_total[country] = cardinality_total
    print(
        f"{distinct_users_total[country]} distinct users "
        f"who shared YFCC100M photos in {country.upper()}")
```
### Calculate intersection (common visitors)
<!-- #region -->
According to the [inclusion-exclusion principle](https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle):

$|A \cup B| = |A| + |B| - |A \cap B|$

which can also be written as:

$|A \cap B| = |A| + |B| - |A \cup B|$

Therefore, unions can be used to calculate intersections. Calculate $|DE \cup FR|$, $|DE \cup UK|$ and $|UK \cup FR|$, i.e.:
```python
IntersectionCount =
    hll_cardinality(grid_de)::int +
    hll_cardinality(grid_fr)::int -
    hll_cardinality(hll_union(grid_de, grid_fr))::int
```
<!-- #endregion -->
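The same arithmetic can be verified with ordinary Python sets (made-up user ids); HLL performs the identical computation, only with estimated cardinalities:

```python
visitors_de = {"u1", "u2", "u3", "u4"}
visitors_fr = {"u3", "u4", "u5"}

union_count = len(visitors_de | visitors_fr)            # |A ∪ B| = 5
intersection_count = (
    len(visitors_de) + len(visitors_fr) - union_count)  # 4 + 3 - 5 = 2
print(intersection_count)  # 2
# cross-check against the direct set intersection
assert intersection_count == len(visitors_de & visitors_fr)
```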
**First, prepare combination for different sets.**