<!DOCTYPE html>
<html lang="en">
<head>
<link href='https://fonts.googleapis.com/css?family=Raleway' rel='stylesheet'>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Inactive Learning?</title>
<meta name="description" content="">
<link href="https://fonts.googleapis.com/css?family=EB+Garamond" rel="stylesheet">
<link href="https://fonts.googleapis.com/css?family=Alegreya" rel="stylesheet">
<link href="https://fonts.googleapis.com/css?family=Libre+Baskerville" rel="stylesheet">
<link href="https://fonts.googleapis.com/css?family=Raleway:300" rel="stylesheet" type='text/css'>
<link href="https://fonts.googleapis.com/css?family=PT+Serif|Work+Sans:300" rel="stylesheet">
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=EB+Garamond:ital,wght@0,400;0,500;0,800;1,800&family=Raleway:wght@100&display=swap" rel="stylesheet">
<link href="https://fonts.googleapis.com/css2?family=Libre+Baskerville&display=swap" rel="stylesheet">
<link href="https://fonts.googleapis.com/css2?family=Zilla+Slab:wght@400&display=swap" rel="stylesheet">
<link rel="stylesheet" href="//maxcdn.bootstrapcdn.com/font-awesome/4.3.0/css/font-awesome.min.css">
<link rel="stylesheet" href="/assets/main.css">
<link rel="canonical" href="https://blog.quipu-strands.com/inactive_learning">
<link rel="alternate" type="application/rss+xml" title="A Not-So Primordial Soup" href="/feed.xml">
<script data-goatcounter="https://abhgh.goatcounter.com/count"
async src="//gc.zgo.at/count.js"></script>
</head>
<body>
<header class="site-header" role="banner">
<div class="wrapper">
<a class="site-title" href="/">A Not-So Primordial Soup</a>
<nav class="site-nav">
<input type="checkbox" id="nav-trigger" class="nav-trigger" />
<label for="nav-trigger">
<span class="menu-icon">
<svg viewBox="0 0 18 15" width="18px" height="15px">
<path fill="#424242" d="M18,1.484c0,0.82-0.665,1.484-1.484,1.484H1.484C0.665,2.969,0,2.304,0,1.484l0,0C0,0.665,0.665,0,1.484,0 h15.031C17.335,0,18,0.665,18,1.484L18,1.484z"/>
<path fill="#424242" d="M18,7.516C18,8.335,17.335,9,16.516,9H1.484C0.665,9,0,8.335,0,7.516l0,0c0-0.82,0.665-1.484,1.484-1.484 h15.031C17.335,6.031,18,6.696,18,7.516L18,7.516z"/>
<path fill="#424242" d="M18,13.516C18,14.335,17.335,15,16.516,15H1.484C0.665,15,0,14.335,0,13.516l0,0 c0-0.82,0.665-1.484,1.484-1.484h15.031C17.335,12.031,18,12.696,18,13.516L18,13.516z"/>
</svg>
</span>
</label>
<div class="trigger">
<a class="page-link" href="/about/">About</a>
<a class="page-link" href="/Terms/">Terms</a>
</div>
</nav>
</div>
</header>
<main class="page-content" aria-label="Content">
<div class="wrapper">
<article class="post" itemscope itemtype="http://schema.org/BlogPosting">
<header class="post-header">
<h1 class="post-title" itemprop="name headline">Inactive Learning?</h1>
<p class="post-meta">
<time datetime="2024-09-25T12:00:00-07:00" itemprop="datePublished">
Created: Sep 25, 2024. Last major update:
Sep 25, 2024.
</time>
</p>
</header>
<div class="post-content" itemprop="articleBody">
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
"HTML-CSS": { scale: 100, linebreaks: { automatic: true } },
SVG: { linebreaks: { automatic:true } },
displayAlign: "center" });
</script>
<script type="text/javascript" async="" src="https://cdn.jsdelivr.net/npm/mathjax@2.7.9/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>
<p>I totally stole the title from a paper <a class="citation" href="#10.1145/1964897.1964906">(Attenberg & Provost, 2011)</a>.</p>
<p>In theory, <em>Active Learning (AL)</em> is a tremendous idea. You need labeled data, but labeling comes at a cost, e.g., you need to obtain labels from a domain expert. Now, let’s say your goal is to use this labeled data to train a classifier that reaches a held-out accuracy of \(90\%\). If you randomly sampled points to label, you might require \(1000\) points. Active Learning lets you <em>strategically</em> pick just \(500\) points for labeling to reach the same accuracy. Half the labeling cost for the same outcome. This is great!</p>
<p>Except that in a lot of real-world cases this is not how it plays out. I suspected this from my personal experiments, and then saw it in some of the work we did at <a href="https://www.247.ai/">[24]7.ai</a>. So we decided to thoroughly test multiple scenarios in text classification where you would believe (or current literature leads you to believe) Active Learning <em>should</em> work … but it just doesn’t. We summarized our observations in the paper <em>“On the Fragility of Active Learners for Text Classification”</em> <a class="citation" href="#fragilityActive">(Ghose & Nguyen, 2024)</a> [<a href="https://arxiv.org/pdf/2403.15744">PDF</a>], and that is what I’d refer you to for details. This post is part overview and part thoughts not in the paper. Here’s the layout:</p>
<ul id="markdown-toc">
<li><a href="#what-do-we-expect-to-see" id="markdown-toc-what-do-we-expect-to-see">What do we expect to see?</a></li>
<li><a href="#what-do-we-see" id="markdown-toc-what-do-we-see">What do we see?</a></li>
<li><a href="#here-be-dragons" id="markdown-toc-here-be-dragons">Here be Dragons</a></li>
<li><a href="#acknowledgements" id="markdown-toc-acknowledgements">Acknowledgements</a></li>
<li><a href="#references" id="markdown-toc-references">References</a></li>
</ul>
<h2 id="what-do-we-expect-to-see">What do we expect to see?</h2>
<p>OK, just for an idea, what would be an example of an AL technique? Let’s look at one of the earliest ones: <em>Uncertainty Sampling</em> <a class="citation" href="#uncertainty_sampling">(Lewis & Gale, 1994)</a>. Here you pick points to be labeled in <em>batches</em>. You kick off with a random batch (also known as the “seed” set), label it and train a classifier. Next, you use this classifier to predict labels for the unlabeled points. You note the <em>confidence</em> of each prediction, and pick the points for which the confidences were the lowest, or equivalently, for which the <em>uncertainty</em> was the greatest. This is the batch you now label. Rinse and repeat. We’ll often refer to an AL technique by its other moniker, a <em>Query Strategy (QS)</em>, which comes from the fact that it is used to <em>query</em> points for labeling.</p>
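<p>As a rough sketch, the loop just described might look like the following in scikit-learn. This is a minimal illustration with a synthetic dataset and a made-up batch size, not the exact setup from the paper:</p>

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy pool of points; in a real AL setting y would be hidden until queried.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
rng = np.random.default_rng(0)

batch_size = 100
labeled = list(rng.choice(len(X), size=batch_size, replace=False))  # seed set

for _ in range(4):  # a few AL iterations
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    unlabeled = np.setdiff1d(np.arange(len(X)), labeled)
    # Uncertainty = 1 - confidence of the most likely class.
    probs = clf.predict_proba(X[unlabeled])
    uncertainty = 1.0 - probs.max(axis=1)
    # Query the batch_size most uncertain points for labeling next.
    query = unlabeled[np.argsort(-uncertainty)[:batch_size]]
    labeled.extend(query.tolist())

print(len(labeled))  # 500: the seed batch plus 4 queried batches
```

<p>Real implementations differ in how they retrain, batch and break ties, but the skeleton - train, score uncertainty, query, repeat - is the same.</p>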
<p>The idea is that by supplying to the classifier the points it is most wrong about, you force it to improve faster. This view has problems, and we’ll get to it in a while. But if it worked - if any AL technique worked - here’s what we would expect to see:</p>
<!-- _includes/image.html -->
<div class="image-wrapper">
<img src="/assets/active_learning/basic_plot.png" alt="test" />
<p class="image-caption">All AL curves, in an ideal world.</p>
</div>
<p>The F1-scores on the y-axis are reported over a held-out set, i.e., a test set with labels. This is <em>not available</em> in a real-world setting, because if you had labels for a significantly large test dataset in a world where labeling is expensive, you would use it (or part of it) for training, negating the need for AL. The batch size in the plot’s setup is \(100\) (which is not very important here; it’s just indicative).</p>
<p>Note that:</p>
<ul>
<li>The initial accuracies are identical - or in practice, quite similar. This is because the seed set is typically randomly picked (true in this illustration).</li>
<li>As labeled data increases, both curves converge to the same accuracy (or similar accuracies in practice). This is bound to happen since beyond a critical mass of points (collected via AL or randomly) the classifier has seen all patterns in the data, and will be able to generalize well.</li>
</ul>
<h2 id="what-do-we-see">What do we see?</h2>
<p>A learning setup can vary with respect to multiple things: the dataset, the classifier family (something traditional like <em>Random Forests</em> vs. a recent one like <em>RoBERTa</em>) and the text representation (there are many embeddings to pick from, e.g., <em>MPNet</em>, <em>USE</em>). You’re thrown into such a setup with no labeled data, but you have read about this cool new AL technique - would you expect it to work?</p>
<p>This is the aspect of AL that we explored. The figure below - taken from the paper - shows the cross-product of the different factors we tested. In all, there are \(350\) experiment settings. Note that RoBERTa is an end-to-end model, so in its case, both the “Representation” and “Classifier” are identical. Not counting random sampling, we tested out \(4\) query strategies (right-most box below), some traditional (“Margin” is a form of Uncertainty Sampling), some new.</p>
<!-- _includes/image.html -->
<div class="image-wrapper">
<img src="/assets/active_learning/experiment_viz.png" alt="test" />
<p class="image-caption">Factors that were varied in our experiments. Figure 1 in the paper.</p>
</div>
<p>And below are the results we observe. Each row is a classifier+representation combination. There are 5 heatmaps, each corresponding to a number of labeled points: \(1000, 2000, \ldots, 5000\). Within a heatmap, each column represents a query strategy. Each cell shows the <strong>percentage relative improvement</strong> of a non-random query strategy over random sampling (this is why there are only \(4\) columns in each heatmap). The acronyms are defined in the previous image. The results are averaged over multiple datasets and trials.</p>
<!-- _includes/image.html -->
<div class="image-wrapper">
<img src="/assets/active_learning/AL_results.png" alt="test" />
<p class="image-caption">Observations. Figure 3 in the paper.</p>
</div>
<p>Here are the key takeaways:</p>
<ol>
<li>A lot of the percentage relative improvements are negative.</li>
<li>As the number of labeled points increases, they tend towards positive. But remember that having a lot of data - even randomly picked data - is bound to increase classifier performance anyway; so even for the positive-improvement cases the <em>relative improvements over random</em> are quite small.</li>
<li>Different setups seem to also have different “warm-up” times, i.e., the number of labeled examples at which the transition from red to green improvement occurs. There is no way to predict these warm-up times by looking at the setup; so you get what you get.</li>
<li>RoBERTa seems to register positive improvements for most setups, but the improvement is small; it’s close to \(1\%\).</li>
</ol>
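<p>For concreteness, the quantity in each heatmap cell is just the relative gain over random sampling. A tiny sketch, with hypothetical F1 scores:</p>

```python
def pct_relative_improvement(f1_qs, f1_random):
    """Percentage relative improvement of a query strategy over random sampling."""
    return 100.0 * (f1_qs - f1_random) / f1_random

# Hypothetical numbers: one QS barely beats random, another trails it.
print(round(pct_relative_improvement(0.81, 0.80), 2))  # 1.25
print(round(pct_relative_improvement(0.76, 0.80), 2))  # -5.0
```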
<p>In short, at low labeled-data regimes, AL does poorly; this is unfortunate, since this is when you want AL to actually work. AL has diminished utility at larger data regimes, because the performance of random sampling <em>also improves</em> just because we have more data. And we see that in the plot above: at high labeled-data regimes, when the relative improvements are positive they are low. If I were to distill these trends as a line plot, I’d say the average behavior of AL seems closer to this:</p>
<!-- _includes/image.html -->
<div class="image-wrapper">
<img src="/assets/active_learning/bad_AL.png" alt="test" />
<p class="image-caption">The average behavior of AL we see empirically.</p>
</div>
<h2 id="here-be-dragons">Here be Dragons</h2>
<p>So why the discrepancy between theory and practice? Let’s start by appreciating that AL is a particularly hard problem. Consider what you are up against: you are trying to build a <em>supervised</em> model <em>while</em> you are soliciting the supervision, i.e., labels. You will only know whether you were right or wrong about your pick of the AL technique <em>after</em> you have labeled enough data (because you can then simulate random sampling and see how much better or worse its trajectory is). But this is too late, because your labeling budget has already been expended. AL problems are unforgiving because you need to get it right in one shot, using a classifier and representation that are likely different from those used in reporting the original results, on a dataset that was definitely not used by the original literature (if it’s a problem you’re solving in industry). The AL problem is a model <em>transferability</em> problem in disguise.</p>
<p>To give you an example of the challenges, let’s revisit Uncertainty Sampling and consider a toy problem (this is from <a class="citation" href="#10.1145/1390156.1390183">(Dasgupta & Hsu, 2008)</a>, and the figures are from my <a href="https://drive.google.com/file/d/1zf_MIWyLY7nxEr5UioUQ7KhOQ1_clYYl/view?usp=drive_link">PhD thesis</a>). In the figure below, we have a one-dimensional dataset (shown on a line, with the rectangles showing their density) with two labels, red and green. Note the distribution - two big chunks P and Q, and some smaller chunks. Our goal is to learn a classifier, which here is essentially a point on the number line, and the classification logic is that all data to its left will be classified as “green”, and data to its right would be classified as “red”.</p>
<!-- _includes/image.html -->
<div class="image-wrapper">
<img src="/assets/active_learning/sampling_bias_problem.png" alt="test" />
<p class="image-caption">1D toy dataset, where the distribution is shown with rectangles.</p>
</div>
<p>What’s the location of the best classifier? It’s location B: the only misclassified points are in the thin green slice towards its right. That slice has a width of \(2.5\%\), so that’s the best classification error in this setup.</p>
<p>Now consider the AL setting where we don’t have any label information. In the first iteration, we randomly sample a batch of points. It’s highly likely that we end up sampling points from the larger chunks P and Q. So we get them labeled, and learn a classifier, which is going to be near C - right in the middle, with the good stuff like maximized margins. Now we compute the uncertainty values for all points. The plot below shows these values.</p>
<!-- _includes/image.html -->
<div class="image-wrapper">
<img src="/assets/active_learning/sampling_bias_unc.png" alt="test" />
<p class="image-caption">Uncertainty scores of predictions on the input space.</p>
</div>
<p>The greatest uncertainty is right in the middle of P and Q, since this represents the classifier’s boundary. In the next iteration, we sample points around C. The slim chunks of points around C match what the classifier has already seen, and it cannot know (with this sampling strategy) that there is a small red chunk, with width \(5\%\), a little further out to its left. So the classification boundary doesn’t change, and C’s uncertainty view is reinforced in future iterations, leading to cascading sampling bias. Classifier C has an error of \(5\%\).</p>
<!-- _includes/image.html -->
<div class="image-wrapper">
<img src="/assets/active_learning/sampling_bias_errors.png" alt="test" />
<p class="image-caption">Error regions for each classifier.</p>
</div>
<p>Now, this is a contrived example, and you could argue that a better heuristic would deal with this specific case (these are precisely the kinds of problems AL research tries to address) - but the larger point is that there is always going to be this pesky issue of dealing with unknowns: the dataset, classifier, representation, etc.</p>
<p>There is obviously a lot to say on this topic, but I’ll try to summarize my thoughts below:</p>
<ol>
<li>AL is unforgiving because you see results reported for one setup, and based on that you <em>hope</em> it will work for <em>your setup</em>. And you have one shot to get it right. We need ways to scientifically evaluate the similarity between setups, or the applicability of a technique to a new setup - these are lacking in the area today.</li>
<li>The focal point of much of AL research has been the query strategy, but our experiments (and others before us, including the paper whose title we have re-used) tell us that AL performance is sensitive to other factors in non-trivial ways. We think there needs to be much broader discourse around what specific problems AL should be attacking. Hint: it can’t be an isolated view of the query strategy alone.</li>
<li>We need better benchmarking. When we went around looking for benchmarking code or AL libraries, we often found that:
<ol>
<li>There is no model selection performed on the classifier that is learned in every iteration. Library defaults get used. Future researchers, there is nothing magical about <code class="language-plaintext highlighter-rouge">C=1</code> in scikit’s <a href="https://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html">LinearSVC</a>! Fitting on an arbitrary hyperparam value makes already shaky results even more shaky. It provides no reference frame, and with a different dataset etc, there is a greater chance that the gains you see will vanish.</li>
<li>Same goes for <a href="https://scikit-learn.org/stable/modules/calibration.html">model calibration</a>. Some query strategies rely on good probability estimates for the classifier’s predictions, and for that the classifier needs to be properly calibrated.</li>
</ol>
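<p>To make these two points concrete, here is a hedged sketch of what doing both per AL iteration could look like in scikit-learn. The hyperparameter grid, classifier, and synthetic data are illustrative choices, not the paper’s exact protocol:</p>

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

# Stand-in for the currently labeled pool at some AL iteration.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)

# Model selection: search over C instead of fitting at the library default C=1.
search = GridSearchCV(
    LinearSVC(max_iter=10000),
    param_grid={"C": [0.01, 0.1, 1, 10, 100]},
    cv=5, scoring="f1",
)
search.fit(X, y)

# Calibration: wrap the selected model so that predict_proba is trustworthy,
# which uncertainty-style query strategies implicitly rely on.
calibrated = CalibratedClassifierCV(
    LinearSVC(max_iter=10000, C=search.best_params_["C"]),
    method="sigmoid", cv=5,
).fit(X, y)

probs = calibrated.predict_proba(X)
print(probs.shape)  # (600, 2): calibrated class probabilities
```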
</li>
<li>Related to the above point: I have heard arguments along the lines that performing model selection or calibration is time-consuming. First, that should not be an excuse to report volatile numbers. Second, we can’t solve problems we are not aware of, and if selection/calibration is a deal-breaker for AL research, let’s make that widely known so that someone out there might take a crack at it. Maybe that will renew focus on efforts like the <em>Swiss Army Infinitesimal Jackknife</em> <a class="citation" href="#pmlr-v89-giordano19a">(Giordano et al., 2019)</a>.</li>
<li>
<p>Some AL algorithms have fine-tunable hyperparameters. These are impossible to use in practice. We are in a setup where labeled data is non-existent - what are these hyperparams supposed to be fine-tuned against? And remember that at each iteration you’re picking one batch of points, which implies the hyperparams are held fixed to some values within an iteration; so, over how many iterations should this fine-tuning occur, and how do we stabilize the process given that the number of labeled points differs across iterations? These questions are typically not addressed in the literature.</p>
<p>AL hyperparams are like <em>existence proofs</em> in mathematics - “we know for some value of these hyperparams our algorithm knocks it out of the park!” - as opposed to <em>constructive proofs</em> - “Ah! But we don’t know how to get to that value…”.</p>
</li>
<li>Lack of experiment standards: it’s hard to compare AL techniques across papers because there is no standard for setting batch or seed sizes, or even the labeling budget (the final number of labeled points). These <strong>wildly</strong> vary in the literature (for an idea, take a look at Table 4 in the paper), and sadly, they heavily influence performance.</li>
<li>This caveat is for readers of AL literature. Be careful in interpreting reported results, especially the <a href="https://en.wikipedia.org/wiki/Wilcoxon_signed-rank_test">Wilcoxon signed-rank</a> statistical test. In AL, this is used to compare the held-out accuracies between a query strategy and random (or another query strategy) at various labeled-data sizes. You would want a small <em>p-value</em>, <strong>but</strong> that doesn’t tell you by <em>how much</em> a query strategy is better. Look at the legend in the plot below. Both strategies QS1 and QS2, when compared to random, lead to the same p-value. But you probably want QS2.</li>
</ol>
<!-- _includes/image.html -->
<div class="image-wrapper">
<img src="/assets/active_learning/wilcoxon_limitation.png" alt="test" />
<p class="image-caption">Different query strategies might lead to identical p-values on the Wilcoxon test.</p>
</div>
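<p>The limitation in that last point is easy to see with made-up numbers. Below, both hypothetical query strategies beat random at every labeled-data size, so the signs and ranks - all the Wilcoxon test looks at - are identical, even though one strategy’s margin is ten times larger:</p>

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical held-out F1 scores at 8 labeled-data sizes.
random_f1 = np.array([0.60, 0.65, 0.70, 0.74, 0.77, 0.80, 0.82, 0.84])
gains = np.array([0.004, 0.007, 0.002, 0.008, 0.003, 0.006, 0.001, 0.005])

qs1 = random_f1 + gains        # beats random by a hair, every time
qs2 = random_f1 + 10 * gains   # beats random by 10x as much, every time

p1 = wilcoxon(qs1, random_f1).pvalue
p2 = wilcoxon(qs2, random_f1).pvalue
print(p1 == p2)  # True: the test sees signs and ranks, not magnitudes
```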
<p>I hope this post doesn’t convey the impression that I hate AL. But yes, it can be frustrating :-) I still think it’s a worthy problem, and I often read papers from the area. In fact, we have an earlier ICML workshop paper involving AL <a class="citation" href="#XAI_human_in_the_loop">(Nguyen & Ghose, 2023)</a>. All we are saying is that it is time to scrutinize the various practical aspects of AL. Our paper is accompanied by the release of the library <a href="https://github.com/ThuongTNguyen/ALchemist">ALchemist</a> (still being polished up) - which will hopefully make good benchmarking convenient.</p>
<h2 id="acknowledgements">Acknowledgements</h2>
<p>A lot of this research unfolded - sometimes at a glacial pace - over time. During that time, my co-author <a href="https://www.linkedin.com/in/emmathuongtn/">Emma T. Nguyen</a> and I greatly benefited from inputs from <a href="https://www.linkedin.com/in/sashankgva">Sashank Gummuluri</a>, <a href="https://www.linkedin.com/in/josh-selinger-b27a28151/">Joshua Selinger</a> and <a href="https://www.linkedin.com/in/mrmdesai/">Mandar Mutalikdesai</a>.</p>
<h2 id="references">References</h2>
<ol class="bibliography"><li><span id="10.1145/1964897.1964906">Attenberg, J., & Provost, F. (2011). Inactive Learning? Difficulties Employing Active Learning in Practice. <i>SIGKDD Explorations Newsletter</i>, <i>12</i>(2), 36–41. https://doi.org/10.1145/1964897.1964906</span>
<div class="dropDownAbstract" id="10.1145/1964897.1964906-abstract">
</div>
</li>
<li><span id="fragilityActive">Ghose, A., & Nguyen, E. T. (2024). On the Fragility of Active Learners for Text Classification. <i>The 2024 Conference on Empirical Methods in Natural Language Processing</i>. https://arxiv.org/abs/2403.15744</span>
<div class="dropDownAbstract" id="fragilityActive-abstract">
</div>
</li>
<li><span id="uncertainty_sampling">Lewis, D. D., & Gale, W. A. (1994). A Sequential Algorithm for Training Text Classifiers. <i>Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval</i>, 3–12. http://dl.acm.org/citation.cfm?id=188490.188495</span>
<div class="dropDownAbstract" id="uncertainty_sampling-abstract">
</div>
</li>
<li><span id="10.1145/1390156.1390183">Dasgupta, S., & Hsu, D. (2008). Hierarchical sampling for active learning. <i>Proceedings of the 25th International Conference on Machine Learning</i>, 208–215. https://doi.org/10.1145/1390156.1390183</span>
<div class="dropDownAbstract" id="10.1145/1390156.1390183-abstract">
</div>
</li>
<li><span id="pmlr-v89-giordano19a">Giordano, R., Stephenson, W., Liu, R., Jordan, M., & Broderick, T. (2019). A Swiss Army Infinitesimal Jackknife. In K. Chaudhuri & M. Sugiyama (Eds.), <i>Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics</i> (Vol. 89, pp. 1139–1147). PMLR. https://proceedings.mlr.press/v89/giordano19a.html</span>
<div class="dropDownAbstract" id="pmlr-v89-giordano19a-abstract">
</div>
</li>
<li><span id="XAI_human_in_the_loop">Nguyen, E. T., & Ghose, A. (2023). Are Good Explainers Secretly Human-in-the-Loop Active Learners? <i>AI&HCI Workshop at the 40th International Conference on Machine Learning,
ICML</i>. https://arxiv.org/abs/2306.13935</span>
<div class="dropDownAbstract" id="XAI_human_in_the_loop-abstract">
</div>
</li></ol>
</div>
</article>
</div>
</main>
<!--
<script src="https://giscus.app/client.js"
data-repo="[ENTER REPO HERE]"
data-repo-id="[ENTER REPO ID HERE]"
data-category="[ENTER CATEGORY NAME HERE]"
data-category-id="[ENTER CATEGORY ID HERE]"
data-mapping="pathname"
data-reactions-enabled="1"
data-emit-metadata="0"
data-theme="light"
data-lang="en"
crossorigin="anonymous"
async>
</script>
-->
<footer class="site-footer">
<div class="wrapper">
<h2 class="footer-heading">A Not-So Primordial Soup</h2>
<div class="footer-col-wrapper">
<div class="footer-col footer-col-1">
<ul class="contact-list">
<li>
A Not-So Primordial Soup
</li>
</ul>
</div>
<div class="footer-col footer-col-2">
<ul class="social-media-list">
<li>
<a href="https://linkedin.com/in/abhishek-ghose-36197624">
<i class="fa fa-linkedin"></i> LinkedIn
</a>
</li>
<li>
<a href="https://www.quora.com/profile/Abhishek-Ghose">
<i class="fa fa-quora" aria-hidden="true"></i> Quora
</a>
</li>
</ul>
</div>
<div class="footer-col footer-col-3">
<p>My thought-recorder.</p>
</div>
</div>
</div>
</footer>
<script>
var elements = document.querySelectorAll('p');
Array.prototype.forEach.call(elements, function(el, i){
if(el.innerHTML=='[expand]') {
var parentcontent = el.parentNode.innerHTML.replace('<p>[expand]</p>','<div class="expand" style="display: none; height: 0; overflow: hidden;">').replace('<p>[/expand]</p>','</div>');
el.parentNode.innerHTML = parentcontent;
}
});
var elements = document.querySelectorAll('div.expand');
Array.prototype.forEach.call(elements, function(el, i){
el.previousElementSibling.innerHTML = el.previousElementSibling.innerHTML + '<span>.. <a href="#" style="cursor: pointer;" onclick="this.parentNode.parentNode.nextElementSibling.style.display = \'block\'; this.parentNode.parentNode.nextElementSibling.style.height = \'auto\'; this.parentNode.style.display = \'none\';">read more →</a></span>';
});
</script>
</body>
</html>