---
format:
  html:
    toc: true
execute:
  eval: true
  freeze: true
---
```{r}
#| include: false
library(dplyr)
library(dbplyr)
library(tictoc)
library(DBI)
source("site/knitr-print.R")
mall::llm_use("ollama", "llama3.2", seed = 100, .cache = "_readme_cache")
```
<img src="site/images/favicon/apple-touch-icon-180x180.png" style="float:right" />
<!-- badges: start -->
[![R check](https://github.com/mlverse/mall/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/mlverse/mall/actions/workflows/R-CMD-check.yaml)
[![Python tests](https://github.com/mlverse/mall/actions/workflows/python-tests.yaml/badge.svg)](https://github.com/mlverse/mall/actions/workflows/python-tests.yaml)
[![R package coverage](https://codecov.io/gh/mlverse/mall/branch/main/graph/badge.svg)](https://app.codecov.io/gh/mlverse/mall?branch=main)
[![Lifecycle: experimental](https://img.shields.io/badge/lifecycle-experimental-orange.svg)](https://lifecycle.r-lib.org/articles/stages.html#experimental)
<!-- badges: end -->
Run multiple LLM predictions against a data frame. The predictions are processed
row-wise over a specified column. It works by using a pre-determined one-shot prompt,
along with the current row's content. `mall` has been implemented for both R
and Python, and it will use the appropriate prompt for the requested analysis.
Currently, the included prompts perform the following:
- [Sentiment analysis](#sentiment)
- [Text summarizing](#summarize)
- [Classify text](#classify)
- [Extract one, or several](#extract), specific pieces of information from the text
- [Translate text](#translate)
- [Verify that something is true](#verify) about the text (binary)
- [Custom prompt](#custom-prompt)
This package is inspired by the SQL AI functions now offered by vendors such as
[Databricks](https://docs.databricks.com/en/large-language-models/ai-functions.html)
and Snowflake. `mall` uses [Ollama](https://ollama.com/) to interact with LLMs
installed locally.
<img src="site/images/dplyr.png" style="float:left;margin-right:33px;margin-top:20px;" width="100" />
For **R**, that interaction takes place via the
[`ollamar`](https://hauselin.github.io/ollama-r/) package. The functions are
designed to work easily within piped commands, such as those used with `dplyr`.
```r
reviews |>
llm_sentiment(review)
```
<img src="site/images/polars.png" style="float:left;margin-right:10px;" width="120" />
For **Python**, `mall` is a library extension to [Polars](https://pola.rs/). To
interact with Ollama, it uses the official
[Python library](https://github.com/ollama/ollama-python).
```python
reviews.llm.sentiment("review")
```
## Motivation
We want to find new ways to help data scientists use LLMs in their daily work.
Unlike the familiar interfaces, such as chatting and code completion, this interface
runs your text data directly against the LLM.
The LLM's flexibility allows it to adapt to the subject of your data and
provide surprisingly accurate predictions. This saves the data scientist the
need to write and tune an NLP model.
In recent times, the capabilities of LLMs that can run locally on your computer
have increased dramatically. This means that this sort of analysis can run
on your machine with good accuracy. Additionally, it makes it possible to take
advantage of LLMs at your institution, since the data will not leave the
corporate network.
## Get started
- Install `mall`
::: {.panel-tabset group="language"}
## R
From CRAN:
```r
install.packages("mall")
```
From GitHub:
```r
pak::pak("mlverse/mall/r")
```
## Python
From PyPI:
```python
pip install mlverse-mall
```
From GitHub:
```python
pip install "mall @ git+https://git@github.com/mlverse/mall.git#subdirectory=python"
```
:::
- [Download Ollama from the official website](https://ollama.com/download)
- Install and start Ollama on your computer
::: {.panel-tabset group="language"}
## R
- Install Ollama on your machine. The `ollamar` package's website provides this
[Installation guide](https://hauselin.github.io/ollama-r/#installation)
- Download an LLM model. For example, I have been developing this package using
Llama 3.2 for testing. To get that model, you can run:
```r
ollamar::pull("llama3.2")
```
## Python
- Install the official Ollama library
```python
pip install ollama
```
- Download an LLM model. For example, I have been developing this package using
Llama 3.2 for testing. To get that model, you can run:
```python
import ollama
ollama.pull('llama3.2')
```
:::
#### With Databricks (R only)
If you pass a table connected to **Databricks** via `odbc`, `mall` will
automatically use Databricks' LLM instead of Ollama. *You won't need Ollama
installed if you are using Databricks only.*
`mall` will call the appropriate SQL AI function. For more information see our
[Databricks article.](https://mlverse.github.io/mall/articles/databricks.html)
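As a minimal sketch (not evaluated here), assuming a working Databricks
connection through the `odbc` package; the environment variable and table name
below are placeholders:
```r
# A minimal sketch, assuming a working Databricks connection via `odbc`;
# the environment variable and table name are placeholders
library(DBI)
library(dplyr)
library(mall)

con <- dbConnect(
  odbc::databricks(),
  httpPath = Sys.getenv("DATABRICKS_HTTP_PATH")
)

# A remote table triggers the Databricks SQL AI functions instead of Ollama
tbl(con, "reviews") |>
  llm_sentiment(review)
```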
## LLM functions
We will start by loading a very small data set contained in `mall`. It has
three product reviews that we will use as the source for our examples.
::: {.panel-tabset group="language"}
## R
```{r}
library(mall)
data("reviews")
reviews
```
## Python
```{python}
#| include: false
import polars as pl
pl.Config(fmt_str_lengths=100)
pl.Config.set_tbl_hide_dataframe_shape(True)
pl.Config.set_tbl_hide_column_data_types(True)
```
```{python}
import mall
data = mall.MallData
reviews = data.reviews
reviews
```
:::
```{python}
#| include: false
reviews.llm.use(options = dict(seed = 100), _cache = "_readme_cache")
```
### Sentiment
Automatically returns "positive", "negative", or "neutral" based on the text.
::: {.panel-tabset group="language"}
## R
```{r}
reviews |>
llm_sentiment(review)
```
For more information and examples visit this function's
[R reference page](reference/llm_sentiment.qmd)
## Python
```{python}
reviews.llm.sentiment("review")
```
For more information and examples visit this function's
[Python reference page](reference/MallFrame.qmd#mall.MallFrame.sentiment)
:::
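The categories returned can also be restricted. Here is a hedged sketch,
assuming `llm_sentiment()` exposes an `options` argument for this purpose
(confirm in the reference page):
```r
# Hedged sketch: restrict the output to two categories; assumes
# llm_sentiment() has an `options` argument (confirm in the reference page)
reviews |>
  llm_sentiment(review, options = c("positive", "negative"))
```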
### Summarize
There may be a need to reduce the number of words in a given text, typically to
make its intent easier to understand. The function has an argument, `max_words`,
to control the maximum number of words in the output:
::: {.panel-tabset group="language"}
## R
```{r}
reviews |>
llm_summarize(review, max_words = 5)
```
For more information and examples visit this function's
[R reference page](reference/llm_summarize.qmd)
## Python
```{python}
reviews.llm.summarize("review", 5)
```
For more information and examples visit this function's
[Python reference page](reference/MallFrame.qmd#mall.MallFrame.summarize)
:::
### Classify
Use the LLM to categorize the text into one of the options you provide:
::: {.panel-tabset group="language"}
## R
```{r}
reviews |>
llm_classify(review, c("appliance", "computer"))
```
For more information and examples visit this function's
[R reference page](reference/llm_classify.qmd)
## Python
```{python}
reviews.llm.classify("review", ["computer", "appliance"])
```
For more information and examples visit this function's
[Python reference page](reference/MallFrame.qmd#mall.MallFrame.classify)
:::
### Extract
This is one of the most interesting use cases. Using natural language, we can tell the
LLM to return a specific part of the text. In the following example, we request
that the LLM return the product being referred to. We do this by simply saying
"product". The LLM understands what we *mean* by that word, and looks for it
in the text. A vector of labels can also be passed to extract several pieces at
once; see the sketch after the examples.
::: {.panel-tabset group="language"}
## R
```{r}
reviews |>
llm_extract(review, "product")
```
For more information and examples visit this function's
[R reference page](reference/llm_extract.qmd)
## Python
```{python}
reviews.llm.extract("review", "product")
```
For more information and examples visit this function's
[Python reference page](reference/MallFrame.qmd#mall.MallFrame.extract)
:::
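As mentioned above, several pieces of information can be requested at once.
A hedged sketch, assuming `llm_extract()` accepts a vector via its `labels`
argument (confirm in the reference page):
```r
# Hedged sketch: request two items per review; assumes llm_extract()
# takes a vector of labels (confirm in the reference page)
reviews |>
  llm_extract(review, labels = c("product", "feelings"))
```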
### Verify
This function allows you to check whether a statement is true, based
on the provided text. By default, it will return a 1 for "yes" and a 0 for
"no". This can be customized; see the sketch after the examples below.
::: {.panel-tabset group="language"}
## R
```{r}
reviews |>
llm_verify(review, "is the customer happy with the purchase")
```
For more information and examples visit this function's
[R reference page](reference/llm_verify.qmd)
## Python
```{python}
reviews.llm.verify("review", "is the customer happy with the purchase")
```
For more information and examples visit this function's
[Python reference page](reference/MallFrame.qmd#mall.MallFrame.verify)
:::
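A hedged sketch of customizing the returned values, assuming `llm_verify()`
exposes a `yes_no` argument that takes a size-2 vector (confirm in the
reference page):
```r
# Hedged sketch: return "y"/"n" instead of 1/0; assumes llm_verify()
# has a `yes_no` argument (confirm in the reference page)
reviews |>
  llm_verify(review, "is the customer happy with the purchase", yes_no = c("y", "n"))
```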
### Translate
As the title implies, this function will translate the text into a specified
language. What is really nice is that you don't need to specify the language
of the source text. Only the target language needs to be defined. The translation
accuracy will depend on the LLM.
::: {.panel-tabset group="language"}
## R
```{r}
reviews |>
llm_translate(review, "spanish")
```
For more information and examples visit this function's
[R reference page](reference/llm_translate.qmd)
## Python
```{python}
reviews.llm.translate("review", "spanish")
```
For more information and examples visit this function's
[Python reference page](reference/MallFrame.qmd#mall.MallFrame.translate)
:::
### Custom prompt
It is possible to pass your own prompt to the LLM, and have `mall` run it
against each text entry:
::: {.panel-tabset group="language"}
## R
```{r}
my_prompt <- paste(
"Answer a question.",
"Return only the answer, no explanation",
"Acceptable answers are 'yes', 'no'",
"Answer this about the following text, is this a happy customer?:"
)
reviews |>
llm_custom(review, my_prompt)
```
For more information and examples visit this function's
[R reference page](reference/llm_custom.qmd)
## Python
```{python}
my_prompt = (
"Answer a question. "
"Return only the answer, no explanation "
"Acceptable answers are 'yes', 'no' "
"Answer this about the following text, is this a happy customer?:"
)
reviews.llm.custom("review", prompt = my_prompt)
```
For more information and examples visit this function's
[Python reference page](reference/MallFrame.qmd#mall.MallFrame.custom)
:::
## Model selection and settings
You can set the model, and its options, to use when calling the LLM. In this
case, "options" refers to model-specific settings, such as the seed or the
temperature.
::: {.panel-tabset group="language"}
## R
Invoking an `llm` function will automatically initialize a model selection
if you don't have one selected yet. If there is only one option, it will be
pre-selected for you. If there is more than one available model, then `mall`
will present you with a menu so you can select which model you wish to
use.
Calling `llm_use()` directly will let you specify the model and backend to use.
You can also set up additional arguments that will be passed down to the
function that actually runs the prediction. In the case of Ollama, that function
is [`chat()`](https://hauselin.github.io/ollama-r/reference/chat.html).
The model to use, and other options, can be set for the current R session:
```{r}
#| eval: false
llm_use("ollama", "llama3.2", seed = 100, temperature = 0)
```
## Python
The model and options to be used will be defined at the Polars data frame
object level. If not passed, the default model will be **llama3.2**.
```{python}
#| eval: false
reviews.llm.use("ollama", "llama3.2", options = dict(seed = 100))
```
:::
#### Results caching
By default, `mall` caches the requests and corresponding results from a given
LLM run. Each response is saved as an individual JSON file. The default folder
name is `_mall_cache`, and it can be customized if needed. Caching can also be
turned off by setting the argument to an empty string (`""`).
::: {.panel-tabset group="language"}
## R
```{r}
#| eval: false
llm_use(.cache = "_my_cache")
```
To turn off:
```{r}
#| eval: false
llm_use(.cache = "")
```
## Python
```{python}
#| eval: false
reviews.llm.use(_cache = "my_cache")
```
To turn off:
```{python}
#| eval: false
reviews.llm.use(_cache = "")
```
:::
For more information see the [Caching Results](articles/caching.qmd) article.
## Key considerations
The main consideration is **cost**, either in time or in money.
If you are using this method with a locally available LLM, the cost will be long
running times. Unless you are using a very specialized LLM, a given LLM is a
general-purpose model that was fitted using a vast amount of data, so determining
a response for each row takes longer than it would with a manually created NLP
model. The default model used in Ollama is [Llama 3.2](https://ollama.com/library/llama3.2),
which has 3 billion parameters.
If you are using an external LLM service, the consideration will be the billing
cost of using such a service. Keep in mind that you will be sending a lot
of data to be evaluated.
Another consideration is the novelty of this approach. Early tests are
providing encouraging results, but you, as a user, will still need to keep
in mind that the predictions will not be infallible, so always check the output.
At this time, I think the best use for this method is for quick analyses.
## Vector functions (R only)
`mall` includes functions that expect a vector, instead of a table, to run the
predictions. This should make it easier to test things, such as custom prompts
or results for specific pieces of text. Each `llm_` function has a corresponding
`llm_vec_` function:
```{r}
llm_vec_sentiment("I am happy")
```
```{r}
llm_vec_translate("Este es el mejor dia!", "english")
```
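For example, a custom prompt can be tried out on a single string before applying
it to a full table. A hedged sketch, assuming `llm_vec_custom()` mirrors
`llm_custom()` (confirm in the reference page):
```r
# Hedged sketch: test a custom prompt on one string; assumes
# llm_vec_custom() mirrors llm_custom() (confirm in the reference page)
my_prompt <- paste(
  "Answer a question.",
  "Return only the answer, no explanation",
  "Acceptable answers are 'yes', 'no'",
  "Answer this about the following text, is this a happy customer?:"
)
llm_vec_custom("I am very pleased with this product", my_prompt)
```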