{
"authors": [
"MITRE"
],
"category": "attack-pattern",
"description": "MITRE ATLAS Attack Pattern - Adversarial Threat Landscape for Artificial-Intelligence Systems",
"name": "MITRE ATLAS Attack Pattern",
"source": "https://github.com/mitre-atlas/atlas-navigator-data",
"type": "mitre-atlas-attack-pattern",
"uuid": "95e55c7e-68a9-453b-9677-020c8fc06333",
"values": [
{
"description": "Adversaries may search publicly available research to learn how and where machine learning is used within a victim organization.\nThe adversary can use this information to identify targets for attack, or to tailor an existing attack to make it more effective.\nOrganizations often use open source model architectures trained on additional proprietary data in production.\nKnowledge of this underlying architecture allows the adversary to craft more realistic proxy models ([Create Proxy ML Model](/techniques/AML.T0005)).\nAn adversary can search these resources for publications for authors employed at the victim organization.\n\nResearch materials may exist as academic papers published in [Journals and Conference Proceedings](/techniques/AML.T0000.000), or stored in [Pre-Print Repositories](/techniques/AML.T0000.001), as well as [Technical Blogs](/techniques/AML.T0000.002).\n",
"meta": {
"external_id": "AML.T0000",
"kill_chain": [
"mitre-atlas:reconnaissance"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0000"
]
},
"uuid": "65d21e6b-7abe-4623-8f5c-88011cb362cb",
"value": "Search for Victim's Publicly Available Research Materials"
},
{
"description": "Many of the publications accepted at premier machine learning conferences and journals come from commercial labs.\nSome journals and conferences are open access, others may require paying for access or a membership.\nThese publications will often describe in detail all aspects of a particular approach for reproducibility.\nThis information can be used by adversaries to implement the paper.\n",
"meta": {
"external_id": "AML.T0000.000",
"kill_chain": [
"mitre-atlas:reconnaissance"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0000.000"
]
},
"related": [
{
"dest-uuid": "65d21e6b-7abe-4623-8f5c-88011cb362cb",
"type": "subtechnique-of"
}
],
"uuid": "a17a1941-ca02-4273-9d7f-d864ea122bdb",
"value": "Journals and Conference Proceedings"
},
{
"description": "Pre-Print repositories, such as arXiv, contain the latest academic research papers that haven't been peer reviewed.\nThey may contain research notes, or technical reports that aren't typically published in journals or conference proceedings.\nPre-print repositories also serve as a central location to share papers that have been accepted to journals.\nSearching pre-print repositories provide adversaries with a relatively up-to-date view of what researchers in the victim organization are working on.\n",
"meta": {
"external_id": "AML.T0000.001",
"kill_chain": [
"mitre-atlas:reconnaissance"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0000.001"
]
},
"related": [
{
"dest-uuid": "65d21e6b-7abe-4623-8f5c-88011cb362cb",
"type": "subtechnique-of"
}
],
"uuid": "f09d9beb-4cb5-4094-83b6-e46bedc8a20e",
"value": "Pre-Print Repositories"
},
{
"description": "Research labs at academic institutions and Company R&D divisions often have blogs that highlight their use of machine learning and its application to the organizations unique problems.\nIndividual researchers also frequently document their work in blogposts.\nAn adversary may search for posts made by the target victim organization or its employees.\nIn comparison to [Journals and Conference Proceedings](/techniques/AML.T0000.000) and [Pre-Print Repositories](/techniques/AML.T0000.001) this material will often contain more practical aspects of the machine learning system.\nThis could include underlying technologies and frameworks used, and possibly some information about the API access and use case.\nThis will help the adversary better understand how that organization is using machine learning internally and the details of their approach that could aid in tailoring an attack.\n",
"meta": {
"external_id": "AML.T0000.002",
"kill_chain": [
"mitre-atlas:reconnaissance"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0000.002"
]
},
"related": [
{
"dest-uuid": "65d21e6b-7abe-4623-8f5c-88011cb362cb",
"type": "subtechnique-of"
}
],
"uuid": "b37a58fd-ee29-4f1d-92d8-3bfccf884e8b",
"value": "Technical Blogs"
},
{
"description": "Much like the [Search for Victim's Publicly Available Research Materials](/techniques/AML.T0000), there is often ample research available on the vulnerabilities of common models. Once a target has been identified, an adversary will likely try to identify any pre-existing work that has been done for this class of models.\nThis will include not only reading academic papers that may identify the particulars of a successful attack, but also identifying pre-existing implementations of those attacks. The adversary may obtain [Adversarial ML Attack Implementations](/techniques/AML.T0016.000) or develop their own [Adversarial ML Attacks](/techniques/AML.T0017.000) if necessary.",
"meta": {
"external_id": "AML.T0001",
"kill_chain": [
"mitre-atlas:reconnaissance"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0001"
]
},
"uuid": "8f510e67-2f0c-4642-9811-25c67643363c",
"value": "Search for Publicly Available Adversarial Vulnerability Analysis"
},
{
"description": "Adversaries may search public sources, including cloud storage, public-facing services, and software or data repositories, to identify machine learning artifacts.\nThese machine learning artifacts may include the software stack used to train and deploy models, training and testing data, model configurations and parameters.\nAn adversary will be particularly interested in artifacts hosted by or associated with the victim organization as they may represent what that organization uses in a production environment.\nAdversaries may identify artifact repositories via other resources associated with the victim organization (e.g. [Search Victim-Owned Websites](/techniques/AML.T0003) or [Search for Victim's Publicly Available Research Materials](/techniques/AML.T0000)).\nThese ML artifacts often provide adversaries with details of the ML task and approach.\n\nML artifacts can aid in an adversary's ability to [Create Proxy ML Model](/techniques/AML.T0005).\nIf these artifacts include pieces of the actual model in production, they can be used to directly [Craft Adversarial Data](/techniques/AML.T0043).\nAcquiring some artifacts requires registration (providing user details such email/name), AWS keys, or written requests, and may require the adversary to [Establish Accounts](/techniques/AML.T0021).\n\nArtifacts might be hosted on victim-controlled infrastructure, providing the victim with some information on who has accessed that data.\n",
"meta": {
"external_id": "AML.T0002",
"kill_chain": [
"mitre-atlas:resource-development"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0002"
]
},
"uuid": "aa17fe8d-62f8-4c4c-b7a2-6858c82dd84b",
"value": "Acquire Public ML Artifacts"
},
{
"description": "Adversaries may collect public datasets to use in their operations.\nDatasets used by the victim organization or datasets that are representative of the data used by the victim organization may be valuable to adversaries.\nDatasets can be stored in cloud storage, or on victim-owned websites.\nSome datasets require the adversary to [Establish Accounts](/techniques/AML.T0021) for access.\n\nAcquired datasets help the adversary advance their operations, stage attacks, and tailor attacks to the victim organization.\n",
"meta": {
"external_id": "AML.T0002.000",
"kill_chain": [
"mitre-atlas:resource-development"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0002.000"
]
},
"related": [
{
"dest-uuid": "aa17fe8d-62f8-4c4c-b7a2-6858c82dd84b",
"type": "subtechnique-of"
}
],
"uuid": "a3baff3d-7228-4ab7-ae00-ffe150e7ef8a",
"value": "Datasets"
},
{
"description": "Adversaries may acquire public models to use in their operations.\nAdversaries may seek models used by the victim organization or models that are representative of those used by the victim organization.\nRepresentative models may include model architectures, or pre-trained models which define the architecture as well as model parameters from training on a dataset.\nThe adversary may search public sources for common model architecture configuration file formats such as YAML or Python configuration files, and common model storage file formats such as ONNX (.onnx), HDF5 (.h5), Pickle (.pkl), PyTorch (.pth), or TensorFlow (.pb, .tflite).\n\nAcquired models are useful in advancing the adversary's operations and are frequently used to tailor attacks to the victim model.\n",
"meta": {
"external_id": "AML.T0002.001",
"kill_chain": [
"mitre-atlas:resource-development"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0002.001"
]
},
"related": [
{
"dest-uuid": "aa17fe8d-62f8-4c4c-b7a2-6858c82dd84b",
"type": "subtechnique-of"
}
],
"uuid": "c086784e-1494-4f75-a4a0-d3ad054b9428",
"value": "Models"
},
{
"description": "Adversaries may search websites owned by the victim for information that can be used during targeting.\nVictim-owned websites may contain technical details about their ML-enabled products or services.\nVictim-owned websites may contain a variety of details, including names of departments/divisions, physical locations, and data about key employees such as names, roles, and contact info.\nThese sites may also have details highlighting business operations and relationships.\n\nAdversaries may search victim-owned websites to gather actionable information.\nThis information may help adversaries tailor their attacks (e.g. [Adversarial ML Attacks](/techniques/AML.T0017.000) or [Manual Modification](/techniques/AML.T0043.003)).\nInformation from these sources may reveal opportunities for other forms of reconnaissance (e.g. [Search for Victim's Publicly Available Research Materials](/techniques/AML.T0000) or [Search for Publicly Available Adversarial Vulnerability Analysis](/techniques/AML.T0001))\n",
"meta": {
"external_id": "AML.T0003",
"kill_chain": [
"mitre-atlas:reconnaissance"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0003"
]
},
"uuid": "b23cda85-3457-406d-b043-24d2cf9e6fcf",
"value": "Search Victim-Owned Websites"
},
{
"description": "Adversaries may search open application repositories during targeting.\nExamples of these include Google Play, the iOS App store, the macOS App Store, and the Microsoft Store.\n\nAdversaries may craft search queries seeking applications that contain a ML-enabled components.\nFrequently, the next step is to [Acquire Public ML Artifacts](/techniques/AML.T0002).\n",
"meta": {
"external_id": "AML.T0004",
"kill_chain": [
"mitre-atlas:reconnaissance"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0004"
]
},
"uuid": "8c26f51a-c403-4c4d-852a-a1c56fe9e7cd",
"value": "Search Application Repositories"
},
{
"description": "Adversaries may obtain models to serve as proxies for the target model in use at the victim organization.\nProxy models are used to simulate complete access to the target model in a fully offline manner.\n\nAdversaries may train models from representative datasets, attempt to replicate models from victim inference APIs, or use available pre-trained models.\n",
"meta": {
"external_id": "AML.T0005",
"kill_chain": [
"mitre-atlas:ml-attack-staging"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0005"
]
},
"uuid": "c2bd321e-e196-4954-a8e9-c22f1793acc7",
"value": "Create Proxy ML Model"
},
{
"description": "Proxy models may be trained from ML artifacts (such as data, model architectures, and pre-trained models) that are representative of the target model gathered by the adversary.\nThis can be used to develop attacks that require higher levels of access than the adversary has available or as a means to validate pre-existing attacks without interacting with the target model.\n",
"meta": {
"external_id": "AML.T0005.000",
"kill_chain": [
"mitre-atlas:ml-attack-staging"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0005.000"
]
},
"related": [
{
"dest-uuid": "c2bd321e-e196-4954-a8e9-c22f1793acc7",
"type": "subtechnique-of"
}
],
"uuid": "75e15967-69df-4bdf-b662-979fb1e56c3e",
"value": "Train Proxy via Gathered ML Artifacts"
},
{
"description": "Adversaries may replicate a private model.\nBy repeatedly querying the victim's [AI Model Inference API Access](/techniques/AML.T0040), the adversary can collect the target model's inferences into a dataset.\nThe inferences are used as labels for training a separate model offline that will mimic the behavior and performance of the target model.\n\nA replicated model that closely mimic's the target model is a valuable resource in staging the attack.\nThe adversary can use the replicated model to [Craft Adversarial Data](/techniques/AML.T0043) for various purposes (e.g. [Evade ML Model](/techniques/AML.T0015), [Spamming ML System with Chaff Data](/techniques/AML.T0046)).\n",
"meta": {
"external_id": "AML.T0005.001",
"kill_chain": [
"mitre-atlas:ml-attack-staging"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0005.001"
]
},
"related": [
{
"dest-uuid": "c2bd321e-e196-4954-a8e9-c22f1793acc7",
"type": "subtechnique-of"
}
],
"uuid": "a3660a2d-f6e5-4f1b-9618-332cceb389c8",
"value": "Train Proxy via Replication"
},
{
"description": "Adversaries may use an off-the-shelf pre-trained model as a proxy for the victim model to aid in staging the attack.\n",
"meta": {
"external_id": "AML.T0005.002",
"kill_chain": [
"mitre-atlas:ml-attack-staging"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0005.002"
]
},
"related": [
{
"dest-uuid": "c2bd321e-e196-4954-a8e9-c22f1793acc7",
"type": "subtechnique-of"
}
],
"uuid": "ad290fa3-d87b-43d2-a547-bfa22387c132",
"value": "Use Pre-Trained Model"
},
{
"description": "An adversary may probe or scan the victim system to gather information for targeting.\nThis is distinct from other reconnaissance techniques that do not involve direct interaction with the victim system.\n",
"meta": {
"external_id": "AML.T0006",
"kill_chain": [
"mitre-atlas:reconnaissance"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0006"
]
},
"uuid": "79460396-01b4-4e91-8695-7d26df1abb95",
"value": "Active Scanning"
},
{
"description": "Adversaries may search private sources to identify machine learning artifacts that exist on the system and gather information about them.\nThese artifacts can include the software stack used to train and deploy models, training and testing data management systems, container registries, software repositories, and model zoos.\n\nThis information can be used to identify targets for further collection, exfiltration, or disruption, and to tailor and improve attacks.\n",
"meta": {
"external_id": "AML.T0007",
"kill_chain": [
"mitre-atlas:discovery"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0007"
]
},
"uuid": "6a88dccb-fb37-4f11-a5ad-42908aaee1d0",
"value": "Discover ML Artifacts"
},
{
"description": "Adversaries may buy, lease, or rent infrastructure for use throughout their operation.\nA wide variety of infrastructure exists for hosting and orchestrating adversary operations.\nInfrastructure solutions include physical or cloud servers, domains, mobile devices, and third-party web services.\nFree resources may also be used, but they are typically limited.\nInfrastructure can also include physical components such as countermeasures that degrade or disrupt AI components or sensors, including printed materials, wearables, or disguises.\n\nUse of these infrastructure solutions allows an adversary to stage, launch, and execute an operation.\nSolutions may help adversary operations blend in with traffic that is seen as normal, such as contact to third-party web services.\nDepending on the implementation, adversaries may use infrastructure that makes it difficult to physically tie back to them as well as utilize infrastructure that can be rapidly provisioned, modified, and shut down.",
"meta": {
"external_id": "AML.T0008",
"kill_chain": [
"mitre-atlas:resource-development"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0008"
]
},
"uuid": "01203e88-6c9a-4611-b278-7ba3c604a234",
"value": "Acquire Infrastructure"
},
{
"description": "Developing and staging machine learning attacks often requires expensive compute resources.\nAdversaries may need access to one or many GPUs in order to develop an attack.\nThey may try to anonymously use free resources such as Google Colaboratory, or cloud resources such as AWS, Azure, or Google Cloud as an efficient way to stand up temporary resources to conduct operations.\nMultiple workspaces may be used to avoid detection.\n",
"meta": {
"external_id": "AML.T0008.000",
"kill_chain": [
"mitre-atlas:resource-development"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0008.000"
]
},
"related": [
{
"dest-uuid": "01203e88-6c9a-4611-b278-7ba3c604a234",
"type": "subtechnique-of"
}
],
"uuid": "d65acc80-abf9-4147-a612-6536d31c5a91",
"value": "ML Development Workspaces"
},
{
"description": "Adversaries may acquire consumer hardware to conduct their attacks.\nOwning the hardware provides the adversary with complete control of the environment. These devices can be hard to trace.\n",
"meta": {
"external_id": "AML.T0008.001",
"kill_chain": [
"mitre-atlas:resource-development"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0008.001"
]
},
"related": [
{
"dest-uuid": "01203e88-6c9a-4611-b278-7ba3c604a234",
"type": "subtechnique-of"
}
],
"uuid": "c90d78ed-0f2f-41e9-b85f-1d13be7a40f6",
"value": "Consumer Hardware"
},
{
"description": "Adversaries may acquire domains that can be used during targeting. Domain names are the human readable names used to represent one or more IP addresses. They can be purchased or, in some cases, acquired for free.\n\nAdversaries may use acquired domains for a variety of purposes (see [ATT&CK](https://attack.mitre.org/techniques/T1583/001/)). Large AI datasets are often distributed as a list of URLs to individual datapoints. Adversaries may acquire expired domains that are included in these datasets and replace individual datapoints with poisoned examples ([Publish Poisoned Datasets](/techniques/AML.T0019)).",
"meta": {
"external_id": "AML.T0008.002",
"kill_chain": [
"mitre-atlas:discovery"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0008.002"
]
},
"related": [
{
"dest-uuid": "782c346d-9af5-4145-b6c6-b9cccdc2c950",
"type": "subtechnique-of"
}
],
"uuid": "dd88c52a-ec0f-42fa-8622-f992d6bcf2d5",
"value": "Domains"
},
{
"description": "Adversaries may acquire or manufacture physical countermeasures to aid or support their attack.\n\nThese components may be used to disrupt or degrade the model, such as adversarial patterns printed on stickers or T-shirts, disguises, or decoys. They may also be used to disrupt or degrade the sensors used in capturing data, such as laser pointers, light bulbs, or other tools.",
"meta": {
"external_id": "AML.T0008.003",
"kill_chain": [
"mitre-atlas:discovery"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0008.003"
]
},
"related": [
{
"dest-uuid": "782c346d-9af5-4145-b6c6-b9cccdc2c950",
"type": "subtechnique-of"
}
],
"uuid": "c2ca6b46-bcdf-45a8-b33d-3272c7a65cde",
"value": "Physical Countermeasures"
},
{
"description": "Adversaries may gain initial access to a system by compromising the unique portions of the ML supply chain.\nThis could include [Hardware](/techniques/AML.T0010.000), [Data](/techniques/AML.T0010.002) and its annotations, parts of the ML [ML Software](/techniques/AML.T0010.001) stack, or the [Model](/techniques/AML.T0010.003) itself.\nIn some instances the attacker will need secondary access to fully carry out an attack using compromised components of the supply chain.\n",
"meta": {
"external_id": "AML.T0010",
"kill_chain": [
"mitre-atlas:initial-access"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0010"
]
},
"uuid": "d2cf31e0-a550-4fe0-8fdb-8941b3ac00d9",
"value": "ML Supply Chain Compromise"
},
{
"description": "Adversaries may target AI systems by disrupting or manipulating the hardware supply chain. AI models often run on specialized hardware such as GPUs, TPUs, or embedded devices, but may also be optimized to operate on CPUs.",
"meta": {
"external_id": "AML.T0010.000",
"kill_chain": [
"mitre-atlas:initial-access"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0010.000"
]
},
"related": [
{
"dest-uuid": "d2cf31e0-a550-4fe0-8fdb-8941b3ac00d9",
"type": "subtechnique-of"
}
],
"uuid": "8dfc1d73-0de8-4daa-a8cf-83e019347395",
"value": "Hardware"
},
{
"description": "Most machine learning systems rely on a limited set of machine learning frameworks.\nAn adversary could get access to a large number of machine learning systems through a comprise of one of their supply chains.\nMany machine learning projects also rely on other open source implementations of various algorithms.\nThese can also be compromised in a targeted way to get access to specific systems.\n",
"meta": {
"external_id": "AML.T0010.001",
"kill_chain": [
"mitre-atlas:initial-access"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0010.001"
]
},
"related": [
{
"dest-uuid": "d2cf31e0-a550-4fe0-8fdb-8941b3ac00d9",
"type": "subtechnique-of"
}
],
"uuid": "d8292a1c-21e7-4b45-b110-0e05feb30a9a",
"value": "ML Software"
},
{
"description": "Data is a key vector of supply chain compromise for adversaries.\nEvery machine learning project will require some form of data.\nMany rely on large open source datasets that are publicly available.\nAn adversary could rely on compromising these sources of data.\nThe malicious data could be a result of [Poison Training Data](/techniques/AML.T0020) or include traditional malware.\n\nAn adversary can also target private datasets in the labeling phase.\nThe creation of private datasets will often require the hiring of outside labeling services.\nAn adversary can poison a dataset by modifying the labels being generated by the labeling service.\n",
"meta": {
"external_id": "AML.T0010.002",
"kill_chain": [
"mitre-atlas:initial-access"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0010.002"
]
},
"related": [
{
"dest-uuid": "d2cf31e0-a550-4fe0-8fdb-8941b3ac00d9",
"type": "subtechnique-of"
}
],
"uuid": "8d644240-ad99-4410-a7f8-3ef8f53a463e",
"value": "Data"
},
{
"description": "Machine learning systems often rely on open sourced models in various ways.\nMost commonly, the victim organization may be using these models for fine tuning.\nThese models will be downloaded from an external source and then used as the base for the model as it is tuned on a smaller, private dataset.\nLoading models often requires executing some saved code in the form of a saved model file.\nThese can be compromised with traditional malware, or through some adversarial machine learning techniques.\n",
"meta": {
"external_id": "AML.T0010.003",
"kill_chain": [
"mitre-atlas:initial-access"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0010.003"
]
},
"related": [
{
"dest-uuid": "d2cf31e0-a550-4fe0-8fdb-8941b3ac00d9",
"type": "subtechnique-of"
}
],
"uuid": "452b8fdf-8679-4013-bb38-4d16f65430bc",
"value": "Model"
},
{
"description": "An adversary may rely upon specific actions by a user in order to gain execution.\nUsers may inadvertently execute unsafe code introduced via [ML Supply Chain Compromise](/techniques/AML.T0010).\nUsers may be subjected to social engineering to get them to execute malicious code by, for example, opening a malicious document file or link.\n",
"meta": {
"external_id": "AML.T0011",
"kill_chain": [
"mitre-atlas:execution"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0011"
]
},
"uuid": "8c849dd4-5d15-45aa-b5b2-59c96a3ab939",
"value": "User Execution"
},
{
"description": "Adversaries may develop unsafe ML artifacts that when executed have a deleterious effect.\nThe adversary can use this technique to establish persistent access to systems.\nThese models may be introduced via a [ML Supply Chain Compromise](/techniques/AML.T0010).\n\nSerialization of models is a popular technique for model storage, transfer, and loading.\nHowever, this format without proper checking presents an opportunity for code execution.\n",
"meta": {
"external_id": "AML.T0011.000",
"kill_chain": [
"mitre-atlas:execution"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0011.000"
]
},
"related": [
{
"dest-uuid": "8c849dd4-5d15-45aa-b5b2-59c96a3ab939",
"type": "subtechnique-of"
}
],
"uuid": "be6ef5c5-1ecb-486d-9743-42085bd2c256",
"value": "Unsafe ML Artifacts"
},
{
"description": "Adversaries may develop malicious software packages that when imported by a user have a deleterious effect.\nMalicious packages may behave as expected to the user. They may be introduced via [ML Supply Chain Compromise](/techniques/AML.T0010). They may not present as obviously malicious to the user and may appear to be useful for an AI-related task.",
"meta": {
"external_id": "AML.T0011.001",
"kill_chain": [
"mitre-atlas:impact"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0011.001"
]
},
"related": [
{
"dest-uuid": "89731d07-679e-4da3-8f70-aba314068a89",
"type": "subtechnique-of"
}
],
"uuid": "7d76070d-2124-4ee6-913d-6015a697eaf6",
"value": "Malicious Package"
},
{
"description": "Adversaries may obtain and abuse credentials of existing accounts as a means of gaining Initial Access.\nCredentials may take the form of usernames and passwords of individual user accounts or API keys that provide access to various ML resources and services.\n\nCompromised credentials may provide access to additional ML artifacts and allow the adversary to perform [Discover ML Artifacts](/techniques/AML.T0007).\nCompromised credentials may also grant an adversary increased privileges such as write access to ML artifacts used during development or production.\n",
"meta": {
"external_id": "AML.T0012",
"kill_chain": [
"mitre-atlas:initial-access"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0012"
]
},
"uuid": "1b047901-cd87-4d1d-aa88-d7335855b65f",
"value": "Valid Accounts"
},
{
"description": "Adversaries may discover the ontology of a machine learning model's output space, for example, the types of objects a model can detect.\nThe adversary may discovery the ontology by repeated queries to the model, forcing it to enumerate its output space.\nOr the ontology may be discovered in a configuration file or in documentation about the model.\n\nThe model ontology helps the adversary understand how the model is being used by the victim.\nIt is useful to the adversary in creating targeted attacks.\n",
"meta": {
"external_id": "AML.T0013",
"kill_chain": [
"mitre-atlas:discovery"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0013"
]
},
"uuid": "943303ef-846b-49d6-b53f-b0b9341ac1ca",
"value": "Discover ML Model Ontology"
},
{
"description": "Adversaries may discover the general family of model.\nGeneral information about the model may be revealed in documentation, or the adversary may use carefully constructed examples and analyze the model's responses to categorize it.\n\nKnowledge of the model family can help the adversary identify means of attacking the model and help tailor the attack.\n",
"meta": {
"external_id": "AML.T0014",
"kill_chain": [
"mitre-atlas:discovery"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0014"
]
},
"uuid": "c552f0b5-2e2c-4f8f-badc-0876ecca7255",
"value": "Discover ML Model Family"
},
{
"description": "Adversaries can [Craft Adversarial Data](/techniques/AML.T0043) that prevent a machine learning model from correctly identifying the contents of the data.\nThis technique can be used to evade a downstream task where machine learning is utilized.\nThe adversary may evade machine learning based virus/malware detection, or network scanning towards the goal of a traditional cyber attack.\n",
"meta": {
"external_id": "AML.T0015",
"kill_chain": [
"mitre-atlas:initial-access",
"mitre-atlas:defense-evasion",
"mitre-atlas:impact"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0015"
]
},
"uuid": "071df654-813a-4708-85dc-f715f785d37f",
"value": "Evade ML Model"
},
{
"description": "Adversaries may search for and obtain software capabilities for use in their operations.\nCapabilities may be specific to ML-based attacks [Adversarial ML Attack Implementations](/techniques/AML.T0016.000) or generic software tools repurposed for malicious intent ([Software Tools](/techniques/AML.T0016.001)). In both instances, an adversary may modify or customize the capability to aid in targeting a particular ML system.",
"meta": {
"external_id": "AML.T0016",
"kill_chain": [
"mitre-atlas:resource-development"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0016"
]
},
"uuid": "db2b3112-a99b-45a0-be10-c69157b616f0",
"value": "Obtain Capabilities"
},
{
"description": "Adversaries may search for existing open source implementations of machine learning attacks. The research community often publishes their code for reproducibility and to further future research. Libraries intended for research purposes, such as CleverHans, the Adversarial Robustness Toolbox, and FoolBox, can be weaponized by an adversary. Adversaries may also obtain and use tools that were not originally designed for adversarial ML attacks as part of their attack.",
"meta": {
"external_id": "AML.T0016.000",
"kill_chain": [
"mitre-atlas:resource-development"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0016.000"
]
},
"related": [
{
"dest-uuid": "db2b3112-a99b-45a0-be10-c69157b616f0",
"type": "subtechnique-of"
}
],
"uuid": "3250c828-3852-4efb-857d-f7ca5c1a1ebc",
"value": "Adversarial ML Attack Implementations"
},
{
"description": "Adversaries may search for and obtain software tools to support their operations.\nSoftware designed for legitimate use may be repurposed by an adversary for malicious intent.\nAn adversary may modify or customize software tools to achieve their purpose.\nSoftware tools used to support attacks on ML systems are not necessarily ML-based themselves.\n",
"meta": {
"external_id": "AML.T0016.001",
"kill_chain": [
"mitre-atlas:resource-development"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0016.001"
]
},
"related": [
{
"dest-uuid": "db2b3112-a99b-45a0-be10-c69157b616f0",
"type": "subtechnique-of"
}
],
"uuid": "d18afb87-0de2-43dc-ab6a-eb914a7dbae7",
"value": "Software Tools"
},
{
"description": "Adversaries may develop their own capabilities to support operations. This process encompasses identifying requirements, building solutions, and deploying capabilities. Capabilities used to support attacks on ML systems are not necessarily ML-based themselves. Examples include setting up websites with adversarial information or creating Jupyter notebooks with obfuscated exfiltration code.",
"meta": {
"external_id": "AML.T0017",
"kill_chain": [
"mitre-atlas:resource-development"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0017"
]
},
"uuid": "c9153697-7d92-43aa-a16e-38436beff79d",
"value": "Develop Capabilities"
},
{
"description": "Adversaries may develop their own adversarial attacks.\nThey may leverage existing libraries as a starting point ([Adversarial ML Attack Implementations](/techniques/AML.T0016.000)).\nThey may implement ideas described in public research papers or develop custom made attacks for the victim model.\n",
"meta": {
"external_id": "AML.T0017.000",
"kill_chain": [
"mitre-atlas:resource-development"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0017.000"
]
},
"related": [
{
"dest-uuid": "c9153697-7d92-43aa-a16e-38436beff79d",
"type": "subtechnique-of"
}
],
"uuid": "4f0f548a-5f39-4dc7-b5e6-c84d824e39bd",
"value": "Adversarial ML Attacks"
},
{
"description": "Adversaries may introduce a backdoor into a ML model.\nA backdoored model operates performs as expected under typical conditions, but will produce the adversary's desired output when a trigger is introduced to the input data.\nA backdoored model provides the adversary with a persistent artifact on the victim system.\nThe embedded vulnerability is typically activated at a later time by data samples with an [Insert Backdoor Trigger](/techniques/AML.T0043.004)\n",
"meta": {
"external_id": "AML.T0018",
"kill_chain": [
"mitre-atlas:persistence",
"mitre-atlas:ml-attack-staging"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0018"
]
},
"uuid": "c704a49c-abf0-4258-9919-a862b1865469",
"value": "Backdoor ML Model"
},
{
"description": "Adversaries may introduce a backdoor by training the model poisoned data, or by interfering with its training process.\nThe model learns to associate an adversary-defined trigger with the adversary's desired output.\n",
"meta": {
"external_id": "AML.T0018.000",
"kill_chain": [
"mitre-atlas:persistence",
"mitre-atlas:ml-attack-staging"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0018.000"
]
},
"related": [
{
"dest-uuid": "c704a49c-abf0-4258-9919-a862b1865469",
"type": "subtechnique-of"
}
],
"uuid": "e0eb2b64-aebd-4412-80f3-b71d7805a65f",
"value": "Poison ML Model"
},
{
"description": "Adversaries may introduce a backdoor into a model by injecting a payload into the model file.\nThe payload detects the presence of the trigger and bypasses the model, instead producing the adversary's desired output.\n",
"meta": {
"external_id": "AML.T0018.001",
"kill_chain": [
"mitre-atlas:persistence",
"mitre-atlas:ml-attack-staging"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0018.001"
]
},
"related": [
{
"dest-uuid": "c704a49c-abf0-4258-9919-a862b1865469",
"type": "subtechnique-of"
}
],
"uuid": "a50f02df-1130-4945-94bb-7857952da585",
"value": "Inject Payload"
},
{
"description": "Adversaries may [Poison Training Data](/techniques/AML.T0020) and publish it to a public location.\nThe poisoned dataset may be a novel dataset or a poisoned variant of an existing open source dataset.\nThis data may be introduced to a victim system via [ML Supply Chain Compromise](/techniques/AML.T0010).\n",
"meta": {
"external_id": "AML.T0019",
"kill_chain": [
"mitre-atlas:resource-development"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0019"
]
},
"uuid": "f4fc2abd-71a4-401a-a742-18fc5aeb4bc3",
"value": "Publish Poisoned Datasets"
},
{
"description": "Adversaries may attempt to poison datasets used by a ML model by modifying the underlying data or its labels.\nThis allows the adversary to embed vulnerabilities in ML models trained on the data that may not be easily detectable.\nData poisoning attacks may or may not require modifying the labels.\nThe embedded vulnerability is activated at a later time by data samples with an [Insert Backdoor Trigger](/techniques/AML.T0043.004)\n\nPoisoned data can be introduced via [ML Supply Chain Compromise](/techniques/AML.T0010) or the data may be poisoned after the adversary gains [Initial Access](/tactics/AML.TA0004) to the system.\n",
"meta": {
"external_id": "AML.T0020",
"kill_chain": [
"mitre-atlas:resource-development",
"mitre-atlas:persistence"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0020"
]
},
"uuid": "0ec538ca-589b-4e42-bcaa-06097a0d679f",
"value": "Poison Training Data"
},
{
"description": "Adversaries may create accounts with various services for use in targeting, to gain access to resources needed in [ML Attack Staging](/tactics/AML.TA0001), or for victim impersonation.\n",
"meta": {
"external_id": "AML.T0021",
"kill_chain": [
"mitre-atlas:resource-development"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0021"
]
},
"uuid": "aaa79096-814f-4fb0-a553-1701b2765317",
"value": "Establish Accounts"
},
{
"description": "Adversaries may exfiltrate private information via [AI Model Inference API Access](/techniques/AML.T0040).\nML Models have been shown leak private information about their training data (e.g. [Infer Training Data Membership](/techniques/AML.T0024.000), [Invert ML Model](/techniques/AML.T0024.001)).\nThe model itself may also be extracted ([Extract ML Model](/techniques/AML.T0024.002)) for the purposes of [ML Intellectual Property Theft](/techniques/AML.T0048.004).\n\nExfiltration of information relating to private training data raises privacy concerns.\nPrivate training data may include personally identifiable information, or other protected data.\n",
"meta": {
"external_id": "AML.T0024",
"kill_chain": [
"mitre-atlas:exfiltration"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0024"
]
},
"uuid": "b07d147f-51c8-4eb6-9a05-09c86762a9c1",
"value": "Exfiltration via ML Inference API"
},
{
"description": "Adversaries may infer the membership of a data sample in its training set, which raises privacy concerns.\nSome strategies make use of a shadow model that could be obtained via [Train Proxy via Replication](/techniques/AML.T0005.001), others use statistics of model prediction scores.\n\nThis can cause the victim model to leak private information, such as PII of those in the training set or other forms of protected IP.\n",
"meta": {
"external_id": "AML.T0024.000",
"kill_chain": [
"mitre-atlas:exfiltration"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0024.000"
]
},
"related": [
{
"dest-uuid": "b07d147f-51c8-4eb6-9a05-09c86762a9c1",
"type": "subtechnique-of"
}
],
"uuid": "86b5f486-afb8-4aa9-991f-0e24d5737f0c",
"value": "Infer Training Data Membership"
},
{
"description": "Machine learning models' training data could be reconstructed by exploiting the confidence scores that are available via an inference API.\nBy querying the inference API strategically, adversaries can back out potentially private information embedded within the training data.\nThis could lead to privacy violations if the attacker can reconstruct the data of sensitive features used in the algorithm.\n",
"meta": {
"external_id": "AML.T0024.001",
"kill_chain": [
"mitre-atlas:exfiltration"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0024.001"
]
},
"related": [
{
"dest-uuid": "b07d147f-51c8-4eb6-9a05-09c86762a9c1",
"type": "subtechnique-of"
}
],
"uuid": "e19c6f8a-f1e2-46cc-9387-03a3092f01ed",
"value": "Invert ML Model"
},
{
"description": "Adversaries may extract a functional copy of a private model.\nBy repeatedly querying the victim's [AI Model Inference API Access](/techniques/AML.T0040), the adversary can collect the target model's inferences into a dataset.\nThe inferences are used as labels for training a separate model offline that will mimic the behavior and performance of the target model.\n\nAdversaries may extract the model to avoid paying per query in a machine learning as a service setting.\nModel extraction is used for [ML Intellectual Property Theft](/techniques/AML.T0048.004).\n",
"meta": {
"external_id": "AML.T0024.002",
"kill_chain": [
"mitre-atlas:exfiltration"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [
"https://atlas.mitre.org/techniques/AML.T0024.002"
]
},
"related": [
{
"dest-uuid": "b07d147f-51c8-4eb6-9a05-09c86762a9c1",
"type": "subtechnique-of"
}
],
"uuid": "f78e0ac3-6d72-42ed-b20a-e10d8c752cf6",
"value": "Extract ML Model"
},
{
"description": "Adversaries may exfiltrate ML artifacts or other information relevant to their goals via traditional cyber means.\n\nSee the ATT&CK [Exfiltration](https://attack.mitre.org/tactics/TA0010/) tactic for more information.\n",
"meta": {
"external_id": "AML.T0025",
"kill_chain": [
"mitre-atlas:exfiltration"
],
"mitre_platforms": [
"ATLAS"
],
"refs": [