diff --git a/README.asciidoc b/README.asciidoc
index bac6d0ed71752..124e7c23d8373 100644
--- a/README.asciidoc
+++ b/README.asciidoc
@@ -4,7 +4,7 @@ Elasticsearch is a distributed search and analytics engine, scalable data store
 
 Use cases enabled by Elasticsearch include:
 
-* https://www.elastic.co/search-labs/blog/articles/retrieval-augmented-generation-rag[Retrieval Augmented Generation (RAG)] 
+* https://www.elastic.co/search-labs/blog/articles/retrieval-augmented-generation-rag[Retrieval Augmented Generation (RAG)]
 * https://www.elastic.co/search-labs/blog/categories/vector-search[Vector search]
 * Full-text search
 * Logs
@@ -17,7 +17,7 @@ Use cases enabled by Elasticsearch include:
 To learn more about Elasticsearch's features and capabilities, see our
 https://www.elastic.co/products/elasticsearch[product page].
 
-To access information on https://www.elastic.co/search-labs/blog/categories/ml-research[machine learning innovations] and the latest https://www.elastic.co/search-labs/blog/categories/lucene[Lucene contributions from Elastic], more information can be found in https://www.elastic.co/search-labs[Search Labs]. 
+To learn about https://www.elastic.co/search-labs/blog/categories/ml-research[machine learning innovations] and the latest https://www.elastic.co/search-labs/blog/categories/lucene[Lucene contributions from Elastic], see https://www.elastic.co/search-labs[Search Labs].
 
 [[get-started]]
 == Get started
@@ -27,20 +27,20 @@ https://www.elastic.co/cloud/as-a-service[Elasticsearch Service on Elastic
 Cloud].
 
 If you prefer to install and manage Elasticsearch yourself, you can download
-the latest version from 
+the latest version from
 https://www.elastic.co/downloads/elasticsearch[elastic.co/downloads/elasticsearch].
 
 === Run Elasticsearch locally
 
-//// 
+////
 IMPORTANT: This content is replicated in the Elasticsearch repo. See `run-elasticsearch-locally.asciidoc`.
 Ensure both files are in sync.
 
 https://github.com/elastic/start-local is the source of truth.
-//// 
+////
 
 [WARNING]
-==== 
+====
 DO NOT USE THESE INSTRUCTIONS FOR PRODUCTION DEPLOYMENTS.
 
 This setup is intended for local development and testing only.
@@ -93,7 +93,7 @@ Use this key to connect to Elasticsearch with a https://www.elastic.co/guide/en/
 From the `elastic-start-local` folder, check the connection to Elasticsearch using `curl`:
 
 [source,sh]
----- 
+----
 source .env
 curl $ES_LOCAL_URL -H "Authorization: ApiKey ${ES_LOCAL_API_KEY}"
 ----
@@ -101,12 +101,12 @@ curl $ES_LOCAL_URL -H "Authorization: ApiKey ${ES_LOCAL_API_KEY}"
 
 === Send requests to Elasticsearch
 
-You send data and other requests to Elasticsearch through REST APIs. 
-You can interact with Elasticsearch using any client that sends HTTP requests, 
+You send data and other requests to Elasticsearch through REST APIs.
+You can interact with Elasticsearch using any client that sends HTTP requests,
 such as the https://www.elastic.co/guide/en/elasticsearch/client/index.html[Elasticsearch
-language clients] and https://curl.se[curl]. 
+language clients] and https://curl.se[curl].
 
-==== Using curl 
+==== Using curl
 
 Here's an example curl command to create a new Elasticsearch index, using basic auth:
@@ -149,19 +149,19 @@ print(client.info())
 
 ==== Using the Dev Tools Console
 
-Kibana's developer console provides an easy way to experiment and test requests. 
+Kibana's developer console provides an easy way to experiment and test requests.
 To access the console, open Kibana, then go to **Management** > **Dev Tools**.
 
 **Add data**
 
-You index data into Elasticsearch by sending JSON objects (documents) through the REST APIs. 
-Whether you have structured or unstructured text, numerical data, or geospatial data, 
-Elasticsearch efficiently stores and indexes it in a way that supports fast searches. 
+You index data into Elasticsearch by sending JSON objects (documents) through the REST APIs.
+Whether you have structured or unstructured text, numerical data, or geospatial data,
+Elasticsearch efficiently stores and indexes it in a way that supports fast searches.
 
 For timestamped data such as logs and metrics, you typically add documents to a data stream made up of multiple auto-generated backing indices.
 
-To add a single document to an index, submit an HTTP post request that targets the index. 
+To add a single document to an index, submit an HTTP POST request that targets the index.
 
 ----
 POST /customer/_doc/1
@@ -171,11 +171,11 @@ POST /customer/_doc/1
 }
 ----
 
-This request automatically creates the `customer` index if it doesn't exist, 
-adds a new document that has an ID of 1, and 
+This request automatically creates the `customer` index if it doesn't exist,
+adds a new document that has an ID of 1, and
 stores and indexes the `firstname` and `lastname` fields.
 
-The new document is available immediately from any node in the cluster. 
+The new document is available immediately from any node in the cluster.
 You can retrieve it with a GET request that specifies its document ID:
 
 ----
@@ -183,7 +183,7 @@ GET /customer/_doc/1
 ----
 
 To add multiple documents in one request, use the `_bulk` API.
-Bulk data must be newline-delimited JSON (NDJSON). 
+Bulk data must be newline-delimited JSON (NDJSON).
 Each line must end in a newline character (`\n`), including the last line.
 
 ----
@@ -200,15 +200,15 @@ PUT customer/_bulk
 
 **Search**
 
-Indexed documents are available for search in near real-time. 
-The following search matches all customers with a first name of _Jennifer_ 
+Indexed documents are available for search in near real-time.
+The following search matches all customers with a first name of _Jennifer_
 in the `customer` index.
 
 ----
 GET customer/_search
 {
   "query" : {
-    "match" : { "firstname": "Jennifer" } 
+    "match" : { "firstname": "Jennifer" }
   }
 }
 ----
@@ -223,9 +223,9 @@ data streams, or index aliases.
 . Go to **Management > Stack Management > Kibana > Data Views**.
 . Select **Create data view**.
-. Enter a name for the data view and a pattern that matches one or more indices, 
-such as _customer_. 
-. Select **Save data view to Kibana**. 
+. Enter a name for the data view and a pattern that matches one or more indices,
+such as _customer_.
+. Select **Save data view to Kibana**.
 
 To start exploring, go to **Analytics > Discover**.
 
@@ -256,7 +256,7 @@ To build a distribution for another platform, run the related command:
 To build distributions for all supported platforms, run:
 
 ----
-./gradlew assemble
+./gradlew assemblePublicArtifacts
 ----
 
 Distributions are output to `distribution/archives`.
@@ -281,7 +281,7 @@ The https://github.com/elastic/elasticsearch-labs[`elasticsearch-labs`] repo con
 [[contribute]]
 == Contribute
 
-For contribution guidelines, see xref:CONTRIBUTING.md[CONTRIBUTING]. 
+For contribution guidelines, see xref:CONTRIBUTING.md[CONTRIBUTING].
 
 [[questions]]
 == Questions? Problems? Suggestions?
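For readers who want to try the Console snippets from the README hunks above without Kibana, here is an illustrative `curl` translation. It is a minimal sketch, not part of the patch: it assumes the start-local setup shown earlier in the README and reuses the `ES_LOCAL_URL` and `ES_LOCAL_API_KEY` variables from its `.env` file.

[source,sh]
----
# Load the connection variables written by start-local (assumed setup).
source .env

# Index one document, mirroring the POST /customer/_doc/1 Console example.
curl -X POST "$ES_LOCAL_URL/customer/_doc/1" \
  -H "Authorization: ApiKey ${ES_LOCAL_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{ "firstname": "Jennifer", "lastname": "Walters" }'

# Run the match query from the Search example against the customer index.
curl -X GET "$ES_LOCAL_URL/customer/_search" \
  -H "Authorization: ApiKey ${ES_LOCAL_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{ "query": { "match": { "firstname": "Jennifer" } } }'
----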
diff --git a/build.gradle b/build.gradle
index 2ef0511b2be88..98a7f7ed08b21 100644
--- a/build.gradle
+++ b/build.gradle
@@ -13,14 +13,13 @@ import com.avast.gradle.dockercompose.tasks.ComposePull
 import com.fasterxml.jackson.databind.JsonNode
 import com.fasterxml.jackson.databind.ObjectMapper
+import org.elasticsearch.gradle.DistributionDownloadPlugin
 import org.elasticsearch.gradle.Version
 import org.elasticsearch.gradle.internal.BaseInternalPluginBuildPlugin
 import org.elasticsearch.gradle.internal.ResolveAllDependencies
 import org.elasticsearch.gradle.internal.info.BuildParams
 import org.elasticsearch.gradle.util.GradleUtils
 import org.gradle.plugins.ide.eclipse.model.AccessRule
-import org.gradle.plugins.ide.eclipse.model.ProjectDependency
-import org.elasticsearch.gradle.DistributionDownloadPlugin
 
 import java.nio.file.Files
@@ -89,7 +88,7 @@ class ListExpansion {
 
 // Filters out intermediate patch releases to reduce the load of CI testing
 def filterIntermediatePatches = { List<Version> versions ->
-  versions.groupBy {"${it.major}.${it.minor}"}.values().collect {it.max()}
+  versions.groupBy { "${it.major}.${it.minor}" }.values().collect { it.max() }
 }
 
 tasks.register("updateCIBwcVersions") {
@@ -101,7 +100,10 @@ tasks.register("updateCIBwcVersions") {
     }
   }
 
-  def writeBuildkitePipeline = { String outputFilePath, String pipelineTemplatePath, List<ListExpansion> listExpansions, List<StepExpansion> stepExpansions = [] ->
+  def writeBuildkitePipeline = { String outputFilePath,
+                                 String pipelineTemplatePath,
+                                 List<ListExpansion> listExpansions,
+                                 List<StepExpansion> stepExpansions = [] ->
     def outputFile = file(outputFilePath)
     def pipelineTemplate = file(pipelineTemplatePath)
@@ -132,7 +134,12 @@ tasks.register("updateCIBwcVersions") {
   // Writes a Buildkite pipeline from a template, and replaces $BWC_STEPS with a list of steps, one for each version
   // Useful when you need to configure more versions than are allowed in a matrix configuration
   def expandBwcSteps = { String outputFilePath, String pipelineTemplatePath, String stepTemplatePath, List<Version> versions ->
-    writeBuildkitePipeline(outputFilePath, pipelineTemplatePath, [], [new StepExpansion(templatePath: stepTemplatePath, versions: versions, variable: "BWC_STEPS")])
+    writeBuildkitePipeline(
+      outputFilePath,
+      pipelineTemplatePath,
+      [],
+      [new StepExpansion(templatePath: stepTemplatePath, versions: versions, variable: "BWC_STEPS")]
+    )
   }
 
   doLast {
@@ -150,7 +157,11 @@ tasks.register("updateCIBwcVersions") {
         new ListExpansion(versions: filterIntermediatePatches(BuildParams.bwcVersions.unreleasedIndexCompatible), variable: "BWC_LIST"),
       ],
       [
-        new StepExpansion(templatePath: ".buildkite/pipelines/periodic.bwc.template.yml", versions: filterIntermediatePatches(BuildParams.bwcVersions.indexCompatible), variable: "BWC_STEPS"),
+        new StepExpansion(
+          templatePath: ".buildkite/pipelines/periodic.bwc.template.yml",
+          versions: filterIntermediatePatches(BuildParams.bwcVersions.indexCompatible),
+          variable: "BWC_STEPS"
+        ),
       ]
     )
@@ -302,7 +313,7 @@ allprojects {
   if (project.path.startsWith(":x-pack:")) {
     if (project.path.contains("security") || project.path.contains(":ml")) {
       tasks.register('checkPart4') { dependsOn 'check' }
-    } else if (project.path == ":x-pack:plugin" || project.path.contains("ql") || project.path.contains("smoke-test")) { 
+    } else if (project.path == ":x-pack:plugin" || project.path.contains("ql") || project.path.contains("smoke-test")) {
       tasks.register('checkPart3') { dependsOn 'check' }
     } else if (project.path.contains("multi-node")) {
       tasks.register('checkPart5') { dependsOn 'check' }
@@ -453,6 +464,35 @@ tasks.register("buildReleaseArtifacts").configure {
     .findAll { it != null }
 }
 
+/**
+ * This task allows third-party open source users to build all artifacts
+ * that are built from publicly available sources.
+ * We previously recommended running `./gradlew assemble`, but that leads to:
+ * - building artifacts that are not required by a user (e.g. internally used test fixtures)
+ * - failures when building docker images with non-public base images (e.g. wolfi)
+ */
+tasks.register("assemblePublicArtifacts").configure {
+  group = 'build'
+  description = 'Builds all artifacts, including public docker images, that are built from publicly available sources'
+
+  dependsOn allprojects.findAll {
+    it.path.startsWith(':distribution:docker') == false
+      && it.path.startsWith(':ml-cpp') == false
+      && it.path.startsWith(':distribution:bwc') == false
+      && it.path.startsWith(':test:fixture') == false
+  }
+    .collect { GradleUtils.findByName(it.tasks, 'assemble') }
+    .findAll { it != null }
+
+  dependsOn ":distribution:docker:buildDockerImage"
+  dependsOn ":distribution:docker:buildAarch64DockerImage"
+  dependsOn ":distribution:docker:buildAarch64IronBankDockerImage"
+  dependsOn ":distribution:docker:buildIronBankDockerImage"
+  dependsOn ":distribution:docker:buildAarch64UbiDockerImage"
+  dependsOn ":distribution:docker:buildUbiDockerImage"
+
+}
+
 tasks.register("spotlessApply").configure {
   dependsOn gradle.includedBuild('build-tools').task(':spotlessApply')
   dependsOn gradle.includedBuild('build-tools').task(':reaper:spotlessApply')
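As a usage sketch for the new `assemblePublicArtifacts` task wired up above (assuming the repository root as the working directory), the commands below replace the old `./gradlew assemble` advice from the README hunk; `--dry-run` is a standard Gradle flag that prints the task graph without executing it.

[source,sh]
----
# Preview the tasks that assemblePublicArtifacts would schedule, without building.
./gradlew assemblePublicArtifacts --dry-run

# Build every artifact that can be produced from public sources;
# per the README, distributions are output to distribution/archives.
./gradlew assemblePublicArtifacts
----

Note that the task aggregates the `assemble` tasks of all projects outside the excluded paths and then pins an explicit list of docker image variants, leaving out images whose base images are not publicly available (e.g. wolfi).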