Commit

fix Issue sous-chefs#100 Don't seem to be able to change the version
Benedikt Weiner committed Nov 26, 2013
1 parent b6b4f77 commit e2f7989
Showing 3 changed files with 46 additions and 3 deletions.
2 changes: 0 additions & 2 deletions attributes/default.rb
@@ -16,8 +16,6 @@
  default.elasticsearch[:version] = "0.90.5"
  default.elasticsearch[:host] = "http://download.elasticsearch.org"
  default.elasticsearch[:repository] = "elasticsearch/elasticsearch"
- default.elasticsearch[:filename] = "elasticsearch-#{node.elasticsearch[:version]}.tar.gz"
- default.elasticsearch[:download_url] = [node.elasticsearch[:host], node.elasticsearch[:repository], node.elasticsearch[:filename]].join('/')

# === NAMING
#
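The two removed attribute lines computed the tarball name and download URL at the moment `attributes/default.rb` was loaded, so a later override of `node.elasticsearch[:version]` (from a role or wrapper cookbook) never reached them — presumably the cause of issue #100. A minimal, self-contained Ruby sketch of the intended behavior, deriving the URL from the (possibly overridden) version at the point of use; a plain hash stands in for the Chef node object:

```ruby
# Sketch only: a plain Ruby hash stands in for the Chef node object.
node = {
  elasticsearch: {
    version:    "0.90.7", # e.g. overridden by a role or wrapper cookbook
    host:       "http://download.elasticsearch.org",
    repository: "elasticsearch/elasticsearch"
  }
}

# Derive the filename and URL from the current (possibly overridden)
# version at use time, instead of freezing them at attribute load:
filename     = "elasticsearch-#{node[:elasticsearch][:version]}.tar.gz"
download_url = [node[:elasticsearch][:host],
                node[:elasticsearch][:repository],
                filename].join('/')

puts download_url
# http://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.90.7.tar.gz
```

With the computation moved out of the attribute file, the overridden version is reflected in the URL.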
43 changes: 43 additions & 0 deletions metadata.json
@@ -0,0 +1,43 @@
{
"name": "elasticsearch",
"description": "Installs and configures elasticsearch",
"long_description": "Description\n-----------\n\nThis _Chef_ cookbook installs and configures the [_Elasticsearch_](http://www.elasticsearch.org)\nsearch engine on a Linux compatible operating system.\n\nIt requires a working _Java_ installation on the target node; add your preferred `java` cookbook to the node `run_list`.\n\nThe cookbook downloads the _elasticsearch_ tarball (via the [`ark`](http://github.com/bryanwb/chef-ark) provider),\nunpacks and moves it to the directory you have specified in the node configuration (`/usr/local/elasticsearch` by default).\n\nIt installs a service which enables you to start, stop, restart and check status of the _elasticsearch_ process.\n\nIf you include the `elasticsearch::monit` recipe, it will create a configuration file for _Monit_,\nwhich will check whether _elasticsearch_ is running, reachable by HTTP and the cluster is in the \"green\" state.\n(Assumed you have included a compatible [\"monit\" cookbook](http://community.opscode.com/cookbooks/monit)\nin your run list first.)\n\nIf you include the `elasticsearch::aws` recipe, the\n[AWS Cloud Plugin](http://github.com/elasticsearch/elasticsearch-cloud-aws) will be installed on the node,\nallowing you to use the _Amazon_ AWS-related features (node auto-discovery, etc).\nSet your AWS credentials either in the \"elasticsearch/aws\" data bag, or directly in the role/node configuration.\nInstead of using AWS access tokens, you can create the instance with a\n[IAM role](http://aws.amazon.com/iam/faqs/#How_do_i_get_started_with_IAM_roles_for_EC2_instances).\n\nIf you include the `elasticsearch::data` and `elasticsearch::ebs` recipes, an EBS volume will be automatically\ncreated, formatted and mounted so you can use it as a local gateway for _Elasticsearch_.\nWhen the EBS configuration contains a `snapshot_id` value, it will be created with data from the corresponding snapshot. 
See the `attributes/data` file for more information.\n\nIf you include the `elasticsearch::proxy` recipe, it will configure the _Nginx_ server as\na reverse proxy for _Elasticsearch_, so you may access it remotely with HTTP authentication.\nSet the credentials either in a \"elasticsearch/users\" data bag, or directly in the role/node configuration.\n\nIf you include the `elasticsearch::search_discovery` recipe, it will configure the cluster to use Chef search\nfor discovering Elasticsearch nodes. This allows the cluster to operate without multicast, without AWS, and\nwithout having to manually manage nodes.\n\n\nUsage\n-----\n\n### Chef Solo\n\nYou have to configure your node in a `node.json` file, upload the configuration file, this cookbook and any dependent cookbooks and all data bags, role, etc files to the server, and run `chef-solo`.\n\nA basic node configuration can look like this:\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~bash\necho '{\n \"name\": \"elasticsearch-cookbook-test\",\n \"run_list\": [\n \"recipe[java]\",\n \"recipe[elasticsearch]\"\n ],\n\n \"java\": {\n \"install_flavor\": \"openjdk\",\n \"jdk_version\": \"7\"\n },\n\n \"elasticsearch\": {\n \"cluster_name\" : \"elasticsearch_test_chef\",\n \"bootstrap.mlockall\" : false\n }\n}\n' > node.json\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nLet's upload it to our server (assuming Ubuntu on Amazon EC2):\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~bash\nscp -o User=ubuntu \\\n -o IdentityFile=/path/to/your/key.pem \\\n -o StrictHostKeyChecking=no \\\n -o UserKnownHostsFile=/dev/null \\\n node.json ec2-12-45-67-89.compute-1.amazonaws.com:\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nLet's download the cookbook on the target system:\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~bash\nssh -i /path/to/your/key.pem 
ec2-12-45-67-89.compute-1.amazonaws.com \\\n \"curl -# -L -k -o cookbook-elasticsearch-master.tar.gz https://github.com/elasticsearch/cookbook-elasticsearch/archive/master.tar.gz\"\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nFinally, let's install the latest Chef, install dependent cookbooks, and run `chef-solo`:\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~bash\n\nssh -t -i /path/to/your/key.pem ec2-12-45-67-89.compute-1.amazonaws.com <<END\n sudo apt-get update\n sudo apt-get install build-essential curl git vim -y\n curl -# -L http://www.opscode.com/chef/install.sh | sudo bash -s --\n sudo mkdir -p /etc/chef/; sudo mkdir -p /var/chef/cookbooks/elasticsearch\n sudo tar --strip 1 -C /var/chef/cookbooks/elasticsearch -xf cookbook-elasticsearch-master.tar.gz\n sudo apt-get install bison zlib1g-dev libopenssl-ruby1.9.1 libssl-dev libyaml-0-2 libxslt-dev libxml2-dev libreadline-gplv2-dev libncurses5-dev file ruby1.9.1-dev git --yes --fix-missing\n sudo /opt/chef/embedded/bin/gem install berkshelf --version 1.4.5 --no-rdoc --no-ri\n sudo /opt/chef/embedded/bin/berks install --path=/var/chef/cookbooks/ --berksfile=/var/chef/cookbooks/elasticsearch/Berksfile\n sudo chef-solo -N elasticsearch-test-chef-solo -j node.json\nEND\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nVerify the installation with:\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~bash\nssh -i /path/to/your/key.pem ec2-12-45-67-89.compute-1.amazonaws.com \"curl localhost:9200\"\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nFor a full and thorough walkthrough, please read the tutorial on\n[deploying elasticsearch with _Chef Solo_](http://www.elasticsearch.org/tutorials/deploying-elasticsearch-with-chef-solo/)\nwhich uses this cookbook as an example.\n\nThis cookbook comes with a Rake task which allows you to create, bootstrap
and configure an Amazon EC2 with a single command. Save your node configuration into `tmp/node.json` file and run:\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~bash\ntime \\\n AWS_SSH_KEY_ID=your-key-id \\\n AWS_ACCESS_KEY=your-access-keys \\\n AWS_SECRET_ACCESS_KEY=your-secret-key\\\n SSH_KEY=/path/to/your/key.pem \\\n NAME=elasticsearch-test-chef-solo-with-rake \\\n rake create\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nRun `rake -T` for more information about other available tasks, see the `Rakefile`\nfor all available options and configurations.\n\n### Chef Server\n\nFor _Chef Server_ based deployment, include the recipes you want to be executed in a\ndedicated `elasticsearch` role, or in the node `run_list`.\n\nThen, upload the cookbook to the _Chef_ server:\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~bash\n knife cookbook upload elasticsearch\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nTo enable the _Amazon_ AWS related features, include the `elasticsearch::aws` recipe.\nYou will need to configure the AWS credentials.\n\nYou may do that in the node configuration (with `knife node edit MYNODE` or in the _Chef Server_ console),\nin a role with `override_attributes` declaration, but it is arguably most convenient to store\nthe information in an \"elasticsearch\" _data bag_:\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~bash\n mkdir -p ./data_bags/elasticsearch\n echo '{\n \"id\" : \"aws\",\n \"_default\" : {\n \"discovery\" : { \"type\": \"ec2\" },\n\n \"cloud\" : {\n \"aws\" : { \"access_key\": \"YOUR ACCESS KEY\", \"secret_key\": \"YOUR SECRET ACCESS KEY\" },\n \"ec2\" : { \"groups\": \"elasticsearch\" }\n }\n }\n }' > ./data_bags/elasticsearch/aws.json\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nDo not forget to upload the data bag to the 
_Chef_ server:\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~bash\n knife data bag from file elasticsearch aws.json\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nTo use the EBS related features, use your preferred method of configuring node attributes,\nor store the configuration in a data bag called `elasticsearch/data`:\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~json\n {\n \"elasticsearch\": {\n // ...\n \"data\" : {\n \"devices\" : {\n \"/dev/sda2\" : {\n \"file_system\" : \"ext3\",\n \"mount_options\" : \"rw,user\",\n \"mount_path\" : \"/usr/local/var/data/elasticsearch/disk1\",\n \"format_command\" : \"mkfs.ext3\",\n \"fs_check_command\" : \"dumpe2fs\",\n \"ebs\" : {\n \"size\" : 250, // In GB\n \"delete_on_termination\" : true,\n \"type\" : \"io1\",\n \"iops\" : 2000\n }\n }\n }\n }\n }\n }\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n\nCustomizing the cookbook\n------------------------\n\nWhen you want to significantly customize the cookbook - changing the templates, adding a specific logic -,\nthe best way is to use the \"wrapper cookbook\" pattern: creating a lightweight cookbook which will\ncustomize this one. Let's see how to change the template for the `logging.yml` file in this way.\n\nFirst, we need to create our \"wrapper\" cookbook:\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~bash\nknife cookbook create my-elasticsearch --cookbook-path=. 
--verbose --yes\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nNext, we'll include the main cookbook in our _default_ recipe:\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~bash\ncat <<-CONFIG >> ./cookbooks/my-elasticsearch/recipes/default.rb\n\ninclude_recipe 'java'\ninclude_recipe 'elasticsearch::default'\nCONFIG\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThen, we'll change the `cookbook` for the appropriate template resource:\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~bash\ncat <<-CONFIG >> ./cookbooks/my-elasticsearch/recipes/default.rb\n\nlogging_template = resources(:template => \"logging.yml\")\nlogging_template.cookbook \"my-elasticsearch\"\nCONFIG\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nOf course, we may redefine the whole `logging.yml` template definition, or other parts of the cookbook.\n\nDon't forget to put your custom template into the appropriate path:\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~bash\ncat <<-CONFIG >> ./cookbooks/my-elasticsearch/templates/default/logging.yml.erb\n# My custom logging template...\nCONFIG\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nNow we can configure a node with our custom cookbook:\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~bash\necho '{\n \"name\": \"elasticsearch-wrapper-cookbook-test\",\n \"run_list\": [\n \"recipe[my-elasticsearch]\"\n ]\n}' > node.json\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nUpload your \"wrapper\" cookbook to the server, and run Chef on the node,\neg. following the instructions for _Chef Solo_ above:\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~bash\nscp -r ... cookbooks/my-elasticsearch ...\nssh ...
\"sudo mv --force --verbose /tmp/my-elasticsearch /var/chef/cookbooks/my-elasticsearch\"\nssh ... <<END\n....\nEND\nssh ... \"sudo chef-solo -N elasticsearch-wrapper-cookbook-test -j node.json\"\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n\nNginx Proxy\n-----------\n\nUsually, you will restrict the access to _Elasticsearch_ with firewall rules. However, it's convenient\nto be able to connect to the _Elasticsearch_ cluster from `curl` or an HTTP client, or to use a\nmanagement tool such as [_BigDesk_](http://github.com/lukas-vlcek/bigdesk) or\n[_Paramedic_](http://github.com/karmi/elasticsearch-paramedic).\n(Don't forget to set the `node.elasticsearch[:nginx][:allow_cluster_api]` attribute to _true_\nif you want to access these tools via the proxy.)\n\nTo enable authorized access to _elasticsearch_, you need to include the `elasticsearch::proxy` recipe,\nwhich will install, configure and run [_Nginx_](http://nginx.org) as a reverse proxy, allowing users with proper\ncredentials to connect.\n\nUsernames and passwords may be stored in a data bag `elasticsearch/users`:\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~bash\n mkdir -p ./data_bags/elasticsearch\n echo '{\n \"id\" : \"users\",\n \"_default\" : {\n \"users\" : [\n {\"username\" : \"USERNAME\", \"password\" : \"PASSWORD\"},\n {\"username\" : \"USERNAME\", \"password\" : \"PASSWORD\"}\n ]\n }\n }\n ' > ./data_bags/elasticsearch/users.json\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nAgain, do not forget to upload the data bag to the _Chef_ server:\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~bash\n knife data bag from file elasticsearch users.json\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nAfter you have configured the node and uploaded all the information to the _Chef_ server,\nrun `chef-client` on the
node(s):\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~bash\n knife ssh name:elasticsearch* 'sudo chef-client'\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nPlease note that all data bags _must_ have attributes enclosed in an environment\n(use the `_default` environment), as suggested by the Chef\n[documentation](http://docs.opscode.com/chef/essentials_data_bags.html#use-data-bags-with-environments).\n\n\nTesting with Vagrant\n--------------------\n\nThe cookbook comes with a [`Vagrantfile`](https://github.com/elasticsearch/cookbook-elasticsearch/blob/master/Vagrantfile), which allows you to test-drive the installation and configuration with\n[_Vagrant_](http://vagrantup.com/), a tool for building virtualized infrastructures.\n\nFirst, make sure you have both _VirtualBox_ and _Vagrant_\n[installed](http://docs.vagrantup.com/v1/docs/getting-started/index.html).\n\nThen, clone this repository into an `elasticsearch` directory on your development machine:\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~bash\n git clone git://github.com/elasticsearch/cookbook-elasticsearch.git elasticsearch\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nSwitch to the cloned repository:\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~bash\n cd elasticsearch\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nInstall the necessary gems with [Bundler](http://gembundler.com):\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~bash\n gem install bundler\n bundle install\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nAll the required third-party cookbooks will be automatically installed via the\n[_Berkshelf_](http://berkshelf.com) integration. If you want to install them\nlocally (eg.
to inspect them), use the `berks` command:\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~bash\n berks install --path ./tmp/cookbooks\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe `Vagrantfile` supports four Linux distributions:\n\n* Ubuntu Precise 64 bit\n* Ubuntu Lucid 32 bit\n* Ubuntu Lucid 64 bit\n* CentOS 6 32 bit\n\nUse the `vagrant status` command for more information.\n\nWe will use the [_Ubuntu Precise 64_](http://vagrantup.com/v1/docs/boxes.html) box for the purpose of this demo.\nYou may want to test-drive this cookbook on a different distribution; check out the available boxes at <http://vagrantbox.es> or build a custom one with [_veewee_](https://github.com/jedi4ever/veewee/tree/master/templates).\n\nLaunch the virtual machine (it will download the box unless you already have it):\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~bash\n time CHEF=latest bundle exec vagrant up precise64\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe machine will be started and automatically provisioned with\n[_chef-solo_](http://vagrantup.com/v1/docs/provisioners/chef_solo.html).\n(Note: You may substitute _latest_ with a specific Chef version.\nSet the `UPDATE` environment variable to update packages on the machine as well.)\n\nYou'll see _Chef_ debug messages flying by in your terminal, downloading, installing and configuring _Java_,\n_Nginx_, _Elasticsearch_, and all the other components.\nThe process should take less than 10 minutes on a reasonable machine and internet connection.\n\nAfter the process is done, you may connect to _elasticsearch_ via the _Nginx_ proxy from the outside:\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~bash\n curl
'http://USERNAME:PASSWORD@33.33.33.10:8080/test_chef_cookbook/_search?pretty&q=*'\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nOf course, you should connect to the box with SSH and check things out:\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~bash\n bundle exec vagrant ssh precise64\n\n ps aux | grep elasticsearch\n service elasticsearch status --verbose\n curl http://localhost:9200/_cluster/health?pretty\n sudo monit status elasticsearch\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n\nTests\n-----\n\nThe cookbook provides test cases in the `files/default/tests/minitest/` directory,\nwhich are executed as a part of the _Chef_ run in _Vagrant_\n(via the [Minitest Chef Handler](https://github.com/calavera/minitest-chef-handler) support).\nThey check the basic installation mechanics, populate the `test_chef_cookbook` index\nwith some sample data, perform a simple search, etc.\n\n\nRepository\n----------\n\nhttp://github.com/elasticsearch/cookbook-elasticsearch\n\nLicense\n-------\n\nAuthor: Karel Minarik (<karmi@elasticsearch.com>) and [contributors](http://github.com/elasticsearch/cookbook-elasticsearch/graphs/contributors)\n\nLicense: Apache\n",
"maintainer": "karmi",
"maintainer_email": "karmi@karmi.cz",
"license": "Apache",
"platforms": {
},
"dependencies": {
"ark": ">= 0.0.0"
},
"recommendations": {
"build-essential": ">= 0.0.0",
"xml": ">= 0.0.0",
"java": ">= 0.0.0",
"monit": ">= 0.0.0"
},
"suggestions": {
},
"conflicting": {
},
"providing": {
"elasticsearch": ">= 0.0.0",
"elasticsearch::data": ">= 0.0.0",
"elasticsearch::ebs": ">= 0.0.0",
"elasticsearch::aws": ">= 0.0.0",
"elasticsearch::nginx": ">= 0.0.0",
"elasticsearch::proxy": ">= 0.0.0",
"elasticsearch::plugins": ">= 0.0.0",
"elasticsearch::monit": ">= 0.0.0",
"elasticsearch::search_discovery": ">= 0.0.0"
},
"replacing": {
},
"attributes": {
},
"groupings": {
},
"recipes": {
},
"version": "0.3.7"
}
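With the computed `filename` and `download_url` attributes removed, pinning a different Elasticsearch release should come down to setting the version attribute in node configuration. A hypothetical `node.json` fragment (recipe and attribute names as used elsewhere in this cookbook's README):

```json
{
  "run_list": ["recipe[java]", "recipe[elasticsearch]"],
  "elasticsearch": { "version": "0.90.7" }
}
```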