Having tried to read the documentation for an older version than the one I'd downloaded, and furthermore documentation for *nix when I'm using Windows, I eventually restarted at the "Learn" pages on https://www.elastic.co/
There are a lot of links in there, and it's easy to get lost, but it is very well written.
This is my executive summary of what I think I did.
First, download the zips of elasticsearch and kibana.
From the bin directory for elasticsearch, run the elasticsearch.bat file, or run service install then service run. If you run the batch file it will spew logs to the console, as well as to a log file (in the logs folder). If you choose to run it as a service you can tail the file instead. Either works.
If you then open http://localhost:9200/ in a suitable browser you should see something like this:
{ "name" : "Barbarus", "cluster_name" : "elasticsearch", "cluster_uuid" : "bE-p5dLXQ_69o0FWQqsObw", "version" : { "number" : "2.4.1", "build_hash" : "c67dc32e24162035d18d6fe1e952c4cbcbe79d16", "build_timestamp" : "2016-09-27T18:57:55Z", "build_snapshot" : false, "lucene_version" : "5.5.2" }, "tagline" : "You Know, for Search" }
The name is a randomly assigned Marvel character. You can configure all of this, but you don't need to just to get something up and running to explore. kibana will expect elasticsearch to be on port 9200, but again that is configurable. I am getting ahead of myself though.
Second, unzip kibana, and run the batch file kibana.bat in the bin directory. This will witter to itself. It starts a webserver on port 5601 (again configurable, but this is the default): so open http://localhost:5601 in your browser.
kibana wants an "index" (a way to find data), so we need to get some data into elasticsearch: the first page will say "Configure an index pattern". This blog has a good walk-through of kibana (so do the official docs).
All of the official docs tell you to use curl to add (or CRUD) data in elasticsearch, for example
curl -XPUT 'localhost:9200/customer/external/1?pretty' -d ' { "name": "John Doe" }'
NEVER try that from a Windows prompt, even if you have curl installed. You need to escape the quotes, and even then I had trouble. You can put the data (the -d part) in a file instead and use @, but it's not worth it.
Python to the rescue. And Requests: HTTP for Humans to the rescue.
Now I can run the instructions in Python instead of shouting at a cmd prompt.
import requests
r = requests.get('http://localhost:9200/_cat/health?v')
r.text
Simple. The text shows me the response. There is a status code property too, and other goodies. See the manual. For this simple get command you could just point your browser at localhost:9200/_cat/health?v
Don't worry if the status is yellow - this just means you only have one node, so it can't replicate in case of disaster.
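If you'd rather check this from code than squint at the text output, the /_cluster/health endpoint returns the same information as JSON - a quick sketch (on a single node the status should come back as yellow):
>>> r = requests.get('http://localhost:9200/_cluster/health')
>>> r.status_code
>>> r.json()['status']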
Notice the transport, http://, at the start. If you forget this, you'll get an error like
>>> r = requests.put('localhost:9200/customer/external/1?pretty', json={"name": "John Doe"})
...
raise InvalidSchema("No connection adapters were found for '%s'" % url)
requests.exceptions.InvalidSchema: No connection adapters were found for 'localhost:9200/customer/external/1?pretty'
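One habit that avoids the mistake is keeping the scheme and host in a single variable - purely a convenience of mine, nothing requests requires:
>>> BASE = 'http://localhost:9200'
>>> r = requests.get(BASE + '/_cat/health?v')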
Now we can put in some data.
First make an index (elasticsearch will create one automatically if you put data under a non-existent index, but we'll be explicit). We will then be able to point kibana at that index - I mentioned earlier that kibana wants an index.
>>> r = requests.put('http://localhost:9200/customer?pretty')
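You can check the index now exists: elasticsearch answers a HEAD request on the index name, and _cat/indices lists them all. A quick sketch:
>>> requests.head('http://localhost:9200/customer').status_code  # 200 if the index exists, 404 if not
>>> print(requests.get('http://localhost:9200/_cat/indices?v').text)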
Right, now we want some data.
>>> payload = {'name': 'John Doe'}
>>> r = requests.post('http://localhost:9200/customer/external/1?pretty', json=payload)
If you point your browser at localhost:9200/customer/external/1?pretty you should then see the data you created. We gave it an id of 1; if we left that off, elasticsearch would assign a unique id automatically.
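To see that in action, POST to the type with no id on the end and pull the generated id out of the response (the name here is just made-up test data):
>>> r = requests.post('http://localhost:9200/customer/external?pretty',
...                   json={'name': 'Richard Roe'})
>>> r.json()['_id']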
We can use requests.delete to delete, and requests.post to update:
>>> r = requests.post('http://localhost:9200/customer/external/1/_update', \
json={ "doc" : {"name" : "Jane Doe"}})
Now, this small record set won't be much use to us. The docs have a link to some json data, and I downloaded some fictitious account data. Stack Overflow to the rescue for uploading the file:
>>> with open('accounts.json', 'rb') as payload:
... headers = {'content-type': 'application/x-www-form-urlencoded'}
... r = requests.post('http://localhost:9200/bank/account/_bulk?pretty', \
...     data=payload, verify=False, headers=headers)
...
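For reference, the _bulk body is just newline-delimited JSON - an action line, then the document, pair after pair, with a trailing newline that _bulk insists on. That is exactly the shape of accounts.json. Here is a hand-rolled two-document version against a toy index (the index, type and names are my own invention):
>>> import json
>>> lines = []
>>> for i, name in enumerate(['Alice', 'Bob'], start=1):
...     lines.append(json.dumps({'index': {'_id': str(i)}}))  # action line
...     lines.append(json.dumps({'name': name}))              # document line
...
>>> body = '\n'.join(lines) + '\n'  # the trailing newline matters
>>> r = requests.post('http://localhost:9200/toy/doc/_bulk?pretty', data=body)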
>>> r = requests.get('http://localhost:9200/bank/_search?q=*&pretty')
>>> r.json()
This is equivalent to using
>>> r = requests.post('http://localhost:9200/bank/_search?pretty', \
json={"query" : {"match_all": {}}})
i.e. instead of q=* in the URI we have put the query in the request body.
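The body form pays off as soon as you want something more specific than match_all - for instance a match query on one of the account fields (field names as per the sample accounts.json):
>>> r = requests.post('http://localhost:9200/bank/_search?pretty',
...                   json={'query': {'match': {'address': 'mill lane'}}})
>>> r.json()['hits']['total']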
Either way, you now have some data which you can point kibana at. In kibana, the discover tab allows you to view the data by clicking through fields. The visualise tab allows you to set up graphs. What wasn't immediately apparent was that once you have selected your buckets, fields and so forth, you need to press the green "play" button by the "options" to make it render your visualisation. And finally, I got a pie chart of the data. I now need to point it at some real data.