mobz / elasticsearch-head
A web front end for an Elasticsearch cluster
Home Page: http://mobz.github.io/elasticsearch-head/
License: Other
Displaying the index status (info -> index status) does not do anything.
Index metadata works fine. The console shows:
Uncaught JsonPretty error: Cannot read property 'constructor' of null widgets.js:1239
es.JsonPretty.acx.ui.Widget.extend._main_template widgets.js:1239
es.JsonPretty.acx.ui.Widget.extend.init widgets.js:1223
prototype.(anonymous function) jsacx.js:538
Class jsacx.js:522
es.ui.JsonPanel.acx.ui.InfoPanel.extend._body_template widgets.js:32
prototype.(anonymous function) jsacx.js:538
acx.ui.DraggablePanel.acx.ui.AbstractPanel.extend.init jsacx-widgets.js:389
prototype.(anonymous function) jsacx.js:538
Class jsacx.js:522
index.name.children.children.acx.ui.MenuButton.menu.acx.ui.MenuPanel.items.onclick widgets.js:1001
jQuery.event.handle jquery.js:2926
elemData.handle.eventHandle
No request is made to ES. Tested on Chrome and Firefox - fails on both. ES 0.90.1.
I have an nginx proxy using the config at https://gist.github.com/khushil/7098336, and I can see the 'Elasticsearch Head' browser window title, but then just a blank screen.
I'm thinking it's something to do with proxy_pass, but I was wondering whether anyone else has had any experience of this?
I created a mapping with a property (person) that is an object:
curl -XPUT http://localhost/twitter/tweet/_mapping -d '{
  "tweet" : {
    "properties" : {
      "person" : {
        "type" : "object",
        "properties" : {
          "name" : {
            "properties" : {
              "first_name" : {"type" : "string"},
              "last_name" : {"type" : "string"}
            }
          },
          "sid" : {"type" : "string", "index" : "not_analyzed"}
        }
      },
      "message" : {"type" : "string"}
    }
  }
}'
The problem is that the field person does not show up in the Browser tab. The same issue occurs when person is mapped as an array.
It would be good to have a function to import/export settings, in particular the custom queries and transformation functions from the "Any Request" panel.
Even better would be to store the settings, queries, and transformation functions in the cluster itself. If someone has doubts about the 'security' of the cluster, the feature could be activated manually, e.g. by a 'store settings to cluster' button; anyone who doesn't want it simply doesn't use it, and then there is no 'risk'. On connecting to a cluster, es-head would check whether the settings table exists and, if so, read it out automatically.
If the field type is multi_field, it is not shown in the result table, and the other columns are not shown at all either; in the Browser tab no data is shown at all.
Example field config:
"bodytext": {
  "type": "multi_field",
  "fields": {
    "bodytext": {
      "type": "string",
      "store": "yes",
      "term_vector": "with_positions_offsets",
      "null_value": ""
    },
    "exact": {
      "type": "string",
      "analyzer": "text_exact",
      "store": "yes",
      "term_vector": "with_positions_offsets",
      "include_in_all": false
    }
  }
},
Structured queries are not possible on multi_field type fields at all.
We use elasticsearch-head on an elasticsearch cluster that is indexing live system log traffic. When searching and viewing the log data, it would be very useful to have something that automatically updates (kinda like tail -f I guess) as new events arrive.
Thanks, love your work, "eh" rocks!
cheers.
I am using ES for storing Logstash-created indexes, which are named using the "logstash-YYYY-MM-DD" format.
With the current sorting (by index name, ascending) in Head, the most recent logstash index is shown at the far right.
Having lots of indexes, and almost always being interested in today's index, requires a lot of scrolling to the right.
Maybe add a sort asc/desc option?
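A sort toggle would be cheap to implement. As a sketch (the helper name is made up; it relies on lexicographic order of logstash-YYYY-MM-DD names matching chronological order, which it does):

```javascript
// Sketch: order logstash-style index names newest-first. Names like
// "logstash-YYYY-MM-DD" sort chronologically when sorted as strings,
// so descending order is just an ascending sort reversed.
function sortIndicesNewestFirst(names) {
  return names.slice().sort().reverse(); // copy first to avoid mutating input
}
```

With such a helper, an asc/desc toggle in the UI only needs to decide whether to call reverse().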
It would be nice to have an "export to CSV" option for all kinds of searches;
sometimes it is important to take the data and do some Excel magic with it.
P.S. thank you for this already great plugin! :)
if my document structure is:
{ a : { b : { c : "c-val" } } }
and I search for a.b.c = "c-val"
the JSON search string displayed on selecting "Show query source" is:
query: {
  bool: {
    must: [
      {
        query_string: {
          default_field: c,
          query: c-val...
instead of the correctly qualified query (hence the previous query fails):
query: {
  bool: {
    must: [
      {
        query_string: {
          default_field: a.b.c,
          query: c-val...
As the subject says, I am unable to connect to elasticsearch-0.90.1 from elasticsearch-head. Is this a known issue? Or am I doing something wrong?
(From the stack trace below, it looks like a version mismatch in the protocol used in the Elasticsearch HTTP API.)
What I did:
Here is the cmd.exe log message with a stack trace:
[2013-06-18 12:48:31,563][WARN ][transport.netty ] [Hercules] exception caught on transport layer [[id: 0x362589a8, /127.0.0.1:63472 :> /127.0.0.1:9300]], closing connection
java.io.StreamCorruptedException: invalid internal transport message format
at org.elasticsearch.transport.netty.SizeHeaderFrameDecoder.decode(SizeHeaderFrameDecoder.java:27)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:425)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.cleanup(FrameDecoder.java:482)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.channelDisconnected(FrameDecoder.java:365)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:102)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.elasticsearch.common.netty.channel.Channels.fireChannelDisconnected(Channels.java:396)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.close(AbstractNioWorker.java:336)
at org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink.handleAcceptedSocket(NioServerSocketPipelineSink.java:81)
at org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:36)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:574)
at org.elasticsearch.common.netty.channel.Channels.close(Channels.java:812)
at org.elasticsearch.common.netty.channel.AbstractChannel.close(AbstractChannel.java:197)
at org.elasticsearch.transport.netty.NettyTransport.exceptionCaught(NettyTransport.java:505)
at org.elasticsearch.transport.netty.MessageChannelHandler.exceptionCaught(MessageChannelHandler.java:224)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:112)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.exceptionCaught(FrameDecoder.java:377)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:112)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.elasticsearch.common.netty.channel.Channels.fireExceptionCaught(Channels.java:525)
at org.elasticsearch.common.netty.channel.AbstractChannelSink.exceptionCaught(AbstractChannelSink.java:48)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.notifyHandlerException(DefaultChannelPipeline.java:658)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:566)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Strange thing: when calling http://my_ip:9200/_plugin/head/ nothing is displayed, just an empty screen. I use Firefox 20.0 on Linux.
Have tried Chromium 27.x which works.
With Firefox I can display the source code of the page and see the content of index.html.
The error console says:
Error: TypeError: localStorage is null
Source file: http://my_ip:9200/_plugin/head/lib/es/widgets.js
Line: 1386
Error: TypeError: es.ElasticSearchHead is not a constructor
Source file: http://my_ip:9200/_plugin/head/
Line: 45
Is this a JavaScript or a Firefox problem?
Provide a form to retrieve a document by its id. This could be added to maybe the Browser tab or the Structured Query tab.
Advanced usage: Allow searching on the _uid field in the Structured Query tab (of course, the user would have to know how the internal _uid field is constructed).
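For the advanced case, the internal _uid is, in this era of Elasticsearch, the type and id joined by a '#'. A minimal sketch of building such a query, with a made-up helper name:

```javascript
// Sketch: build a term query on the internal _uid field, which
// Elasticsearch composes as "<type>#<id>".
function uidTermQuery(type, id) {
  return { query: { term: { _uid: type + "#" + id } } };
}
```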
Right now you just have to click the little 'x' button next to the alias name. Could it be done with a confirmation prompt, just like deleting an entire index?
If I create a query that's a combination of a single 'must' and any number of 'should' clauses, the 'should' clauses don't get sent. I think the fix will be to remove the lines 1268-1270 of widgets.js - you're removing a default clause that has in fact already been replaced by the 'should' clause.
For those of us with LOTS of indexes, it'd be nice to either select a specific one via dropdown select, or have a search which would filter out non-matching indexes so we don't have to scroll sideways as far.
Hello. I think the Browser's column names should be the field's path, not the field's name.
I have many documents similar to:
{
field1: {
name: "name1",
value: "value2"
},
field2:{
name: "name2",
value: "value2"
},
etc.
And the Browser is unusable because only the last field is shown.
Column names should be field1.name, field1.value, field2.name, field2.value instead of name and value.
I hacked it in following way:
in core.js I changed line 342 from
var field_name = metadata.paths[dpath].field_name;
to
var a = path.concat(prop);    // full path to the property
a.shift();                    // drop the first two leading segments
a.shift();                    // so only the field path remains
var field_name = a.join("."); // e.g. "field1.name"
I know that's not a pretty solution, but it works for me.
Maybe it'd be a good addition to elasticsearch-head?
Also, I'd like my array-type fields to be shown fully, not only the last element. Is that possible?
It is better to see something representing the array than nothing at all.
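The dotted-name idea generalizes to a small recursive flatten. A minimal sketch, independent of head's internal metadata.paths structure (the function name is made up):

```javascript
// Sketch: flatten a nested document into dotted column names, e.g.
// { field1: { name: "name1" } } becomes { "field1.name": "name1" }.
function flattenDoc(obj, prefix, out) {
  prefix = prefix || "";
  out = out || {};
  for (var key in obj) {
    var val = obj[key];
    var path = prefix ? prefix + "." + key : key;
    if (val !== null && typeof val === "object" && !Array.isArray(val)) {
      flattenDoc(val, path, out); // recurse into sub-objects
    } else {
      out[path] = val; // leaf value: record under its full dotted path
    }
  }
  return out;
}
```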
In the Browser or Structured Query tabs, complex types from the JSON, such as arrays, are not rendered at all in the result table.
When the code detects an array, the easiest fix is to concatenate all the values into a comma-separated list and render that as a string.
More elegant would be to create a cell per element and span the element cells over a bigger cell representing the whole array.
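The comma-separated variant is only a few lines. A sketch of the cell-value step (a hypothetical helper, not head's actual renderer):

```javascript
// Sketch: render a table cell value, joining arrays as a
// comma-separated string instead of dropping them entirely.
function renderCellValue(value) {
  if (Array.isArray(value)) {
    return value.join(", "); // e.g. ["a", "b"] -> "a, b"
  }
  return value === null || value === undefined ? "" : String(value);
}
```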
Hello,
New to the Elasticsearch world, I've discovered this plugin, head, which eases the ES experience a lot.
However, after leaving the head plugin open for several hours, my browser came to use a lot of memory, several GB.
This behaviour can easily be reproduced (Firefox and Chromium) by heavy use of the quick-refresh functionality, which makes the memory footprint grow a lot.
Is it possible to limit the global memory footprint?
A new feature for parent/child support would be good. Here is a description of the feature:
http://www.elasticsearch.org/guide/reference/mapping/parent-field.html
If a "_parent" field exists on a type, it would be good to have it as a column in the table view, and
if you click on _parent you get the parent JSON (a GET request to the parent type with the value of '_parent') in a popover.
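The popover would boil down to a single GET against the parent document. A sketch of building that request URL (the helper name and the 0.x-style /index/type/id layout are assumptions):

```javascript
// Sketch: build the URL for fetching a parent document from the base
// URI, the index, the parent type, and the child's _parent value.
function parentDocUrl(baseUri, index, parentType, parentId) {
  return baseUri.replace(/\/$/, "") + "/" +  // tolerate a trailing slash
    encodeURIComponent(index) + "/" +
    encodeURIComponent(parentType) + "/" +
    encodeURIComponent(parentId);
}
```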
I created a multi_field index but am not able to search it using the head web browser.
Example below:
indexes :NAME,
  :type => 'multi_field',
  :fields => {
    :NAME  => { :type => 'string', :analyzer => 'snowball', :boost => 10.0 },
    :exact => { :type => 'string', :index => :not_analyzed }
  }
Many Thanks
Hey there,
The ability to set an autorefresh delay via the URL would be very handy, in the same way that the base_uri can be set.
Is this something you would consider please?
Thanks!
Ryan
Big number values are not handled correctly.
If the field type is long and that field's value is 476909204310851599, my search query is:
{
  "size": 100,
  "query": {
    "term": {
      "LOG_NO": 476909204310851599
    }
  }
}
If I push the Request button in the Any Request tab, the search query is changed to this:
{
  "size": 100,
  "query": {
    "term": {
      "LOG_NO": 476909204310851600
    }
  }
}
The same situation also appears in the result table view.
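This looks like JavaScript number precision rather than anything head-specific: JSON is parsed into IEEE 754 doubles, which represent integers exactly only up to 2^53. A short demonstration, plus the usual workaround of keeping the id as a string (term queries on numeric fields accept string values, as far as I know):

```javascript
// JavaScript stores every number as an IEEE 754 double, so integers
// above 2^53 (9007199254740992) silently lose precision when parsed.
var parsed = Number("476909204310851599");
console.log(parsed); // 476909204310851600 - not the value that was typed

// Sending the id as a string keeps the exact digits in the request body.
var body = JSON.stringify({ query: { term: { LOG_NO: "476909204310851599" } } });
console.log(body.indexOf("476909204310851599") !== -1); // true - digits preserved
```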
I have created some filtered aliases over an index. All the aliases are listed on the Browser page, but when I select any specific alias the number of hits stays the same, which should not be possible. Every alias returns the same data as the index it was created on.
For every index, instead of showing:
nameofindex
size: 120GB
docs: 112.123
it's showing
nameofindex
size: undefined
docs: 112.123
If you use ElasticSearch in production, you will need to protect access to the ElasticSearch API if it contains non public data. I guess this will usually be done with a firewall, but there are cases where simple HTTP authentication makes sense - for instance if you want to give developers / testers direct access by routing the API through an Apache and protecting the routed location. It would be nice if this was supported by elasticsearch-head.
Hi,
I can see facets code in the JS but it's not visible in the front end.
May I know how to use it?
Thanks,
Shoeb
I have two indexes, A and B, both with mapping type Mapping 1.
With just index A, the Browser tab returns the data table and filters on the left. After adding index B, with the same mapping as index A, the Browser tab shows no data.
Tested On:
Chrome 27.0.1453.116 m
Firefox 21.0
I will run a few more tests to see if I can narrow the problem down.
Hi!
With this commit you can build a package for elasticsearch-head on all Redhat Systems or any other RPM based system.
Have fun!
As soon as I try to open a document containing null from the Browser tab, the JavaScript console logs: Uncaught JsonPretty error: Cannot read property 'constructor' of null.
It's pretty easy to reproduce with:
curl -XPOST localhost:9200/test/head/a -d '{"foo":null}'
curl -XPOST localhost:9200/test/head/b -d '{"foo":0}'
Opening a fails; b opens just fine.
Reproduced with:
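For what it's worth, the crash comes from dereferencing value.constructor on a null JSON value (typeof null is "object", and null has no constructor). A sketch of a null-safe type test, not head's actual code:

```javascript
// Sketch: classify a parsed JSON value without touching .constructor,
// which throws a TypeError when the value is null.
function jsonType(value) {
  if (value === null) return "null";        // must come before the object check
  if (Array.isArray(value)) return "array";
  return typeof value; // "object", "string", "number", "boolean"
}
```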
Is it possible to connect to localhost on the machine that head is running on (e.g. http://127.0.0.1:9200/ )? Currently this gives an error, but would be a nice feature for machines that are heavily firewalled.
I have some fields where text is stored as HTML. It would be great to render those values in the detail box/table as HTML.
Maybe some automatic detection of whether the text is HTML, e.g. check whether the text contains tags such as <html>, <p>, <div>, ...
If ES is running behind an HTTP proxy which requires a username/password for authentication, head doesn't ask for the credentials and just hangs.
Could we have an option to provide a username/password before the ES calls?
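For plain HTTP Basic auth, the credentials just need to be attached as an Authorization header on each request. A sketch of building the header value (Node's Buffer for base64 here; a browser would use btoa instead):

```javascript
// Sketch: build an HTTP Basic Authorization header value, i.e. the
// string "user:pass" base64-encoded and prefixed with "Basic ".
function basicAuthHeader(user, pass) {
  return "Basic " + Buffer.from(user + ":" + pass).toString("base64");
}
```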
Show size and docs in the node info, and show total size and total docs in the top-left blank area.
I'd love to see some commas :-) I have millions of docs in my collections, so it's hard to read the number of docs shown on the home page. E.g. 11,050,018 would be much easier to read than 11050018.
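Thousands separators can be added with one helper. A sketch using a regex (toLocaleString would be the locale-aware alternative; the function name is made up):

```javascript
// Sketch: insert commas every three digits, e.g. 11050018 -> "11,050,018".
// The lookahead matches each position followed by a multiple of three digits.
function withCommas(n) {
  return String(n).replace(/\B(?=(\d{3})+(?!\d))/g, ",");
}
```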
To recreate:
bin/plugin -install mobz/elasticsearch-head
http://localhost:9200/_plugin/head/
Work-around:
On the Browser tab, there appears to be a 50 item limit. It's not listed on the page anywhere that I can see. In fact, when I browse an index that has 52 items in it, I get a message that there are 52 hits... but not that it's only showing 50 of the results.
This is a major usability problem; I just spent a half hour debugging my code to see why my document wasn't saving when in fact it was, but happened to be record 51. :-(
At bare minimum, this page needs to clearly indicate that only the first 50 records (by some definition of first) will be shown. Ideally, it would be paged so I can browse through the results, preferably with a user-selectable page size.
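Paging in Elasticsearch is just the from/size pair on the search body, so a pager mostly needs to compute offsets. A minimal sketch (zero-based page numbers and the helper name are my choices):

```javascript
// Sketch: compute the from/size portion of a search body for a
// given zero-based page number and page size.
function pageParams(page, pageSize) {
  return { from: page * pageSize, size: pageSize };
}
```

With a page size of 50, page 1 would fetch records 51-100, which would have surfaced the invisible record 51.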
Not sure if it's just my elasticsearch-head or all of them, but since the default option for the Any Request tab is blank with the DELETE method, I have deleted my entire database. Luckily it was test data that I can push from a script.
A new API, '_stats', was added in master (0.18). It would be very nice to display it both as raw JSON, as per the norm, and perhaps also as a nice tabular overview?
The query body is tacked onto the URL in a surprising way, yielding a malformed query, and as it happens, all docs in the index are always returned. Described in perfect detail here:
http://stackoverflow.com/questions/12195017/different-result-when-using-get-post-in-elastic-search
I found this very confusing as someone first approaching ES and the tool.
It's probably something stupid I'm missing, but is there a way to see more than the first 50 items when browsing, or to page through the results?
A pedantic note, but the appendix of LICENSE still reads:
Copyright [yyyy] [name of copyright owner]
which should be replaced with your copyright notice.
There is already support for the http and https protocols when rendering href links, so it would be good to support the file:/// protocol as well; e.g. in the case of a desktop search application based on Elasticsearch, the user could click such a file:/// link and open the file.
The change would be at line 1120 in widgets.js:
"value": function (type, value) {
    if (/^(http|https):\/\/[^\s]+$/.test(value)) { // here, also check for the file:/// pattern
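A sketch of extending that test to accept file URLs (the exact pattern is an assumption; note that file URLs have an empty host, hence the third slash in file:///):

```javascript
// Sketch: accept file:// URLs alongside http/https when deciding
// whether to render a cell value as a clickable link.
var LINK_RE = /^(https?|file):\/\/[^\s]+$/;

console.log(LINK_RE.test("http://example.com/doc"));      // true
console.log(LINK_RE.test("file:///home/user/notes.txt")); // true
console.log(LINK_RE.test("ftp://example.com/doc"));       // false
```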
Hello -
When I disable _source with enabled:false and store:yes, the search does not show the store:yes fields. I am able to see the fields in the structured query only if _source is enabled.
Is this a known issue?
Please see the email thread below for more details:
---------- Forwarded message ----------
Date: Fri, Nov 15, 2013 at 2:31 AM
Subject: Re: Index size not getting displayed in head after upgrade to 1.0.0.beta1
To: [email protected]
Hey,
IIRC elasticsearch stopped returning the human-readable size (like 5.6GB) by default; it needs to be requested with the human parameter on the HTTP request. This is enabled by default in 0.90, but disabled in 1.0.
Hopefully the head maintainer is reading this, otherwise can you file a bug report for the head plugin maybe? Thanks!
On Fri, Nov 15, 2013 at 3:45 AM, wrote:
hi team - I started noticing something else as well when I upgraded to 1.0.0.beta1. We use 'head' plugin and it shows the size of each index.
however with ES beta version it shows size: undefined (undefined). anyone else seeing something like this?
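If head's request layer appended the human parameter itself, the fix could stay in one place. A sketch (the helper name is made up, not head's actual code):

```javascript
// Sketch: append human=true to a request URL so 1.0-era Elasticsearch
// returns human-readable sizes again, preserving any existing query string.
function withHumanParam(url) {
  return url + (url.indexOf("?") === -1 ? "?" : "&") + "human=true";
}
```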
Hi,
I can't use the Browser tab to filter some of my Elasticsearch entries. When I enter text into one of the fields that the Browser automatically discovered from my entries (the column on the left), Elasticsearch does not filter the entries on the right-hand side correctly (it screens everything out).
I can't find any doc explaining how to use these "field" text boxes. Do we just enter the text we want the query to filter on?
Thanks for your support
Regards
When a geo_point type'd field is selected, the drop-down to the right (that normally shows term/prefix/fuzzy etc) is empty and there's no text input field beside it.
When ElasticSearch is running with one or more unassigned replicas, the Cluster Overview page breaks with a JavaScript error:
Uncaught TypeError: Cannot read property 'name' of undefined
Can be reproduced using ElasticSearch 0.19.8 with the following index:
$ curl -XPUT 'http://localhost:9200/twitter/' -d '{
  "settings" : {
    "number_of_shards" : 3,
    "number_of_replicas" : 2
  }
}'
Hi,
I have installed ES-Head on the same server as the ES server. The issue I am having is that I want to access ES via the extremely useful ES-Head pages but it appears that I also have to have port 9200 open on the firewall.
Is there any way I can configure ES-head to read directly from the localhost instead of the browser sending requests to port 9200?
Many thanks,
Ian Lewis
Hi,
On my 0.90.6 installation with only 1 node and 1 index, I've got a blank home page. There is a JavaScript error in app.js at line 3038:
Uncaught TypeError: Cannot read property 'attributes' of undefined
Content of the "node" object:
node: Object
  cluster: undefined
  master_node: false
  name: "Unassigned"
  routings: Array[1]
  stats: undefined
I have a cluster with one data node and one client node. The plugin installs properly on the data node, but not the client node. As you can see below the install ostensibly succeeds; however, the _site directory is not created. Visiting http://localhost:9200/_plugin/head results in a 301 with curl and a 404 in the browser.
Is this a known issue with plugins on client nodes?
[xxx@yyy plugins]$ sudo /usr/share/elasticsearch/bin/plugin -v --install mobz/elasticsearch-head
-> Installing mobz/elasticsearch-head...
Trying https://github.com/mobz/elasticsearch-head/archive/master.zip...
Downloading ........................DONE
Installed mobz/elasticsearch-head into /usr/share/elasticsearch/plugins/head
Identified as a _site plugin, moving to _site structure ...
Installed mobz/elasticsearch-head into /usr/share/elasticsearch/plugins/head/_site
[xxx@yyy plugins]$ ls -al head/
dist/ index.html README.textile
elasticsearch-head.sublime-project .jshintrc src/
.gitignore LICENCE test/
Gruntfile.js package.json
[xxx@yyy plugins]$ curl -v http://localhost:9200/_plugin/head
* About to connect() to localhost port 9200 (#0)
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 9200 (#0)
> GET /_plugin/head HTTP/1.1
> User-Agent: curl/7.29.0
> Host: localhost:9200
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Content-Type: text/html
< Access-Control-Allow-Origin: *
< Content-Length: 72
< Server: Jetty(8.1.4.v20120524)
<
* Connection #0 to host localhost left intact
<head><meta http-equiv="refresh" content="0; URL=/_plugin/head/"></head>