apache / lucene

Apache Lucene open-source search software

Home Page: https://lucene.apache.org/

License: Apache License 2.0

Emacs Lisp 0.01% Java 97.70% Python 0.78% Shell 0.05% Lex 0.30% HTML 1.05% Perl 0.06% ANTLR 0.01% Batchfile 0.01% Gnuplot 0.01% CSS 0.01% JavaScript 0.02% Groovy 0.01% Makefile 0.01% Jinja 0.01%
lucene search nosql java backend search-engine information-retrieval

lucene's Introduction

Apache Lucene


Apache Lucene is a high-performance, full-featured text search engine library written in Java.


Online Documentation

This README file only contains basic setup instructions. For more comprehensive documentation, visit the online documentation at https://lucene.apache.org/.

Building

Basic steps:

  1. Install OpenJDK 21.
  2. Clone Lucene's git repository (or download the source distribution).
  3. Run gradle launcher script (gradlew).

We'll assume that you know how to get and set up the JDK - if you don't, then we suggest starting at https://jdk.java.net/ and learning more about Java, before returning to this README.

See Contributing Guide for details.

Contributing

Bug fixes, improvements and new features are always welcome! Please review the Contributing to Lucene Guide for information on contributing.

Discussion and Support

lucene's People

Contributors

anshumg, caomanhdat, cpoerschke, ctargett, cutting, daddywri, dsmiley, dweiss, erikhatcher, gsingers, hossman, iverase, janhoy, joel-bernstein, jpountz, kojisekig, markrmiller, mikemccand, mkhludnev, noblepaul, rmuir, romseygeek, s1monw, sarowe, shalinmangar, sigram, tflobbe, tteofili, uschindler, yonik


lucene's Issues

[PATCH] AND match fails if any Term is filtered out by an analyser. [LUCENE-35]

If I do an AND search with a StandardAnalyzer, the word 'it' (or rather 'IT',
for information technology) is cut out by the analyzer. That is OK: I get no
result. But when I search for 'it' AND 'plus' ('plus' is not cut out by the
analyzer), the result is empty too. That is not fine, because if I search
only for 'plus' I do get a result.
So I think that if a word is thrown away by the analyzer, this part of the AND
query should have no effect on the rest of the search. It should be left out of
the BooleanQuery.
I hope it is easy to fix, because it strongly affects search results.
(I tried leaving out the analyzer entirely, but that wasn't suitable.)
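
The behaviour the reporter asks for can be sketched outside Lucene: before adding a required clause to a boolean query, check whether the analyzer produced any tokens for the term at all, and skip the clause if not. A minimal standalone sketch, assuming a toy stop-word set and an `analyze` helper as stand-ins for a real Analyzer:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class RequiredClauses {
    // Hypothetical stand-in for StandardAnalyzer's stop-word filtering.
    static final Set<String> STOP_WORDS = Set.of("it", "a", "the");

    // Returns the analyzed tokens for a term: empty if it is a stop word.
    static List<String> analyze(String term) {
        String t = term.toLowerCase();
        return STOP_WORDS.contains(t) ? List.of() : List.of(t);
    }

    // Builds the list of required terms, silently dropping clauses whose
    // analysis produced no tokens - instead of letting them veto every match.
    static List<String> requiredTerms(String... terms) {
        List<String> required = new ArrayList<>();
        for (String term : terms) {
            List<String> tokens = analyze(term);
            if (!tokens.isEmpty()) {      // skip analyzer-eliminated clauses
                required.addAll(tokens);
            }
        }
        return required;
    }

    public static void main(String[] args) {
        // "it" is eliminated by the analyzer, so only "plus" remains required.
        System.out.println(requiredTerms("it", "plus")); // prints [plus]
    }
}
```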


Migrated from LUCENE-35 by Michael Wagner, resolved May 27 2006
Environment:

Operating System: other
Platform: Other

Attachments: ASF.LICENSE.NOT.GRANTED--PhraseTest.java, ASF.LICENSE.NOT.GRANTED--QueryParser.stopwords.patch

QueryParser does not recognize negative numbers... [LUCENE-4]

The TermQuery allows .setBoost to set a float multiplier. The boost is entered
via a '^'<NUMBER> format in the query string. However, while .setBoost will
take a negative number, the parser does not allow negative numbers due to the
limited definition of the <NUMBER> token (QueryParser.jj):

<NUMBER: (<_NUM_CHAR>)+ "." (<_NUM_CHAR>)+ >

The solution is to allow + or - as in:

<NUMBER: (["+","-"])? (<_NUM_CHAR>)+ "." (<_NUM_CHAR>)+ >

This works correctly, properly reading negative numbers.

I have done some simple tests, and negative boost seems to work as expected, by
moving the entry to the end of the list.
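
The grammar change can be mirrored with a plain regular expression to see which strings each token definition accepts (a standalone sketch; <_NUM_CHAR> is approximated here as [0-9]):

```java
public class NumberToken {
    // Original token: digits "." digits - no sign allowed.
    static final String ORIGINAL = "[0-9]+\\.[0-9]+";
    // Proposed token: optional leading + or -, as in the patch.
    static final String PATCHED  = "[+-]?[0-9]+\\.[0-9]+";

    public static void main(String[] args) {
        System.out.println("0.1429".matches(ORIGINAL)); // true
        System.out.println("-0.5".matches(ORIGINAL));   // false: sign rejected
        System.out.println("-0.5".matches(PATCHED));    // true with the patch
    }
}
```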


Migrated from LUCENE-4 by Alex Paransky, resolved May 27 2006
Environment:

Operating System: other
Platform: All

Attachments: ASF.LICENSE.NOT.GRANTED--QueryParser.jj-1.29.diff

The JavaDoc is mixed up! [LUCENE-31]

static boolean LINUX - True iff running on Windows.

Are you sure about this one??? :o)

And, yes, 'iff' should be spelled 'if' (check the other descriptions for the
same typo).

Maybe this set of booleans should instead be a set of integers, since they are
mutually exclusive; they could then be used in a 'switch' block too!

Regards.


Migrated from LUCENE-31 by Joan Roch, resolved May 27 2006
Environment:

Operating System: All
Platform: All

wildcard query lowercase [LUCENE-48]

We have a product which indexes some files. The indexer and the query parser use
the same analyzer. This analyzer applies the LowerCaseFilter to the terms. The
procedure works just fine for most of our queries, but there's a problem when a
more complex query is issued. I will describe the problem in the following examples:

Query: term1 +term2 term3
Result: Works

Query: term1 +term2* term3
Result: Works

Query: term1 +Term2* term3
Result: Doesn't work
It seems that terms containing wildcards are not processed by the analyzer.
As the index contains only lowercase words, there will never be hits for this
query.
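
A common workaround on the application side is to lower-case query terms yourself before handing them to the parser, since the analyzer will not touch wildcard terms. A minimal sketch (the query splitting here is naive whitespace tokenization, not Lucene's parser):

```java
import java.util.Arrays;
import java.util.stream.Collectors;

public class WildcardLowercase {
    // Lower-cases every term; terms with wildcards would otherwise bypass
    // the analyzer's LowerCaseFilter and never match a lowercase index.
    static String normalize(String query) {
        return Arrays.stream(query.split("\\s+"))
                     .map(String::toLowerCase)
                     .collect(Collectors.joining(" "));
    }

    public static void main(String[] args) {
        System.out.println(normalize("term1 +Term2* term3")); // term1 +term2* term3
    }
}
```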


Migrated from LUCENE-48 by leon, resolved May 27 2006
Environment:

Operating System: other
Platform: Other

Thesaurus capability [LUCENE-44]

I would like Lucene to be able to add terms to a search automatically through a
thesaurus lookup.


Migrated from LUCENE-44 by Andrig T. Miller, resolved Sep 03 2005
Environment:

Operating System: other
Platform: Other

QueryParser not recognizing asterisk with UTF-8 index [LUCENE-11]

Version: 1.2-RC3

I've created an index of UTF-8 encoded documents and made sure that all
queries are converted to UTF-8. When searching the index with a query containing
non-ASCII UTF-8 characters and an asterisk, no results are found even though
there are documents that contain the query word. Searching works when the query
contains no non-ASCII UTF-8 characters, and always works without an asterisk.
Test results with Swedish words:
"födde" - works OK, returns documents.
"född*" - doesn't return any results.
"född" - works OK, returns documents.
"kom*" - works OK, returns documents.


Migrated from LUCENE-11 by Tero Favorin, resolved May 27 2006
Environment:

Operating System: Linux
Platform: All

Attachments: ASF.LICENSE.NOT.GRANTED--patch8.txt

e-mail token in StandardTokenizer.jj does not match valid e-mail addresses [LUCENE-34]

E-mail token in StandardTokenizer.jj does not match many valid e-mail
addresses. See line 106:

<EMAIL: <ALPHANUM> "@" <ALPHANUM> ("." <ALPHANUM>)+ >

For example, neither [email protected] (because of the dash) nor
[email protected] (because of the first dot and the dash) match.
The following is slightly better, but still does not come close to meeting the
specification of RFC 822:

<EMAIL: <ALPHANUM> (("."|"-") <ALPHANUM>)+ "@" <ALPHANUM> (("."|"-") <ALPHANUM>)+ >

This is being reported against the May 11 nightly build (I compiled from
source using the supplied Ant build file on RedHat Linux 7.2, jikes, javacc
2.0, and Sun Linux JDK 1.4), however, I originally ran across this problem in
Lucene 1.2 rc4.
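
The two token definitions translate directly into Java regular expressions, which makes the failure easy to reproduce (a standalone sketch; <ALPHANUM> is approximated as [A-Za-z0-9]+, and the addresses are placeholders since the originals are redacted above):

```java
public class EmailToken {
    // Original EMAIL token: alphanumeric runs joined only by dots.
    static final String ORIGINAL = "[A-Za-z0-9]+@[A-Za-z0-9]+(\\.[A-Za-z0-9]+)+";
    // Variant allowing dashes as well as dots, as the report suggests.
    static final String DASHES   =
        "[A-Za-z0-9]+([.-][A-Za-z0-9]+)*@[A-Za-z0-9]+([.-][A-Za-z0-9]+)+";

    public static void main(String[] args) {
        System.out.println("j.doe@my-host.example".matches(ORIGINAL)); // false: dot and dash rejected
        System.out.println("j.doe@my-host.example".matches(DASHES));   // true
    }
}
```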


Migrated from LUCENE-34 by Dale Anson, resolved May 27 2006
Environment:

Operating System: Linux
Platform: PC

QueryParser blows up when fed malformed query [LUCENE-41]

QueryParser blows up when fed garbage query:

684259 [main] INFO com.lafferty.analyze.Querier - Querying lucene: 'sach^0.1429s^
0.0227 morton h. & co^0.0030. dba company^0.0149'
java.lang.NullPointerException
at org.apache.lucene.queryParser.QueryParser.Term(Unknown Source)
at org.apache.lucene.queryParser.QueryParser.Clause(Unknown Source)
at org.apache.lucene.queryParser.QueryParser.Query(Unknown Source)
at org.apache.lucene.queryParser.QueryParser.parse(Unknown Source)
at org.apache.lucene.queryParser.QueryParser.parse(Unknown Source)


Migrated from LUCENE-41 by ian pojman, resolved Sep 03 2005
Environment:

Operating System: other
Platform: Other

query parser does not close TokenStreams returned by Analyzer [LUCENE-28]

The TokenStream class has a close() method which must be called after the token
stream has been exhausted (i.e. next() returns null) to release underlying
resources held by the stream, such as a Reader. This method invocation is
essential in cases where TokenStreams are expensive to set up and the Analyzer
decides to cache them, because the call to close is how the token stream knows
to clean up.

The QueryParser violates this contract by creating and using TokenStreams but
not closing them. The getFieldQuery method is an example of this. I suggest
rewriting this portion:

while (true) {
  try {
    t = source.next();
  } catch (IOException e) {
    t = null;
  }
  if (t == null)
    break;
  v.addElement(t.termText());
}

as follows:

try {
  Token t = null;
  while ((t = source.next()) != null) {
    v.addElement(t.termText());
  }
} catch (IOException ioe) {
  // this should be raised, not swallowed as it currently is;
  // the TokenStream writer may have raised an IOException that needs
  // to be handled
} finally {
  source.close(); // must do this
}

There may be other instances of this problem which also need to have the call
to TokenStream.close() added.


Migrated from LUCENE-28 by Eric Friedman, resolved May 27 2006
Environment:

Operating System: Linux
Platform: Other

Attachments: ASF.LICENSE.NOT.GRANTED--lucene.patch

GermanStemmer crashes while indexing [LUCENE-5]

Version: lucene-1.2-rc1.jar

Indexing of "alpha-geek2" works.
Indexing of "alpha-geek" throws an exception. (Hope it's not my inability.)

demo code which shows exception: http://www.nalle.de/TestIndex.java
Output of code:

indexed 1
java.lang.StringIndexOutOfBoundsException: String index out of range: -1
at java.lang.StringBuffer.charAt(StringBuffer.java:283)
at org.apache.lucene.analysis.de.GermanStemmer.resubstitute(Unknown
Source)
at org.apache.lucene.analysis.de.GermanStemmer.stem(Unknown Source)
at org.apache.lucene.analysis.de.GermanStemFilter.next(Unknown Source)
at org.apache.lucene.analysis.LowerCaseFilter.next(Unknown Source)
at org.apache.lucene.index.DocumentWriter.invertDocument(Unknown Source)
at org.apache.lucene.index.DocumentWriter.addDocument(Unknown Source)
at org.apache.lucene.index.IndexWriter.addDocument(Unknown Source)
at TestIndex.<init>(TestIndex.java:25)
at TestIndex.main(TestIndex.java:36)


Migrated from LUCENE-5 by M. Reinsch, resolved May 27 2006
Environment:

Operating System: Solaris
Platform: Sun

Attachments: ASF.LICENSE.NOT.GRANTED--GermanStemmer_alpha-geek.diff

RangeQuery without lower term and inclusive=false skips blank fields [LUCENE-38]

This was reported by "James Ricci" <[email protected]> at:
http://nagoya.apache.org/eyebrowse/[email protected]&msgNo=1835

When you create a ranged query and omit the lower term, my expectation
would be that I would find everything less than the upper term. Now if I pass
false for the inclusive term, then I would expect that I would find all
terms less than the upper term excluding the upper term itself.

What is happening in the case of lower_term=null, upper_term=x,
inclusive=false is that empty strings are being excluded because
inclusive is set false, and the implementation of RangedQuery creates a default
lower term of Term(fieldName, ""). Since it's not inclusive, it excludes "".
This isn't what I intended, and I don't think it's what most people would
imagine RangedQuery would do in the case I've mentioned.

I equate lower=null, upper=x, inclusive=false to Field < x. lower=null,
upper=x, inclusive=true would be Field <= x. In both cases, the only
difference should be whether or not Field = x is true for the query.
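
The intended semantics can be written down as a plain predicate: a null lower bound should mean "unbounded below", not "exclusive of the empty string". A standalone sketch over strings (the method name is illustrative, not Lucene's API):

```java
public class RangeCheck {
    // True if value lies within (lower, upper) or [lower, upper], depending
    // on `inclusive`. A null bound means "unbounded" on that side, so a null
    // lower bound admits everything below upper - including "".
    static boolean inRange(String value, String lower, String upper, boolean inclusive) {
        if (lower != null) {
            int c = value.compareTo(lower);
            if (inclusive ? c < 0 : c <= 0) return false;
        }
        if (upper != null) {
            int c = value.compareTo(upper);
            if (inclusive ? c > 0 : c >= 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // With lower == null, the empty string must still be in range.
        System.out.println(inRange("", null, "x", false));  // true
        System.out.println(inRange("x", null, "x", false)); // false: exclusive upper
        System.out.println(inRange("x", null, "x", true));  // true: inclusive upper
    }
}
```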


Migrated from LUCENE-38 by Otis Gospodnetic, resolved Nov 13 2008
Environment:

Operating System: other
Platform: Other

Attachments: LUCENE-38.patch, TestRangeQuery.patch

Field.isIndexed() returns false for UnStored fields [LUCENE-30]

I run a loop on documents to discover what fields are searchable.
UnStored fields return false when I call isIndexed(). Converting these
fields to Text fields in the indexer corrects the problems.

Obviously, using Text fields is a workaround, but I think the behaviour is
incorrect.

I've reproduced the issue on MacOS X and Linux. I'm using rc4.


Migrated from LUCENE-30 by Eric Fixler, resolved May 27 2006
Environment:

Operating System: All
Platform: Other

Attachments: ASF.LICENSE.NOT.GRANTED--getIndexedFields.patch

Search lock file location should be configurable and favored over disableLuceneLocks property [LUCENE-47]

[Currently]
We're using Lucene as part of one of our components in a Unix environment and
components might run under different UIDs. In our specific case component A
creates the index folder and component B is performing the search in the same
folder. Folders are readable but for security reasons write protected except for
the indexing component. This is a problem for us since Lucene writes commit.lock
files during search. I recently noticed the following enhancement in CVS:

From Lucene CHANGES.txt:

1.3 DEV1

  1. Added the ability to disable lock creation by using disableLuceneLocks
     system property. This is useful for read-only media, such as CD-ROMs.
     (otis)

This wouldn't solve our problem because then indexing can corrupt a search
running at the same time.

[Desired proposal]
I would rather appreciate another property that lets me specify a separate,
writable folder for the lock file of each index. Read-only media such as
CD-ROMs might then even use a /tmp folder, for example. Currently the name is
commit.lock; it probably has to be slightly different to be able to associate
lock files with index folders. Therefore, I even suggest encouraging use of
these over disableLuceneLocks.

[Question]
We recently upgraded to 1.2, which lets me assume that 1.3 is far away ;( Is
that correct, and what will be the date?

Thanks for consideration,
Andreas


Migrated from LUCENE-47 by Andreas Guenther, resolved May 27 2006
Environment:

Operating System: All
Platform: All

Cannot search numeric values [LUCENE-27]

Lucene seems unable to search for numeric values, only alpha values. I
indexed a file named 777 and when I did a search on it, it was not found. I
also added a size value to the contents to be searched, and no match was
found. It seems you cannot search numeric values.


Migrated from LUCENE-27 by Pasha Arshadi, resolved May 27 2006
Environment:

Operating System: other
Platform: Other

Search hits ordering [LUCENE-36]

Version 1.2 RC5

Hi,

Is it possible to change the search hits ordering? I would like to order the
hits on a DateField and not on the hit score.

Thanks.


Migrated from LUCENE-36 by fabrice claes, resolved May 27 2006
Environment:

Operating System: All
Platform: PC

Colon character not searchable by QueryParser [LUCENE-10]

org.apache.lucene.queryParser.QueryParser does not allow the colon character to
be included in search text. When I don't filter colon characters from user
input in Eyebrowse's SearchList servlet, I get the following exception when
searching for the text "10:" (minus the quotes):

org.apache.lucene.queryParser.ParseException: Encountered "<EOF>" at line 1,
column 3.
Was expecting one of:
"(" ...
<QUOTED> ...
<NUMBER> ...
<TERM> ...
<WILDTERM> ...
<RANGEIN> ...
<RANGEEX> ...

    at

org.apache.lucene.queryParser.QueryParser.generateParseException(Unknown Source)
at org.apache.lucene.queryParser.QueryParser.jj_consume_token(Unknown
Source)
at org.apache.lucene.queryParser.QueryParser.Clause(Unknown Source)
at org.apache.lucene.queryParser.QueryParser.Query(Unknown Source)
at org.apache.lucene.queryParser.QueryParser.parse(Unknown Source)
at org.apache.lucene.queryParser.QueryParser.parse(Unknown Source)
at org.tigris.eyebrowse.LuceneIndexer.search(LuceneIndexer.java:207)
at org.tigris.eyebrowse.core.SearchList.core(SearchList.java:138)
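
Until the parser handles it, a common workaround is to backslash-escape the characters the query syntax reserves before parsing user input. A standalone sketch (the reserved-character list is an assumption inferred from the grammar tokens in the error above, not taken from Lucene itself):

```java
public class QueryEscape {
    // Characters assumed to be reserved by the query syntax.
    static final String RESERVED = "+-!(){}[]^\"~*?:\\";

    // Backslash-escapes each reserved character in user input.
    static String escape(String input) {
        StringBuilder out = new StringBuilder();
        for (char c : input.toCharArray()) {
            if (RESERVED.indexOf(c) >= 0) out.append('\\');
            out.append(c);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(escape("10:")); // 10\:
    }
}
```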


Migrated from LUCENE-10 by Daniel Rall, resolved May 27 2006
Environment:

Operating System: All
Platform: All

Directory implementation that uses ZIP [LUCENE-50]

Would it be possible to create an implementation of Directory that uses ZIP to
compress the index into a single file, i.e. ZIPDirectory? This would help in
distributing indices (with software) to be used on single workstations for
read-only purposes.


Migrated from LUCENE-50 by Jarno Elovirta, resolved Dec 19 2007
Environment:

Operating System: other
Platform: Other

Wildcard query only accepts 1 or more characters for prefixes (not 0 or more) [LUCENE-26]

When using the WildcardQuery class to do a wildcard search with the wildcard at
the end of the string, the wildcard is treated as "1 or more" characters. This
is inconsistent with other wildcard behaviour, and undesirable. The correct
behaviour is for the wildcard to represent 0 or more characters.

The error is due to a problem in the wildcard comparison method in
org.apache.lucene.search.WildcardTermEnum.

Patch to follow.
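
The expected semantics are easy to state as a tiny recursive matcher in which '*' consumes zero or more characters, so a pattern like "test*" must match "test" itself. This is an illustrative standalone matcher, not the WildcardTermEnum code:

```java
public class Glob {
    // Matches '*' as zero or more characters and '?' as exactly one.
    static boolean matches(String pattern, String text) {
        if (pattern.isEmpty()) return text.isEmpty();
        char p = pattern.charAt(0);
        if (p == '*') {
            // Try the zero-width match first, then try consuming a character.
            return matches(pattern.substring(1), text)
                || (!text.isEmpty() && matches(pattern, text.substring(1)));
        }
        return !text.isEmpty()
            && (p == '?' || p == text.charAt(0))
            && matches(pattern.substring(1), text.substring(1));
    }

    public static void main(String[] args) {
        System.out.println(matches("test*", "test"));  // true: '*' matches zero chars
        System.out.println(matches("test*", "tests")); // true
        System.out.println(matches("test?", "test"));  // false: '?' needs one char
    }
}
```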


Migrated from LUCENE-26 by Lee Mallabone, resolved May 27 2006
Environment:

Operating System: Linux
Platform: PC

Attachments: ASF.LICENSE.NOT.GRANTED--WildcardTermEnum.java.diff, ASF.LICENSE.NOT.GRANTED--WildcardTest.java

Delete is not multi-thread safe [LUCENE-12]

Here is a pseudo-code

writer.open()
writer.add(documentA);
writer.close() // this creates segment1 with 1 document

reader.open() // this reader can be opened by another process
writer.open() // this creates segment2 with one document
reader.delete(documentA) // using unique term // here delete is done in-memory
writer.add(documentB)
writer.close() // writer will merge two segments, delete segment2
// and will mark segment1 for deletion because
// reader holds files to segment1 open

reader.close() // reader writes out .del file, but that is too
// late

searcher.open()
searcher.search("term_common_to_docA_and_docB") // returns both docA and docB

It seems that either a) deletes should be write-through, or b) deletes should
be done by the writer, or c) writer should not optimize non-RAM segments unless
asked to. As a client, I like option b) the best, though, this is not the
easiest option to implement. My $0.02


Migrated from LUCENE-12 by Kiril Zack, resolved May 27 2006
Environment:

Operating System: other
Platform: All

reference to ParseException is ambiguous [LUCENE-8]

Is it good that two exceptions in different packages have the same name?

"SearchRAM.java": Error #: 304 : reference to ParseException is ambiguous; both
class org.apache.lucene.queryParser.ParseException in package
org.apache.lucene.queryParser and class
org.apache.lucene.analysis.standard.ParseException in package
org.apache.lucene.analysis.standard match at line 186, column 13

I don't want to write a long name like this:
}catch( org.apache.lucene.queryParser.ParseException e ){
- it's not good style.

Thank you !


Migrated from LUCENE-8 by Serge A. Redchuk, resolved May 27 2006
Environment:

Operating System: All
Platform: PC

Internationalized search [LUCENE-43]

I would like Lucene to be able to search content in multiple languages, with
canadian French and Spanish being the top two that I am concerned with, by just
being able to specify the locale at the time of the search.


Migrated from LUCENE-43 by Andrig T. Miller, resolved May 27 2006
Environment:

Operating System: other
Platform: Other

Parse Aborted: Lexical error [LUCENE-22]

This report is for Lucene 1.2 RC4.

When indexing a 4 MB size text file using the HTMLIndexer from the demo package
I get the following error:

Parse Aborted: Lexical error at line 106377, column 64. Encountered: "=" (61),
after : "".

The line actually contains nothing more than a lot of spaces followed by word,
so nothing special.

If I delete the line, the error still occurs.
If I crop the file so that the file contains less than 106377 lines, the error
still occurs.
If I crop the file further, at a certain point the error disappears.

The error does not occur when using the IndexFiles indexer.

Michael


Migrated from LUCENE-22 by michael.suedkamp, resolved May 27 2006
Environment:

Operating System: All
Platform: PC

GermanStemFilter setting wrong values for startoffset/endoffset of stemmed tokens [LUCENE-23]

The GermanStemFilter sets wrong values to the new Token object created when the
stemmer succeeds in stemming the termText() string. Bug found in 1.2-RC5-dev


Example, for the processing of the string "this is a simple test":
token : thi (0,3)
token : is (5,7)
token : a (8,9)
token : simpl (0,5)
token : test (17,21)

(all the stemmed tokens have wrong start/end offsets).
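
The fix amounts to copying the original token's offsets onto the stemmed token instead of recomputing them. A standalone sketch with a minimal Token stand-in and a deliberately crude hypothetical stemmer (illustrative; not Lucene's Token API):

```java
public class StemOffsets {
    // Minimal stand-in for an analysis token with character offsets.
    record Token(String text, int start, int end) {}

    // Hypothetical stemmer: crude suffix stripping, enough to shorten a term.
    static String stem(String term) {
        return term.length() > 3 ? term.substring(0, term.length() - 1) : term;
    }

    // Correct behaviour: the new token reuses the ORIGINAL start/end offsets,
    // even though the stemmed text is shorter.
    static Token stemToken(Token t) {
        return new Token(stem(t.text()), t.start(), t.end());
    }

    public static void main(String[] args) {
        Token in = new Token("this", 0, 4);   // "this" at offsets 0-4
        System.out.println(stemToken(in));    // text shrinks, offsets stay 0-4
    }
}
```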


Migrated from LUCENE-23 by Rodrigo Reyes, resolved May 27 2006
Environment:

Operating System: Linux
Platform: PC

Attachments: ASF.LICENSE.NOT.GRANTED--germanstemfilter.patch.diff

IndexOutOfBoundsException from QueryParser [LUCENE-20]

Version - 1.2 RC4

It looks like the query parser throws this exception when one of the terms is a
single character.

search expression fed to query parser: x AND test

java.lang.ArrayIndexOutOfBoundsException: -1 < 0
at java.util.Vector.elementAt(Vector.java:427)
at org.apache.lucene.queryParser.QueryParser.addClause(Unknown Source)
at org.apache.lucene.queryParser.QueryParser.Query(Unknown Source)
at org.apache.lucene.queryParser.QueryParser.parse(Unknown Source)
at org.apache.lucene.queryParser.QueryParser.parse(Unknown Source)


Migrated from LUCENE-20 by Jay Jayaprasad, resolved May 27 2006
Environment:

Operating System: other
Platform: PC

Use of BitVector [LUCENE-33]

Hi!
I'm using the BitVector and Directory classes to write and read bits in a file,
but when I compare what was written with what was read, they are different!
000010000011000001 != 011000101001101100
Why?

How can I use the Directory and BitVector classes to do this?


Migrated from LUCENE-33 by Gerardo Tibaná, resolved May 27 2006
Environment:

Operating System: All
Platform: PC

new IndexReader.terms(myterm) skips over first term [LUCENE-6]

if I do

IndexReader r = IndexReader.open("index");
Term t = new Term("contents", "heidegger");
TermEnum terms = r.terms(t);
out.println("zero-term: " + terms.term().text() + "<br>");
int cnt = 0;
while (terms.next()) {
    out.println("term: " + terms.term().text() + "<br>");
    if (cnt++ > 5) break;
}

then the first term I see in the main loop after terms.next() is
not "heidegger", even though this is in my index. If I query the enumerator
BEFORE calling next(), the term is there.
However, the comments in TermEnum.term() says that this method is only valid
after the first next() and all other enumerators work that way too.

The terms(Term) should give back the actual term first, just as it says it
does, right?

The enumerator skips over the first term if I search for a non-existing term
like "heidegge" as well.

This means that a PrefixQuery will not work as expected since it uses this
enumerator, right?
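
The confusion is about enumerator positioning: terms(t) hands back an enumerator already positioned on the first term, so calling next() before reading term() skips it. A do/while loop over a stand-in enumerator shows the safe pattern (standalone; MiniEnum is illustrative, not Lucene's TermEnum):

```java
import java.util.ArrayList;
import java.util.List;

public class EnumPositioning {
    // Stand-in enumerator that, like IndexReader.terms(Term), is already
    // positioned on the first element when handed out.
    static class MiniEnum {
        private final List<String> terms;
        private int pos = 0;
        MiniEnum(List<String> terms) { this.terms = terms; }
        String term() { return pos < terms.size() ? terms.get(pos) : null; }
        boolean next() { pos++; return pos < terms.size(); }
    }

    // Correct pattern: read term() first, advance afterwards - a while loop
    // that calls next() up front would silently drop the first term.
    static List<String> drain(MiniEnum e) {
        List<String> out = new ArrayList<>();
        do {
            if (e.term() != null) out.add(e.term());
        } while (e.next());
        return out;
    }

    public static void main(String[] args) {
        MiniEnum e = new MiniEnum(List.of("heidegger", "hermeneutics"));
        System.out.println(drain(e)); // [heidegger, hermeneutics]
    }
}
```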


Migrated from LUCENE-6 by Anne Veling, resolved May 27 2006
Environment:

Operating System: All
Platform: PC

Removing a file from index does not remove all references to file. [LUCENE-40]

If I walk through a list of files and remove them from
the index, it appears that only half of the references are
really removed.

I have included the java class I use to create and update
the index below. If you have any questions, please let me
know. I can also provide the intellij plugin (with
source) that does the indexing if you want to use it
for debugging.

In the source below, if I index all the ANT manual API's, and
do a search for "Ant", I get 531 hits. After doing the remove
operation where I remove all the ant manuals docs from the
index, I get around 260 hits. On windows, I sometimes continue
to get 531 hits.

If this ends up being unreadable...just email me for the file:

/*
 * Created by IntelliJ IDEA.
 * User: rvestal
 * Date: Jun 11, 2002
 * Time: 8:52:43 PM
 * To change template for new class use
 * Code Style | Class Templates options (Tools | IDE Options).
 */
package org.intellij.plugins.docPlugin.tools.finder;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

import java.io.*;
import java.util.Iterator;
import java.util.Vector;

/**
 * Runs a javadoc indexer on a thread.
 */
public class Indexer extends Thread {

    /** The initiating search pane. */
    private IndexerTab mIndexTab;

    /** The vector of files to add to the index. */
    private Vector mFilesToAdd;

    /** The vector of files to remove from the index. */
    private Vector mFilesToRemove;

    /** The index dir. */
    private String mIndexDir;

    /**
     * Constructor
     *
     * @param pane          The initiating search pane.
     * @param filesToAdd    The files to add to the index.
     * @param filesToRemove The files to remove from the index.
     */
    public Indexer( IndexerTab pane, Vector filesToAdd,
                    Vector filesToRemove, String indexDir ) {
        mIndexTab = pane;
        mFilesToAdd = filesToAdd;
        mFilesToRemove = filesToRemove;
        mIndexDir = indexDir;
    }

    /**
     * If this thread was constructed using a separate
     * <code>Runnable</code> run object, then that
     * <code>Runnable</code> object's <code>run</code> method is called;
     * otherwise, this method does nothing and returns.
     * <p>
     * Subclasses of <code>Thread</code> should override this method.
     */
    public void run() {
        File indexDir = new File( mIndexDir );
        if ( !indexDir.exists() ) {
            indexDir.mkdirs();
        }

        removeFiles();
        addFiles();

        mIndexTab.updateIndexingProgress( mFilesToAdd.size() +
                                          mFilesToRemove.size() + 3, "" );
        mIndexTab.doneIndexing();
    }

    /**
     * Add a set of files to the index.
     */
    private void addFiles() {
        IndexWriter writer = null;
        try {
            try {
                writer = new IndexWriter( mIndexDir, new StandardAnalyzer(),
                                          false );
            } catch ( FileNotFoundException ex ) {
                writer = new IndexWriter( mIndexDir, new StandardAnalyzer(),
                                          true );
            }

            for ( int ix = 0; ix < mFilesToAdd.size(); ix++ ) {
                File file = (File) mFilesToAdd.get( ix );
                mIndexTab.updateIndexingProgress( ix + mFilesToRemove.size(),
                        formDisplayString( file.getAbsolutePath() ) );
                writer.addDocument( JavadocIndexerDocument.createDocument( file ) );
                if ( mIndexTab.isIndexingCanceled() ) {
                    break;
                }
            }
        } catch ( Exception ex ) {
            ex.printStackTrace();
        } finally {
            if ( writer != null ) {
                try {
                    writer.optimize();
                    writer.close();
                } catch ( Exception ex ) {
                    ex.printStackTrace();
                }
            }
        }
    }

    /**
     * Remove files from the index.
     *
     * todo Investigate why this doesn't remove ALL references...seems to only
     * remove half.
     */
    private void removeFiles() {
        if ( mFilesToRemove.size() == 0 ) {
            return;
        }

        Directory directory = null;
        IndexReader reader = null;
        try {
            directory = FSDirectory.getDirectory( mIndexDir, false );
            try {
                reader = IndexReader.open( directory );
            } catch ( FileNotFoundException ex ) {
                return;
            }
            removeFilesFromReader( reader );
        } catch ( Exception ex ) {
            ex.printStackTrace();
        }

        if ( reader != null ) {
            try {
                reader.close();
            } catch ( IOException e ) {
                e.printStackTrace();
            }
        }
        if ( directory != null ) {
            try {
                directory.close();
            } catch ( Exception e ) {
                e.printStackTrace();
            }
        }
    }

    /**
     * Remove the files from the reader.
     *
     * @param reader The index reader
     * @throws IOException on error
     */
    private void removeFilesFromReader( IndexReader reader )
            throws IOException {
        int count = 0;
        for ( Iterator iterator = mFilesToRemove.iterator(); iterator.hasNext(); ) {
            String path = ( (File) iterator.next() ).getAbsolutePath();

            deleteIndiciesForPath( reader, path );

            mIndexTab.updateIndexingProgress( ++count, "Removing "
                                              + formDisplayString( path ) );

            if ( mIndexTab.isIndexingCanceled() ) {
                break;
            }
        }
    }

    /**
     * Delete the indicies in the search stuff for a specific file.
     *
     * @param reader The index reader.
     * @param path   The path to remove from the index
     * @throws IOException on error
     */
    private void deleteIndiciesForPath( IndexReader reader, String path )
            throws IOException {
        int numDocs = reader.numDocs();
        for ( int ix = 0; ix < numDocs; ix++ ) {
            if ( !reader.isDeleted( ix ) ) {
                String docPath = JavadocIndexerDocument.getPath( reader.document( ix ) );
                if ( docPath.indexOf( path ) != -1 ) {
                    reader.delete( ix );
                    break;
                }
            }
        }
    }

    /**
     * Form a display string to pass to the progress dialog.
     *
     * @param path The path to the file being indexed.
     * @return A display string for the path.
     */
    private String formDisplayString( String path ) {
        if ( path.length() > 36 ) {
            int endIndex = path.indexOf( '/', 6 );
            if ( endIndex == -1 ) {
                endIndex = path.indexOf( '\\', 6 );
            }
            if ( endIndex != -1 ) {
                String newPath = path.substring( 0, endIndex + 1 );
                newPath += "...";
                int lastIndex = path.lastIndexOf( '/', path.length() - 29 );
                if ( lastIndex == -1 ) {
                    lastIndex = path.lastIndexOf( '\\', path.length() - 29 );
                }
                if ( lastIndex < 0 ) {
                    lastIndex = Math.max( 6, path.length() - 29 );
                }
                newPath += path.substring( lastIndex, path.length() );
                return newPath;
            }
        }
        return path;
    }

}


Migrated from LUCENE-40 by Rick Vestal, resolved May 27 2006
Environment:

Operating System: Linux
Platform: PC

Prefix Queries cannot be case insensitive [LUCENE-3]

Hi. I am using a cvs version of Lucene (got on 2001.10.11).

I am having the following problem: while I can achieve case insensitivity in the
search engine by using the right tokenizer (a derivative of
LowerCaseTokenizer, which passes alphanumeric characters instead of letters only
as token components), I cannot make this feature work with prefix queries.

As I am currently working on the problem myself, I can submit a fix for
this bug at some point in the future.


Migrated from LUCENE-3 by Andrzej Jarmoniuk, resolved May 27 2006
Environment:

Operating System: other
Platform: Other

IndexWriter [LUCENE-17]

While the input sources are abstracted, the indices are always files in the
FileSystem. It would be nice to abstract the IndexWriter to output to other
data stores.

For example, I would like to try to use Lucene to index and search a set of
short-lived documents while involved in a P2P environment. Ideally, this
index (which would be a single merge for each peer) could reside in memory rather
than in the file system (for reasons of security as much as anything else –
I'd prefer not to require permission to write out to the user's filesystem).

I think it'd be a nice addition to Lucene. It would make the Lucene engine
more easily embedded into other apps.
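
The abstraction the reporter asks for can be sketched as a minimal store interface with an in-memory backend. The names here (IndexStore, MemoryStore) are hypothetical, not Lucene API; later Lucene versions did ship in-memory Directory implementations:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: an index store abstraction with an in-memory backend,
// so index data never has to touch the user's filesystem.
interface IndexStore {
    void writeFile(String name, byte[] data);
    byte[] readFile(String name);
}

class MemoryStore implements IndexStore {
    private final Map<String, byte[]> files = new HashMap<>();
    public void writeFile(String name, byte[] data) { files.put(name, data.clone()); }
    public byte[] readFile(String name) { return files.get(name); }
}

public class InMemoryIndexDemo {
    public static void main(String[] args) {
        IndexStore store = new MemoryStore();
        store.writeFile("segments", new byte[] {1, 2, 3});
        System.out.println(store.readFile("segments").length); // prints 3
    }
}
```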


Migrated from LUCENE-17 by Stanford Ng, resolved Sep 03 2005
Environment:

Operating System: All
Platform: All

QueryParser produces empty BooleanQueries when all clauses are stop words [LUCENE-25]

When I want to do the following query (example):
(fieldx : xxxxx OR fieldy : xxxxxxxx) AND (fieldx : stopword OR fieldy : stopword)
it will search (after the Analyzer, including a StopFilter, has run) for
(fieldx : xxxxx OR fieldy : xxxxxx) AND () and give a wrong search result or a
NullPointerException.
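
The fix this report implies, dropping sub-queries that become empty after stop-word filtering instead of emitting an empty AND() clause, can be sketched in plain Java (illustrative only, not the actual Lucene change):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class EmptyClauseDemo {
    // Filter stop words out of each clause; drop clauses with no terms left,
    // rather than passing an empty clause into the boolean query.
    static List<List<String>> filterClauses(List<List<String>> clauses, Set<String> stopWords) {
        List<List<String>> kept = new ArrayList<>();
        for (List<String> clause : clauses) {
            List<String> terms = new ArrayList<>(clause);
            terms.removeAll(stopWords);
            if (!terms.isEmpty()) kept.add(terms); // skip clauses that became empty
        }
        return kept;
    }

    public static void main(String[] args) {
        Set<String> stopWords = Set.of("the", "a");
        List<List<String>> clauses = List.of(
                List.of("xxxxx", "yyyy"),
                List.of("the", "a")); // all stop words: empty after filtering
        System.out.println(filterClauses(clauses, stopWords)); // [[xxxxx, yyyy]]
    }
}
```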


Migrated from LUCENE-25 by Puk Witte, 1 vote, resolved Aug 21 2008
Environment:

Operating System: All
Platform: PC

Attachments: ASF.LICENSE.NOT.GRANTED--patch9.txt, LUCENE-25.patch
Linked issues:

QueryParser not handling TokenMgrError [LUCENE-14]

Hi,

With lucene-1.2-rc3, when I perform a search for "*", a TokenMgrError is thrown
(see below). As a workaround, I started catching
org.apache.lucene.queryParser.TokenMgrError in my application, but I believe the
correct solution is to change QueryParser.jj to catch TokenMgrError
and throw a ParseException instead (I just don't know the best place to do
it ;-)

Best regards,

--Daniel

org.apache.lucene.queryParser.TokenMgrError: Lexical error at line 1, column
11. Encountered: <EOF> after : ""
at org.apache.lucene.queryParser.QueryParserTokenManager.getNextToken
(Unknown Source)
at org.apache.lucene.queryParser.QueryParser.jj_ntk(Unknown Source)
at org.apache.lucene.queryParser.QueryParser.Clause(Unknown Source)
at org.apache.lucene.queryParser.QueryParser.Query(Unknown Source)
at org.apache.lucene.queryParser.QueryParser.parse(Unknown Source)
at org.apache.lucene.queryParser.QueryParser.parse(Unknown Source)
at com.weblib.search.SearchServlet.doGet(Unknown Source)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:740)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter
(ApplicationFilterChain.java:247)
at org.apache.catalina.core.ApplicationFilterChain.doFilter
(ApplicationFilterChain.java:193)
at org.apache.catalina.core.StandardWrapperValve.invoke
(StandardWrapperValve.java:243)
at org.apache.catalina.core.StandardPipeline.invokeNext
(StandardPipeline.java:566)
at org.apache.catalina.core.StandardPipeline.invoke
(StandardPipeline.java:472)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:943)
at org.apache.catalina.core.StandardContextValve.invoke
(StandardContextValve.java:201)
at org.apache.catalina.core.StandardPipeline.invokeNext
(StandardPipeline.java:566)
at org.apache.catalina.valves.CertificatesValve.invoke
(CertificatesValve.java:246)
at org.apache.catalina.core.StandardPipeline.invokeNext
(StandardPipeline.java:564)
at org.apache.catalina.core.StandardPipeline.invoke
(StandardPipeline.java:472)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:943)
at org.apache.catalina.core.StandardContext.invoke
(StandardContext.java:2344)
at org.apache.catalina.core.StandardHostValve.invoke
(StandardHostValve.java:164)
at org.apache.catalina.core.StandardPipeline.invokeNext
(StandardPipeline.java:566)
at org.apache.catalina.valves.ErrorDispatcherValve.invoke
(ErrorDispatcherValve.java:170)
at org.apache.catalina.core.StandardPipeline.invokeNext
(StandardPipeline.java:564)
at org.apache.catalina.valves.ErrorReportValve.invoke
(ErrorReportValve.java:170)
at org.apache.catalina.core.StandardPipeline.invokeNext
(StandardPipeline.java:564)
at org.apache.catalina.valves.AccessLogValve.invoke
(AccessLogValve.java:462)
at org.apache.catalina.core.StandardPipeline.invokeNext
(StandardPipeline.java:564)
at org.apache.catalina.core.StandardPipeline.invoke
(StandardPipeline.java:472)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:943)
at org.apache.catalina.core.StandardEngineValve.invoke
(StandardEngineValve.java:163)
at org.apache.catalina.core.StandardPipeline.invokeNext
(StandardPipeline.java:566)
at org.apache.catalina.core.StandardPipeline.invoke
(StandardPipeline.java:472)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:943)
at org.apache.catalina.connector.http.HttpProcessor.process
(HttpProcessor.java:1011)
at org.apache.catalina.connector.http.HttpProcessor.run
(HttpProcessor.java:1106)
at java.lang.Thread.run(Thread.java:539)
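
The suggested fix, catching the Error inside the parser and rethrowing it as a checked ParseException, can be sketched with simplified stand-in classes (these are not the real JavaCC-generated types):

```java
// TokenMgrError extends Error, so a plain `catch (Exception e)` in the
// application never sees it; the parser has to catch it explicitly and
// convert it into a checked ParseException the caller can handle.
class TokenMgrError extends Error {
    TokenMgrError(String msg) { super(msg); }
}

class ParseException extends Exception {
    ParseException(String msg) { super(msg); }
}

public class ParseDemo {
    // Stand-in lexer: rejects queries starting with "*", like the old grammar.
    static void lex(String q) {
        if (q.startsWith("*")) throw new TokenMgrError("Lexical error at column 1");
    }

    static void parse(String q) throws ParseException {
        try {
            lex(q);
        } catch (TokenMgrError e) {
            throw new ParseException(e.getMessage()); // surface as a checked exception
        }
    }

    static String safeParse(String q) {
        try {
            parse(q);
            return "ok";
        } catch (ParseException e) {
            return "parse error: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(safeParse("*"));   // parse error: Lexical error at column 1
        System.out.println(safeParse("foo")); // ok
    }
}
```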


Migrated from LUCENE-14 by Daniel Calvo, resolved Sep 03 2005
Environment:

Operating System: All
Platform: PC

Special Characters inside a query resolve in wrong hits [LUCENE-49]

Sample:
Index got the key "e-documentation" one time.

query: e-documentation
result: wrong; hits = 0;
Lucene treats it as a boolean operator (e -documentation);

query: "e-documentation"
result: correct, hits = 1

query: e-documentation
result: correct, hits = 1;

query: e-doc*
result: wrong; hits = 0;
Lucene is not able to find "e-documentation"
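
A common workaround is to escape query-syntax characters before parsing. A minimal sketch in plain Java (an illustrative helper, not Lucene's QueryParser; later Lucene versions added a QueryParser.escape utility for this purpose):

```java
public class QueryEscapeDemo {
    // Escape characters that the query syntax would otherwise interpret
    // (here just '-' as NOT and '*' as wildcard, for illustration).
    static String escape(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            if (c == '-' || c == '*') sb.append('\\');
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // "e-documentation" is no longer split into "e -documentation".
        System.out.println(escape("e-documentation")); // e\-documentation
    }
}
```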


Migrated from LUCENE-49 by Christian Pino Tossi, resolved May 27 2006
Environment:

Operating System: All
Platform: PC

Problem creating directories for FSDirectory [LUCENE-16]

There is a problem when you try to create a directory (FSDirectory) that doesn't exist and you
specify a path with a depth greater than 1.

For example: lucia/index/

The problem lies in the method:

private synchronized void create() throws IOException {
    if (!directory.exists())
        directory.mkdir(); // <======== HERE SHOULD BE mkdirs();

    String[] files = directory.list(); // clear old files
    for (int i = 0; i < files.length; i++) {
        File file = new File(directory, files[i]);
        if (!file.delete())
            throw new IOException("couldn't delete " + files[i]);
    }
}
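
The difference between mkdir() and mkdirs() is easy to demonstrate with plain java.io.File (a standalone sketch, independent of Lucene):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class MkdirDemo {
    // mkdir() creates only the last path component, so it fails when any
    // parent directory is missing; mkdirs() creates the whole chain.
    static boolean createSingle(File dir) { return dir.mkdir(); }
    static boolean createAll(File dir)    { return dir.mkdirs(); }

    public static void main(String[] args) throws IOException {
        File base = Files.createTempDirectory("lucia-demo").toFile();
        File nested = new File(base, "lucia/index"); // depth > 1, parent missing
        System.out.println(createSingle(nested)); // false: parent "lucia" absent
        System.out.println(createAll(nested));    // true: creates lucia/ then index/
    }
}
```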

Regards


Migrated from LUCENE-16 by Luis Peña, resolved May 27 2006
Environment:

Operating System: other
Platform: All

Attachments: ASF.LICENSE.NOT.GRANTED--fsdir.patch

logic error in QueryParser and in BooleanQuery ! [LUCENE-9]

I think there's an "ideo-logic" error in QueryParser
(and in BooleanQuery!):

when I search for something like "love OR NOT onion"
I receive the same result as when I search for "love AND NOT onion".
IMHO it's wrong.

Suppose we have 4 docs:
doc1: "Love is life"
doc2: "Java is pretty nice language"
doc3: "C++ is powerful, but unsafe"
doc4: "Onion and love sometimes are not compatible"

So, if search for "love OR NOT onion"
result must be: doc1, doc2, doc3.
(everything where the word "onion" isn't present, because we say "OR")

but, we have the same result as in case of search for:
"love AND NOT onion":
result: doc1.

So, I have created my own parser using BooleanQuery, hoping that would help
me, but unfortunately it didn't.

Please fix it as soon as you can!
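
The two interpretations can be illustrated with plain sets (hypothetical doc ids matching the example above, not Lucene scoring): the reporter reads "OR NOT onion" as "everything where onion is absent", evaluated against the whole collection, while the observed behavior matches "love AND NOT onion".

```java
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class OrNotDemo {
    // Set difference: a \ b.
    static Set<Integer> difference(Set<Integer> a, Set<Integer> b) {
        Set<Integer> r = new HashSet<>(a);
        r.removeAll(b);
        return r;
    }

    public static void main(String[] args) {
        Set<Integer> all   = Set.of(1, 2, 3, 4);
        Set<Integer> love  = Set.of(1, 4); // docs containing "love"
        Set<Integer> onion = Set.of(4);    // docs containing "onion"

        // Expected: "NOT onion" over the whole collection -> docs 1, 2, 3
        System.out.println(new TreeSet<>(difference(all, onion)));  // [1, 2, 3]
        // Observed: behaves like "love AND NOT onion" -> doc 1 only
        System.out.println(new TreeSet<>(difference(love, onion))); // [1]
    }
}
```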


Migrated from LUCENE-9 by Serge A. Redchuk, resolved May 27 2006
Environment:

Operating System: All
Platform: PC

build failed in GermanStemmer on platform with default encoding GBK [LUCENE-19]

I built lucene with ant 1.4.1 on my Chinese-language Windows, whose default
file.encoding is GBK. The build failed with the following javac errors, but
built successfully on Linux with default file.encoding = ISO8859_1.

[javac] D:\java\jakarta-lucene\src\java\org\apache\lucene\analysis\de\GermanStemmer.java:162: unclosed character literal
[javac] else if ( buffer.charAt( c ) == '? ) {
[javac] ^
[javac] D:\java\jakarta-lucene\src\java\org\apache\lucene\analysis\de\GermanStemmer.java:165: unclosed character literal
[javac] else if ( buffer.charAt( c ) == '? ) {
[javac] ^
[javac] D:\java\jakarta-lucene\src\java\org\apache\lucene\analysis\de\GermanStemmer.java:168: unclosed character literal
[javac] else if ( buffer.charAt( c ) == '? ) {
[javac] ^
[javac] D:\java\jakarta-lucene\src\java\org\apache\lucene\analysis\de\GermanStemmer.java:173: unclosed character literal
[javac] if ( buffer.charAt( c ) == '? ) {
[javac] ^
[javac] D:\java\jakarta-lucene\src\java\org\apache\lucene\analysis\de\GermanStemmer.java:185: unclosed character literal
[javac] buffer.setCharAt( c, '? );
[javac] ^
[javac] D:\java\jakarta-lucene\src\java\org\apache\lucene\analysis\de\GermanStemmer.java:209: ')' expected
[javac] }
[javac] ^
[javac] D:\java\jakarta-lucene\src\java\org\apache\lucene\analysis\de\GermanStemmer.java:210: illegal start of expression
[javac] }
[javac] ^
[javac] D:\java\jakarta-lucene\src\java\org\apache\lucene\analysis\de\GermanStemmer.java:264: unclosed character literal
[javac] else if ( buffer.charAt( c ) == '? ) {
[javac] ^
[javac] D:\java\jakarta-lucene\src\java\org\apache\lucene\analysis\de\GermanStemmer.java:283: ')' expected
[javac] }
[javac] ^
[javac] D:\java\jakarta-lucene\src\java\org\apache\lucene\analysis\de\GermanStemmer.java:284: illegal start of expression
[javac] }
[javac] ^
[javac] 11 errors

So, is it possible to use Unicode \u#### escapes instead of these non-ASCII chars?

Che Dong
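
Unicode escapes keep the source file pure ASCII, so compilation no longer depends on the platform's default file.encoding. A minimal demonstration (assuming the character in question is, for example, the German umlaut 'ä', which is \u00e4):

```java
public class UmlautDemo {
    // '\u00e4' is the Unicode escape for 'ä'. Written this way, the source
    // file contains only ASCII and compiles identically under GBK, ISO8859_1,
    // or any other default encoding.
    static char umlautA() { return '\u00e4'; }

    public static void main(String[] args) {
        System.out.println((int) umlautA()); // prints 228 (0xE4)
    }
}
```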


Migrated from LUCENE-19 by Che Dong, resolved May 27 2006
Environment:

Operating System: All
Platform: PC

Prefix Queries cannot be case insensitive [LUCENE-2]

Hi. I am using a cvs version of Lucene (got on 2001.10.11).

I am having the following problem: while I can achieve case insensitivity of the
search engine by using the correct tokenizer (a derivative of
LowerCaseTokenizer, which passes alphanumeric characters instead of letters only
as token components), I cannot make this feature work with prefix queries.

As I am currently working on the problem myself, I can submit a solution fixing
this bug in the near future.


Migrated from LUCENE-2 by Andrzej Jarmoniuk, resolved May 27 2006
Environment:

Operating System: other
Platform: Other

Lucene rc3 crashes with some (a few) phrase query searches with a NullPointerException [LUCENE-13]

Hi,

I'm running lucene-1.2-rc3 with Tomcat 4.0.1, and when performing some phrase
queries I get a NullPointerException. Among the files indexed, I have
"JavaServer Pages(TM): A Developer's Perspective", an article by Scott
McPherson that can be found on Sun's site (an HTML doc). With a phrase query
like "javaserver pages" Lucene crashes. I was able to reproduce the problem
with a few phrase queries, but most queries worked fine. I also tried
lucene-1.2-rc2 and had no problem (using the same index and queries).

Here is a stack trace collected from Tomcat's log file

java.lang.NullPointerException
at org.apache.lucene.index.SegmentTermPositions.seek(Unknown Source)
at org.apache.lucene.index.SegmentTermDocs.seek(Unknown Source)
at org.apache.lucene.index.SegmentsTermDocs.termDocs(Unknown Source)
at org.apache.lucene.index.SegmentsTermDocs.next(Unknown Source)
at org.apache.lucene.search.PhrasePositions.next(Unknown Source)
at org.apache.lucene.search.PhraseScorer.score(Unknown Source)
at org.apache.lucene.search.BooleanScorer.score(Unknown Source)
at org.apache.lucene.search.IndexSearcher.search(Unknown Source)
at org.apache.lucene.search.Hits.getMoreDocs(Unknown Source)
at org.apache.lucene.search.Hits.<init>(Unknown Source)
at org.apache.lucene.search.Searcher.search(Unknown Source)
at org.apache.lucene.search.Searcher.search(Unknown Source)
at com.weblib.search.SearchServlet.doGet(Unknown Source)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:740)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter
(ApplicationFilterChain.java:247)
at org.apache.catalina.core.ApplicationFilterChain.doFilter
(ApplicationFilterChain.java:193)
at org.apache.catalina.core.StandardWrapperValve.invoke
(StandardWrapperValve.java:243)
at org.apache.catalina.core.StandardPipeline.invokeNext
(StandardPipeline.java:566)
at org.apache.catalina.core.StandardPipeline.invoke
(StandardPipeline.java:472)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:943)
at org.apache.catalina.core.StandardContextValve.invoke
(StandardContextValve.java:201)
at org.apache.catalina.core.StandardPipeline.invokeNext
(StandardPipeline.java:566)
at org.apache.catalina.valves.CertificatesValve.invoke
(CertificatesValve.java:246)
at org.apache.catalina.core.StandardPipeline.invokeNext
(StandardPipeline.java:564)
at org.apache.catalina.core.StandardPipeline.invoke
(StandardPipeline.java:472)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:943)
at org.apache.catalina.core.StandardContext.invoke
(StandardContext.java:2344)
at org.apache.catalina.core.StandardHostValve.invoke
(StandardHostValve.java:164)
at org.apache.catalina.core.StandardPipeline.invokeNext
(StandardPipeline.java:566)
at org.apache.catalina.valves.ErrorDispatcherValve.invoke
(ErrorDispatcherValve.java:170)
at org.apache.catalina.core.StandardPipeline.invokeNext
(StandardPipeline.java:564)
at org.apache.catalina.valves.ErrorReportValve.invoke
(ErrorReportValve.java:170)
at org.apache.catalina.core.StandardPipeline.invokeNext
(StandardPipeline.java:564)
at org.apache.catalina.valves.AccessLogValve.invoke
(AccessLogValve.java:462)
at org.apache.catalina.core.StandardPipeline.invokeNext
(StandardPipeline.java:564)
at org.apache.catalina.core.StandardPipeline.invoke
(StandardPipeline.java:472)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:943)
at org.apache.catalina.core.StandardEngineValve.invoke
(StandardEngineValve.java:163)
at org.apache.catalina.core.StandardPipeline.invokeNext
(StandardPipeline.java:566)
at org.apache.catalina.core.StandardPipeline.invoke
(StandardPipeline.java:472)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:943)
at org.apache.catalina.connector.http.HttpProcessor.process
(HttpProcessor.java:1011)
at org.apache.catalina.connector.http.HttpProcessor.run
(HttpProcessor.java:1106)
at java.lang.Thread.run(Thread.java:539)


Migrated from LUCENE-13 by Daniel Calvo, resolved Sep 03 2005
Environment:

Operating System: All
Platform: PC

Error parsing search strings that start with "*" [LUCENE-39]

...when running QueryParser.parse().

Stack trace is attached. Sorry if this bug is a duplicate.

I'm actually running 1.2rc4.

Thank you!

org.apache.lucene.queryParser.TokenMgrError: Lexical error at line 1, column
1. Encountered: "*" (42), after : ""
    at org.apache.lucene.queryParser.QueryParserTokenManager.getNextToken
    (Unknown Source)
    at org.apache.lucene.queryParser.QueryParser.jj_ntk(Unknown Source)
    at org.apache.lucene.queryParser.QueryParser.Modifiers(Unknown Source)
    at org.apache.lucene.queryParser.QueryParser.Query(Unknown Source)
    at org.apache.lucene.queryParser.QueryParser.parse(Unknown Source)
    at org.apache.lucene.queryParser.QueryParser.parse(Unknown Source)
    at com.bg.websearch.WebSearcher.search(WebSearcher.java:145)
    at search._0002fsearch_0002fprocess_0002ejspprocess_jsp_1._jspService
    (0002fsearch_0002fprocess_0002ejspprocess
    jsp_1.java:97)
    at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:119)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
    at org.apache.jasper.servlet.JspServlet$JspServletWrapper.service
    (JspServlet.java:177)
    at org.apache.jasper.servlet.JspServlet.serviceJspFile
    (JspServlet.java:318)
    at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:391)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
    at org.apache.tomcat.core.ServletWrapper.doService
    (ServletWrapper.java:404)
    at org.apache.tomcat.core.Handler.service(Handler.java:286)
    at org.apache.tomcat.core.ServletWrapper.service
    (ServletWrapper.java:372)
    at org.apache.tomcat.core.ContextManager.internalService
    (ContextManager.java:797)
    at org.apache.tomcat.core.ContextManager.service
    (ContextManager.java:743)
    at
    org.apache.tomcat.service.http.HttpConnectionHandler.processConnection
    (HttpConnectionHandler.java:210)
    at org.apache.tomcat.service.TcpWorkerThread.runIt
    (PoolTcpEndpoint.java:416)
    at org.apache.tomcat.util.ThreadPool$ControlRunnable.run
    (ThreadPool.java:498)
    at java.lang.Thread.run(Thread.java:484)

Migrated from LUCENE-39 by Michael Mendelson, resolved May 27 2006
Environment:

Operating System: Linux
Platform: PC

OR returning 0 hits when one clause returns 0 hits [LUCENE-21]

Version - 1.2 RC4

When a query with an OR contains a clause that returns 0 hits, Lucene returns 0
hits overall even if the other clause matched. For example, the search
expression:
freedent OR trident
returns 0 matches. However, searching on freedent alone returns several matches;
searching on trident returns 0 matches.


Migrated from LUCENE-21 by Jay Jayaprasad, resolved May 27 2006
Environment:

Operating System: other
Platform: PC

ACL's on search content [LUCENE-45]

I would like Lucene to be able to limit its search to specific
content by arbitrary criteria, such as user and company information. We need
this kind of capability for our e-commerce website, where company and
user rules would prohibit specific users from seeing our entire
catalog.


Migrated from LUCENE-45 by Andrig T. Miller, resolved Sep 03 2005
Environment:

Operating System: other
Platform: Other
