Hebrew Search with ElasticSearch and HebMorph

HebMorph, ElasticSearch, IR, Lucene

Hebrew search is not an easy task, and HebMorph is a project I started several years ago to address that problem. After a period of inactivity I'm back to actively working on it. I'm also happy to say there are already several live systems using it to enable Hebrew search in their applications.

This post is a short step-by-step guide on using HebMorph in an ElasticSearch installation. There are quite a few configuration options and things to consider when enabling Hebrew search, most of them performance vs relevance trade-offs, but I'll cover those in a separate post.

0. What exactly is HebMorph?

HebMorph is a project somewhat broader than just a Hebrew search plugin for ElasticSearch, but for the purposes of this post let's treat it in that narrow sense.

HebMorph has 3 main parts: the hspell dictionary files; the hebmorph-core package, which wraps the dictionary files and adds the important bits that allow locating words even when they aren't written exactly as they appear in the dictionary; and the hebmorph-lucene package, which contains various tools for processing streams of text into Lucene tokens - the searchable parts.

To enable Hebrew search from ElasticSearch we are going to need to use the Hebrew analyzer class HebMorph provides to analyze incoming Hebrew texts. That is done by providing ElasticSearch with the HebMorph packages and then telling it to use the Hebrew analyzer on text fields as needed.

1. Get HebMorph and hspell

At the moment you will have to compile HebMorph from sources yourself using Maven. In the future we might upload it to a central repository, but since we are still actively working on a lot of things there, it is a bit too early for that.

Probably the easiest way to get HebMorph is to do a git clone of the main repository. The repository is located at https://github.com/synhershko/HebMorph and already includes the latest hspell files under /hspell-data-files. If you are new to git, GitHub offers great tutorials for getting started with it, and it also lets you download the entire source tree as a zip or a tarball.

Once you have the sources, run mvn package or mvn install to create 2 jars - hebmorph-core and hebmorph-lucene. Those 2 packages are required before moving on to the next step.
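
For reference, the whole sequence should look roughly like this (the two jars end up under the respective modules' target folders; exact paths and file names depend on the version you build):

git clone https://github.com/synhershko/HebMorph.git
cd HebMorph
mvn package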

2. Create an ElasticSearch plugin

In this step we will create a new plugin, which will host the Hebrew analyzers we create in the next step. If you already have a plugin you wish to use, skip ahead.

ElasticSearch plugins are compiled Java packages you simply drop into the plugins folder of your ElasticSearch installation; they get detected automatically by the ElasticSearch instance when it initializes. If you are new to this, you might want to read up a bit on it in the official ElasticSearch documentation. Here is a great guide to start with: http://jfarrell.github.io/

The gist of it is having a Java project with an es-plugin.properties file embedded as a resource, pointing to a class that tells ElasticSearch what classes to load as plugins, and their plugin type. In the next section we will use this to add our own Analyzer implementation which makes use of HebMorph's capabilities.
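
For example, assuming your plugin class is named com.example.hebrew.HebrewAnalysisPlugin (a made-up name for illustration), es-plugin.properties contains a single line pointing at it:

plugin=com.example.hebrew.HebrewAnalysisPlugin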

3. Creating a Hebrew Analyzer

HebMorph already comes with MorphAnalyzer - an Analyzer implementation which takes care of Hebrew-aware tokenization, lemmatization and whatnot. Because it is highly configurable, I personally prefer re-implementing it in the ElasticSearch plugin so it is easier to change the configuration in code. In case you wondered, I'm not planning on supporting external configuration for this, as it is too subtle and you should really know what you are doing there.

Don't forget to add hebmorph-core and hebmorph-lucene as dependencies of your project.

My common Analyzer setup for Hebrew search looks like this:

public abstract class HebrewAnalyzer extends Analyzer { // org.apache.lucene.analysis.Analyzer (Lucene 4.x)

    protected enum AnalyzerType {
        INDEXING, QUERY, EXACT
    }

    private static final DictRadix<Integer> prefixesTree = LingInfo.buildPrefixTree(false);
    private static DictRadix<MorphData> dictRadix;
    private final StreamLemmatizer lemmatizer;
    private final LemmaFilterBase lemmaFilter;

    protected final Version matchVersion;
    protected final AnalyzerType analyzerType;
    protected final char originalTermSuffix = '$';

    static {
        try {
            // NOTE: placeholder path - point this at the folder that contains the hspell-data-files directory
            final String resourcesPath = "/path/to/";
            dictRadix = Loader.loadDictionaryFromHSpellData(new File(resourcesPath + "hspell-data-files"), true);
        } catch (IOException e) {
            // TODO log - without the dictionary the analyzers cannot work
        }
    }

    protected HebrewAnalyzer(final AnalyzerType analyzerType) throws IOException {
        this.matchVersion = Version.LUCENE_42; // the Lucene version this post targets (4.2.1)
        this.analyzerType = analyzerType;
        lemmatizer = new StreamLemmatizer(null, dictRadix, prefixesTree, null);
        lemmaFilter = new BasicLemmaFilter();
    }

    @Override
    protected TokenStreamComponents createComponents(final String fieldName, final Reader reader) {
        // on query - if marked as keyword don't keep origin, else only lemmatized (don't suffix)
        // if a word terminates with $ we output word$, else we output all lemmas, or word$ if OOV
        if (analyzerType == AnalyzerType.QUERY) {
            final StreamLemmasFilter src = new StreamLemmasFilter(reader, lemmatizer, null, lemmaFilter);
            src.setAlwaysSaveMarkedOriginal(true);
            src.setSuffixForExactMatch(originalTermSuffix);

            TokenStream tok = new SuffixKeywordFilter(src, '$');
            return new TokenStreamComponents(src, tok);
        }

        if (analyzerType == AnalyzerType.EXACT) {
            // on exact - we don't care about suffixes at all, we always output original word with suffix only
            final HebrewTokenizer src = new HebrewTokenizer(reader, prefixesTree, null);
            TokenStream tok = new NiqqudFilter(src);
            tok = new LowerCaseFilter(matchVersion, tok);
            tok = new AlwaysAddSuffixFilter(tok, '$', false);
            return new TokenStreamComponents(src, tok);
        }

        // on indexing we should always keep both the stem and marked original word
        // will ignore $ && will always output all lemmas + origin word$
        // this is effectively the AnalyzerType.INDEXING case
        final StreamLemmasFilter src = new StreamLemmasFilter(reader, lemmatizer, null, lemmaFilter);
        src.setAlwaysSaveMarkedOriginal(true);


        TokenStream tok = new SuffixKeywordFilter(src, '$');
        return new TokenStreamComponents(src, tok);
    }


    public static class HebrewIndexingAnalyzer extends HebrewAnalyzer {
        public HebrewIndexingAnalyzer() throws IOException {
            super(AnalyzerType.INDEXING);
        }
    }

    public static class HebrewQueryAnalyzer extends HebrewAnalyzer {
        public HebrewQueryAnalyzer() throws IOException {
            super(AnalyzerType.QUERY);
        }
    }

    public static class HebrewExactAnalyzer extends HebrewAnalyzer {
        public HebrewExactAnalyzer() throws IOException {
            super(AnalyzerType.EXACT);
        }
    }
}

You may notice I created 3 separate analyzers - one for indexing, one for querying and one for exact querying. I'll talk more about this in future posts, but the idea is to provide flexibility when querying while still allowing for correct indexing.

Configuring the analyzers to be picked up from ElasticSearch is rather easy now. First, you need to wrap each analyzer in a "provider", like so:

public class HebrewQueryAnalyzerProvider extends AbstractIndexAnalyzerProvider<HebrewAnalyzer.HebrewQueryAnalyzer> {
    private final HebrewAnalyzer.HebrewQueryAnalyzer hebrewAnalyzer;

    @Inject
    public HebrewQueryAnalyzerProvider(Index index, @IndexSettings Settings indexSettings, Environment env, @Assisted String name, @Assisted Settings settings) throws IOException {
        super(index, indexSettings, name, settings);
        hebrewAnalyzer = new HebrewAnalyzer.HebrewQueryAnalyzer();
    }

    @Override
    public HebrewAnalyzer.HebrewQueryAnalyzer get() {
        return hebrewAnalyzer;
    }
}

After you've created such providers for all types of analyzers, create an AnalysisBinderProcessor like this (or update your existing one with definitions for the Hebrew analyzers):

public class MyAnalysisBinderProcessor extends AnalysisModule.AnalysisBinderProcessor {

    private final static HashMap<String, Class<? extends AnalyzerProvider>> languageAnalyzers = new HashMap<>();
    static {
        languageAnalyzers.put("hebrew", HebrewIndexingAnalyzerProvider.class);
        languageAnalyzers.put("hebrew_query", HebrewQueryAnalyzerProvider.class);
        languageAnalyzers.put("hebrew_exact", HebrewExactAnalyzerProvider.class);
    }

    public static boolean analyzerExists(final String analyzerName) {
        return languageAnalyzers.containsKey(analyzerName);
    }

    @Override
    public void processAnalyzers(final AnalyzersBindings analyzersBindings) {
        for (Map.Entry<String, Class<? extends AnalyzerProvider>> entry : languageAnalyzers.entrySet()) {
            analyzersBindings.processAnalyzer(entry.getKey(), entry.getValue());
        }
    }
}

Don't forget to update your Plugin class to register the AnalysisBinderProcessor - it should look something like this (plus any other stuff you want to add there):

public class MyPlugin extends AbstractPlugin {
    @Override
    public String name() {
        return "my-plugin";
    }

    @Override
    public String description() {
        return "Implements custom actions required by me";
    }

    @Override
    public void processModule(Module module) {
        if (module instanceof AnalysisModule) {
            ((AnalysisModule)module).addProcessor(new MyAnalysisBinderProcessor());
        }
    }

}

4. Using the Hebrew analyzers

Compile the ElasticSearch plugin and drop it along with its dependencies in a folder under the /plugins folder of ElasticSearch. You now have 3 new types of analyzers at your disposal: "hebrew", "hebrew_query" and "hebrew_exact".
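
As a rough sketch, assuming you call the folder analysis-hebrew (the name is up to you, as are the exact jar file names), the layout on disk would look something like:

plugins/
    analysis-hebrew/
        my-plugin.jar
        hebmorph-core-<version>.jar
        hebmorph-lucene-<version>.jar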

For indexing, you want to use the "hebrew" analyzer. In your mapping, you can tell a certain field or an entire set of fields to use that specific analyzer by setting the analyzer for each field. You can also leave the analyzer configuration blank and specify the analyzer to use for fields with no explicit analyzer via the _analyzer field in the index request. See the ElasticSearch documentation on mappings and on the _analyzer field for more about both options.
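
For example, a minimal mapping for a hypothetical article type with a content field (both names made up for illustration) could index with "hebrew" and search with "hebrew_query" by default:

{
  "article": {
    "properties": {
      "content": {
        "type": "string",
        "index_analyzer": "hebrew",
        "search_analyzer": "hebrew_query"
      }
    }
  }
}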

The "hebrew" analyzer will expand each term to all recognized lemmas; in case the word wasn't recognized it will try to tolerate spelling errors or missing Yud/Vav - most of the time it will be successful (with some rate of false positives, which the lemma-filters should remove to some degree). Some words will still remain unrecognized and thus will be indexed as-is.

When querying using a QueryString query you can specify which analyzer to use - either the "hebrew_query" or the "hebrew_exact" analyzer. The former performs lemma expansion similar to the indexing analyzer, while the latter avoids it and allows you to perform exact matches (useful when searching for names or exact phrases).
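
For instance, a query that lemmatizes the user's input before matching could look like the following (the field name is illustrative; swap in "hebrew_exact" when you want exact matching):

{
  "query": {
    "query_string": {
      "query": "ספרים",
      "default_field": "content",
      "analyzer": "hebrew_query"
    }
  }
}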

For the sake of focus I pretty much ignored a lot of the complexity involved in fine-tuning searches for Hebrew, as well as many very cool things HebMorph lets you do with Hebrew search. I will revisit those in a later blog post.

5. Administration

The hspell dictionary files are looked up at a physical location on disk - you will need to provide the path where they are saved. Since dictionaries get updated, it is sometimes easier to update them this way in a distributed environment like the one I'm working with. It may be desirable to have them compiled into the same jar file as the code itself - I'll be happy to accept a pull request that does that.

The code above works with ElasticSearch 0.90 GA and Lucene 4.2.1. I also had it running on earlier versions of both technologies, but had to make a few minor changes. I assume the samples will break on future versions, and I probably won't have much time to go back and keep them up to date, but bear in mind that most of the time the changes are minor and easy to understand and make yourself.

Both HebMorph and the hspell dictionary are released under the AGPL3. For any questions on licensing, feel free to contact me.


Comments

  • Boris Modylevsky

    Hello, Itamar! Thank you for the great post and for your session at the .Net users group. Where can I find more details about the Hebrew tokenization? For example, how is ה treated at the beginning of words?
