Key-Value In-Memory Data Grid

Ignite provides extensive and rich key-value APIs and can act as an in-memory data grid. You can think of Ignite as a distributed partitioned hash map, with every cluster node owning a portion of the overall data set. Unlike other in-memory data grids (IMDGs), Ignite can store data both in memory and on disk, and is therefore able to store more data than can fit in physical memory.

The Ignite data grid is one of the fastest implementations of ACID transactions and atomic data updates in distributed clusters available today. We know this because we benchmark it ourselves continuously.

3rd Party Databases

The Ignite in-memory data grid can improve the performance and scalability of existing 3rd party databases, such as RDBMS, NoSQL, or Hadoop-based storage, by sliding in as a distributed cache between the application and database layers. This approach does not require a rip-and-replace of the existing data: Ignite automatically writes updates through to, and reads missing entries from, the underlying database. Ignite also participates in the underlying database transactions, providing transparent transactional behavior to users.

However, this approach has its limitations. For example, SQL and scan queries only return results for the data stored in memory, not in the external database, since Ignite cannot index external data. If you require that the data on disk be indexed and accessible via SQL queries, we recommend that you look at Ignite native persistence.
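
For illustration, here is a minimal sketch of wiring an external database behind an Ignite cache through a cache store. The PersonStore class, its persistence helper methods, and the cache name are hypothetical; Ignite also ships ready-made stores (for example, for JDBC) that plug in the same way.

    CacheConfiguration<Long, Person> cfg = new CacheConfiguration<>("personCache");

    // Delegate cache misses and updates to the external database.
    cfg.setReadThrough(true);
    cfg.setWriteThrough(true);
    cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(PersonStore.class));

    IgniteCache<Long, Person> cache = ignite.getOrCreateCache(cfg);

    // Hypothetical store bridging the cache to a relational table.
    public class PersonStore extends CacheStoreAdapter<Long, Person> {
        @Override public Person load(Long key) {
            return selectPersonById(key);           // e.g. SELECT ... WHERE id = ?
        }

        @Override public void write(Cache.Entry<? extends Long, ? extends Person> e) {
            upsertPerson(e.getKey(), e.getValue()); // e.g. INSERT or UPDATE
        }

        @Override public void delete(Object key) {
            deletePersonById((Long) key);           // e.g. DELETE ... WHERE id = ?
        }
    }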

JCache APIs

Ignite key-value APIs comply with the JCache (JSR 107) specification and support the following (see the sketch after this list):

  • In-Memory Key Value Store
  • Basic Cache Operations
  • ConcurrentMap APIs
  • Collocated Processing (EntryProcessor)
  • Events and Metrics
  • Pluggable Persistence
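
For illustration, here is a minimal sketch that uses only the standard javax.cache (JSR 107) API, which Ignite implements; the cache name, types, and values are illustrative.

    CachingProvider provider = Caching.getCachingProvider();
    CacheManager cacheManager = provider.getCacheManager();

    Cache<Integer, String> cache = cacheManager.createCache(
        "jcacheExample", new MutableConfiguration<Integer, String>());

    // Basic cache operations.
    cache.put(1, "one");
    System.out.println(cache.get(1));

    // Collocated processing: the EntryProcessor runs where the entry is stored.
    cache.invoke(1, (entry, args) -> {
        entry.setValue(entry.getValue() + "-processed");

        return null;
    });
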
Extended Key-Value APIs

In addition to the standard JCache API, Ignite supports distributed ACID transactions, scan and continuous queries, collocated processing, and more.
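
As one example of the extended APIs, the following is a minimal sketch of a scan query; the filter runs on the nodes that own the data, and the cache name and types are illustrative.

    IgniteCache<Integer, String> cache = ignite.cache("cacheName");

    // Scan query with a server-side filter that keeps only even keys.
    try (QueryCursor<Cache.Entry<Integer, String>> cursor =
             cache.query(new ScanQuery<Integer, String>((k, v) -> k % 2 == 0))) {
        for (Cache.Entry<Integer, String> entry : cursor)
            System.out.println("key=" + entry.getKey() + ", val=" + entry.getValue());
    }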

The data grid has been built from the ground up to scale linearly to hundreds of nodes, with strong semantics for data locality and affinity-based routing that reduce redundant network data movement. It can be viewed as a distributed partitioned hash map, with every cluster node owning a portion of the overall data. This way, the more cluster nodes we add, the more data we can cache.
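
For illustration, a minimal sketch of affinity routing: the closure below is executed on the node that owns key 1 in the hypothetical "cacheName" cache, so the value is read locally instead of being fetched over the network.

    ignite.compute().affinityRun("cacheName", 1, () -> {
        IgniteCache<Integer, String> cache = Ignition.localIgnite().cache("cacheName");

        // Runs on the data node; localPeek reads the locally owned copy.
        System.out.println("Collocated value: " + cache.localPeek(1));
    });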

Code Examples
    // Basic cache operations: put, get, remove, and atomic updates.
    Ignite ignite = Ignition.ignite();

    // Get an instance of named cache.
    final IgniteCache<Integer, String> cache = ignite.cache("cacheName");

    // Store keys in cache.
    for (int i = 0; i < 10; i++)
        cache.put(i, Integer.toString(i));

    // Retrieve values from cache.
    for (int i = 0; i < 10; i++)
        System.out.println("Got [key=" + i + ", val=" + cache.get(i) + ']');

    // Remove objects from cache.
    for (int i = 0; i < 10; i++)
        cache.remove(i);

    // Atomic put-if-absent.
    cache.putIfAbsent(1, "1");

    // Atomic replace.
    cache.replace(1, "1", "2");
                        
    // ACID transaction that deposits money into an account.
    Ignite ignite = Ignition.ignite();

    // Get an instance of the account cache.
    IgniteCache<Integer, Account> cache = ignite.cache("cacheName");

    int acctId = 1;

    try (Transaction tx = ignite.transactions().txStart()) {
        // The cache returns a copy of the stored object, so we can freely update it.
        Account acct = cache.get(acctId);

        assert acct != null;

        // Deposit $20 into account.
        acct.setBalance(acct.getBalance() + 20);

        // Store updated account in cache.
        cache.put(acctId, acct);

        tx.commit();
    }
                        
    // Explicit lock on a cache key.
    Ignite ignite = Ignition.ignite();

    // Get an instance of named cache.
    final IgniteCache<String, Integer> cache = ignite.cache("cacheName");

    // Lock cache key "Hello".
    Lock lock = cache.lock("Hello");

    lock.lock();

    try {
        cache.put("Hello", 11);
        cache.put("World", 22);
    }
    finally {
        lock.unlock();
    }
                        
    // SQL fields query over cached Person objects.
    IgniteCache<Long, Person> cache = ignite.cache("mycache");

    // Select concatenated first and last name for all persons.
    SqlFieldsQuery sql = new SqlFieldsQuery(
        "select concat(firstName, ' ', lastName) from Person");

    try (QueryCursor<List<?>> cursor = cache.query(sql)) {
        for (List<?> row : cursor)
            System.out.println("Full name: " + row.get(0));
    }
                        
    // SQL join between the Person and Organization caches.
    IgniteCache<Long, Person> personCache = ignite.cache("personCache");

    // Select with join between Person and Organization to
    // get the names of all the employees of a specific organization.
    SqlFieldsQuery sql = new SqlFieldsQuery(
        "select p.name "
            + "from Person p, \"orgCache\".Organization o where "
            + "p.orgId = o.id "
            + "and o.name = ?");

    // Execute the query and obtain the query result cursor.
    try (QueryCursor<List<?>> cursor = personCache.query(sql.setArgs("Ignite"))) {
        for (List<?> row : cursor)
            System.out.println("Person name=" + row.get(0));
    }
                        
    // SQL aggregation with joins, GROUP BY, and ORDER BY.
    IgniteCache<Long, Person> personCache = ignite.cache("personCache");

    // Select average age of people working within different departments.
    SqlFieldsQuery sql = new SqlFieldsQuery(
        "select avg(p.age) as avg_age, d.name as dpmt_name, o.name as org_name "
            + "from Person p, \"depCache\".Department d, \"orgCache\".Organization o "
            + "where p.depid = d.id and d.orgid = o.id "
            + "group by d.name, o.name "
            + "order by avg_age");

    // Execute the query and obtain the query result cursor.
    try (QueryCursor<List<?>> cursor = personCache.query(sql)) {
        for (List<?> row : cursor)
            System.out.println("Average age by department and organization: " + row);
    }
                        

More on Data Grid

Key-Value Store

The Ignite data grid is a key-value store which can store data both in memory and on disk. It can be viewed as a distributed partitioned hash map, with every cluster node owning a portion of the overall data. This way, the more cluster nodes we add, the more data we can store.

Durable Memory

Ignite Durable Memory allows storing and processing data and indexes both in memory and on disk. The in-memory data, including indexes, is always stored and managed off-heap, eliminating garbage collection overhead for cached data.
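
A minimal configuration sketch, assuming the Ignite 2.x durable memory APIs; the region size is illustrative, and enabling persistence is optional.

    IgniteConfiguration cfg = new IgniteConfiguration();

    DataStorageConfiguration storageCfg = new DataStorageConfiguration();

    // Off-heap region for data and indexes; optionally spill to disk.
    storageCfg.getDefaultDataRegionConfiguration()
        .setMaxSize(512L * 1024 * 1024)
        .setPersistenceEnabled(true);

    cfg.setDataStorageConfiguration(storageCfg);

    Ignite ignite = Ignition.start(cfg);

    // With persistence enabled, the cluster must be activated before use.
    ignite.cluster().active(true);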

JCache (JSR 107)

Ignite is a 100% compliant implementation of the JCache (JSR 107) specification. JCache provides a simple to use, yet powerful API for data caching.

Memory-Centric Storage

Apache Ignite is based on a distributed, memory-centric architecture that combines the performance and scale of in-memory computing with disk durability and strong consistency in one system.

Collocated Processing

Ignite allows executing native Java, C++, and .NET/C# code directly on the server side, close to the data, in a collocated fashion.

Client-side Near Caches

A near cache is a local client-side cache that stores the most recently and most frequently accessed data.
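
For illustration, a minimal sketch of attaching a near cache on a client node to an existing distributed cache; the cache name and types are illustrative.

    // Near-cache configuration (defaults are used here; eviction can be tuned).
    NearCacheConfiguration<Integer, String> nearCfg = new NearCacheConfiguration<>();

    // Front the distributed "cacheName" cache with a local near cache.
    IgniteCache<Integer, String> cache = ignite.getOrCreateNearCache("cacheName", nearCfg);

    // Reads hit the near cache first and fall through to the cluster on a miss.
    System.out.println(cache.get(1));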

ACID Transactions

Ignite provides fully ACID-compliant distributed transactions that guarantee consistency.

Deadlock-Free Transactions

Ignite supports deadlock-free optimistic transactions, which do not acquire any locks and free users from worrying about lock ordering. Such transactions also provide much better performance.
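
For illustration, a minimal sketch of a deadlock-free transaction using optimistic concurrency with serializable isolation; the cache name and key are illustrative.

    IgniteCache<Integer, Integer> cache = ignite.cache("cacheName");

    // No locks are acquired until commit time.
    try (Transaction tx = ignite.transactions().txStart(
            TransactionConcurrency.OPTIMISTIC, TransactionIsolation.SERIALIZABLE)) {
        Integer val = cache.get(1);

        cache.put(1, val == null ? 1 : val + 1);

        // A conflicting concurrent update causes commit() to throw
        // TransactionOptimisticException, in which case the transaction is retried.
        tx.commit();
    }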

Transactional Entry Processor

The Ignite transactional entry processor allows executing collocated user logic on the server side within a transaction.
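
For illustration, a minimal sketch of invoking an entry processor inside a transaction; the cache is assumed to be configured with TRANSACTIONAL atomicity, and the names are illustrative.

    IgniteCache<Integer, Long> counters = ignite.cache("cacheName");

    try (Transaction tx = ignite.transactions().txStart()) {
        // The processor executes on the node that owns key 1, within the transaction.
        counters.invoke(1, (CacheEntryProcessor<Integer, Long, Object>) (entry, args) -> {
            Long current = entry.getValue();

            entry.setValue(current == null ? 1L : current + 1);

            return null;
        });

        tx.commit();
    }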

Cross-Partition Transactions

In Ignite, transactions can be performed on all partitions of a cache across the whole cluster.

Locks

Ignite allows developers to define explicit locks enforcing mutual exclusion on cached objects.

Continuous Queries

Continuous queries are useful for cases when you want to execute a query and then continue to be notified about data changes that fall into your query filter.
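
For illustration, a minimal sketch of a continuous query; the local listener is called on this node whenever an entry in the hypothetical "cacheName" cache is created or updated (a remote filter can also be set to narrow the notifications).

    IgniteCache<Integer, String> cache = ignite.cache("cacheName");

    ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

    // Invoked on this node for every create/update event.
    qry.setLocalListener(events -> {
        for (CacheEntryEvent<? extends Integer, ? extends String> e : events)
            System.out.println("Updated: key=" + e.getKey() + ", val=" + e.getValue());
    });

    // The query stays active for as long as the cursor is open.
    try (QueryCursor<Cache.Entry<Integer, String>> cursor = cache.query(qry)) {
        cache.put(1, "one");
        cache.put(2, "two");

        // ... keep the cursor open while notifications are needed.
    }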

Write-Through

Write-Through mode ensures that updates to the cache are synchronously propagated to the underlying database.

Read-Through

Read-Through mode loads an entry from the underlying database when it is not found in the cache.

Write-Behind Caching

Ignite provides an option to asynchronously perform updates to the database via Write-Behind Caching.
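
For illustration, a minimal sketch that combines the three modes above in one cache configuration; PersonStore is the same hypothetical store sketched in the 3rd Party Databases section, and the thresholds are illustrative.

    CacheConfiguration<Long, Person> cfg = new CacheConfiguration<>("personCache");

    // Read-through and write-through against the external database.
    cfg.setReadThrough(true);
    cfg.setWriteThrough(true);
    cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(PersonStore.class));

    // Write-behind: buffer updates and flush them to the database asynchronously.
    cfg.setWriteBehindEnabled(true);
    cfg.setWriteBehindFlushFrequency(5_000); // Flush at least every 5 seconds...
    cfg.setWriteBehindFlushSize(1024);       // ...or once 1024 updates are buffered.

    IgniteCache<Long, Person> cache = ignite.getOrCreateCache(cfg);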

Hibernate L2 Caching

The Ignite data grid can be used as a Hibernate second-level cache (L2 cache), which can significantly speed up the persistence layer of your application.

Spring Caching

Ignite provides a Spring-annotation-based way to enable caching for Java methods, so that the result of a method execution is stored in an Ignite cache. If the same method is later called with the same set of parameters, the result is retrieved from the cache instead of executing the method again.
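
For illustration, a minimal sketch of such a cached method; the service and cache names are illustrative, and the Spring context is assumed to be configured with Ignite's SpringCacheManager (from the ignite-spring module) and @EnableCaching.

    @Service
    public class GreetingService {
        // The result is stored in the Ignite cache named "greetings"; subsequent
        // calls with the same argument return the cached value without re-executing.
        @Cacheable("greetings")
        public String greet(String name) {
            System.out.println("Computing greeting for " + name);

            return "Hello, " + name + "!";
        }
    }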

Spring Data

Apache Ignite implements the Spring Data CrudRepository interface, which not only supports basic CRUD operations but also provides access to Apache Ignite SQL capabilities via the unified Spring Data API.
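
For illustration, a minimal sketch of an Ignite-backed Spring Data repository; the repository name, cache name, and query are illustrative, and the IgniteRepository interface and annotations come from the ignite-spring-data module.

    @RepositoryConfig(cacheName = "personCache")
    public interface PersonRepository extends IgniteRepository<Person, Long> {
        // Translated to an Ignite SQL query over the Person type.
        @Query("select * from Person where name = ?")
        List<Person> findByName(String name);
    }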

OSGI Support