In-Memory Data Grid
Ignite Data Grid is a distributed key-value store that enables storing data both in memory and on disk within distributed clusters and provides extensive key-value APIs. Ignite Data Grid can be viewed as a distributed partitioned hash map with every cluster node owning a portion of the overall data. This way the more cluster nodes we add, the more data we can cache.
Ignite Data Grid has been built from the ground up to linearly scale to hundreds of nodes, with strong semantics for data locality and affinity-based data routing that keep redundant data movement over the network to a minimum.
Ignite Data Grid is lightning fast and is one of the fastest implementations of transactional and atomic key-value operations in distributed clusters today. We know because we constantly benchmark it ourselves.
Data, along with indexes, can be persisted in Ignite Native Persistence or in a 3rd party database such as an RDBMS, NoSQL store, or Hadoop. If a 3rd party database is used, Ignite can significantly accelerate performance by storing a full copy of the data in memory. Learn more about when to use one type of persistence over the other.
```java
Ignite ignite = Ignition.ignite();

// Get an instance of a named cache.
final IgniteCache<Integer, String> cache = ignite.cache("cacheName");

// Store keys in the cache.
for (int i = 0; i < 10; i++)
    cache.put(i, Integer.toString(i));

// Retrieve values from the cache.
for (int i = 0; i < 10; i++)
    System.out.println("Got [key=" + i + ", val=" + cache.get(i) + ']');

// Remove objects from the cache.
for (int i = 0; i < 10; i++)
    cache.remove(i);

// Atomic put-if-absent.
cache.putIfAbsent(1, "1");

// Atomic replace.
cache.replace(1, "1", "2");
```
```java
Ignite ignite = Ignition.ignite();

IgniteCache<Integer, Account> cache = ignite.cache("cacheName");

try (Transaction tx = ignite.transactions().txStart()) {
    Account acct = cache.get(acctId);

    assert acct != null;

    // Deposit $20 into the account.
    acct.setBalance(acct.getBalance() + 20);

    // Store the updated account in the cache.
    cache.put(acctId, acct);

    tx.commit();
}
```
```java
Ignite ignite = Ignition.ignite();

// Get an instance of a named cache.
final IgniteCache<String, Integer> cache = ignite.cache("cacheName");

// Lock cache key "Hello".
Lock lock = cache.lock("Hello");

lock.lock();

try {
    cache.put("Hello", 11);
    cache.put("World", 22);
}
finally {
    lock.unlock();
}
```
```java
IgniteCache<Long, Person> cache = ignite.cache("mycache");

// Select the concatenated first and last name for all persons.
SqlFieldsQuery sql = new SqlFieldsQuery(
    "select concat(firstName, ' ', lastName) from Person");

// Execute the query and iterate over the result cursor.
try (QueryCursor<List<?>> cursor = cache.query(sql)) {
    for (List<?> row : cursor)
        System.out.println("Full name: " + row.get(0));
}
```
```java
IgniteCache<Long, Person> personCache = ignite.cache("personCache");

// Join Person and Organization to get the names
// of all the employees of a specific organization.
SqlFieldsQuery sql = new SqlFieldsQuery(
    "select p.name " +
    "from Person p, \"orgCache\".Organization o " +
    "where p.orgId = o.id and o.name = ?");

// Execute the query and obtain the query result cursor.
try (QueryCursor<List<?>> cursor = personCache.query(sql.setArgs("Ignite"))) {
    for (List<?> row : cursor)
        System.out.println("Person name=" + row);
}
```
```java
IgniteCache<Long, Person> personCache = ignite.cache("personCache");

// Select the average age of people working within different departments.
SqlFieldsQuery sql = new SqlFieldsQuery(
    "select avg(p.age) as avg_age, d.name as dpmt_name, o.name as org_name " +
    "from Person p, \"depCache\".Department d, \"orgCache\".Organization o " +
    "where p.depid = d.id and d.orgid = o.id " +
    "group by d.name, o.name " +
    "order by avg_age");

// Execute the query and obtain the query result cursor.
try (QueryCursor<List<?>> cursor = personCache.query(sql)) {
    for (List<?> row : cursor)
        System.out.println("Average age by department and organization: " + row);
}
```
Also see data grid examples available on GitHub.
Data Grid Features
**Key-Value Store.** Ignite data grid is a distributed key-value store. Unlike other key-value stores, Ignite determines data locality using a pluggable hashing algorithm: every client can determine which node a key belongs to by plugging the key into a hashing function, without any special mapping servers or name nodes.
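The idea that any client can compute a key's home node on its own can be sketched with rendezvous (highest-random-weight) hashing. This is a minimal, self-contained illustration of the principle, not Ignite's actual affinity function (Ignite ships its own `RendezvousAffinityFunction`):

```java
import java.util.List;
import java.util.Objects;

// Minimal rendezvous (highest-random-weight) hashing sketch:
// every client computes the same key -> node mapping locally,
// with no mapping server involved.
public class Affinity {
    // Pick the node with the highest hash(key, node) score.
    public static String nodeFor(Object key, List<String> nodes) {
        String best = null;
        int bestScore = Integer.MIN_VALUE;

        for (String node : nodes) {
            int score = Objects.hash(key, node);

            if (score > bestScore) {
                bestScore = score;
                best = node;
            }
        }
        return best;
    }
}
```

A useful property of this scheme: when a node leaves, only the keys that mapped to that node get a new owner, which keeps rebalancing traffic low.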
**JCache (JSR 107).** Ignite is a 100% compliant implementation of the JCache (JSR 107) specification. JCache provides a simple yet powerful API for data caching, including atomic ConcurrentMap-style operations and EntryProcessor-based collocated processing.
**Partitioning & Replication.** Depending on the configuration, Ignite can either partition or replicate data in memory. Ignite also allows configuring multiple backup copies to guarantee data resiliency in case of node failures.
**Collocated Processing.** Ignite allows executing any native Java, C++, and .NET/C# code directly on the server side, close to the data, in a collocated fashion.
**Self-Healing Cluster.** An Ignite cluster can self-heal: clients automatically reconnect in case of failures, slow clients are automatically kicked out, and data from failed nodes is automatically rebalanced to the remaining nodes in the grid.
**Client-side Near Caches.** A near cache is a local client-side cache that stores the most recently and most frequently accessed data.
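Conceptually, a near cache behaves like a small LRU map sitting in front of the distributed cache. The sketch below illustrates only the "keep the hottest entries locally" idea; in Ignite itself a near cache is enabled through `NearCacheConfiguration`, not hand-rolled:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy LRU map illustrating the near-cache idea: keep only the
// most recently accessed entries on the client side.
public class NearLru<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public NearLru(int maxEntries) {
        super(16, 0.75f, true); // access-order iteration gives LRU behavior
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict least-recently-used entry
    }
}
```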
**Durable Memory.** Apache Ignite's durable memory architecture stores data and indexes both in memory and, optionally, on disk using the same page-based format, treating disk as a transparent persistence tier.

**Ignite Native Persistence.** Ignite Native Persistence is a distributed, transactionally consistent disk store. When it is enabled, the full data set is kept on disk while as much data as fits is kept in memory, and the cluster remains fully operational after restarts.
**Off-Heap Indexes.** Ignite stores query indexes in off-heap memory. For every unique index declared in an SQL schema, Apache Ignite instantiates and manages a dedicated B+ tree instance.
**Binary Protocol.** Apache Ignite stores data in caches as binary objects. The binary format makes it possible to read individual fields of an object without deserializing the whole object, and to work with objects on nodes where the model classes are not deployed.
**ACID Transactions.** Ignite provides fully ACID-compliant distributed transactions that guarantee consistency. Ignite transactions use the two-phase commit (2PC) protocol, with many one-phase-commit optimizations applied whenever possible.
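The two-phase commit flow can be sketched with a toy coordinator: every participant must vote yes in the prepare phase before anyone commits, otherwise everyone rolls back. This illustrates only the shape of the protocol, not Ignite's implementation:

```java
import java.util.List;

// Toy two-phase commit: commit only if every participant
// successfully prepares; otherwise roll everyone back.
public class TwoPhaseCommit {
    public interface Participant {
        boolean prepare(); // phase 1: vote yes/no
        void commit();     // phase 2a: make changes durable
        void rollback();   // phase 2b: discard changes
    }

    public static boolean run(List<Participant> participants) {
        // Phase 1: collect votes.
        for (Participant p : participants) {
            if (!p.prepare()) {
                for (Participant q : participants)
                    q.rollback();
                return false;
            }
        }
        // Phase 2: all voted yes, commit everywhere.
        for (Participant p : participants)
            p.commit();
        return true;
    }
}
```

The one-phase-commit optimization mentioned above applies when all keys of a transaction land on a single node, so the vote-collection round trip can be skipped entirely.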
**Deadlock-Free Transactions.** Ignite supports deadlock-free, optimistic transactions that do not acquire any locks and free users from worrying about lock ordering. Such transactions also provide much better performance.
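The deadlock-free property comes from validating at commit time instead of locking up front. Below is a minimal sketch of that optimistic pattern using a version counter; it is a conceptual illustration only, whereas in Ignite you would request this mode with `ignite.transactions().txStart(OPTIMISTIC, SERIALIZABLE)`:

```java
import java.util.concurrent.atomic.AtomicLong;

// Optimistic concurrency sketch: read a version, compute, and
// commit only if the version is unchanged. No locks are held while
// computing, so there is no lock ordering and no deadlock.
public class OptimisticCell {
    private volatile long value;
    private final AtomicLong version = new AtomicLong();

    public long read() { return value; }

    public long version() { return version.get(); }

    // Returns false if another writer got in first; the caller retries.
    public synchronized boolean commit(long expectedVersion, long newValue) {
        if (version.get() != expectedVersion)
            return false; // validation failed -> transaction would retry

        value = newValue;
        version.incrementAndGet();
        return true;
    }
}
```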
**Transactional Entry Processor.** The Ignite transactional entry processor allows executing collocated user logic on the server side within a transaction.
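The entry-processor idea, shipping the update logic to the entry instead of fetching the value, mutating it, and writing it back, is analogous to the JDK's atomic `compute`. Here is a local sketch of the pattern; Ignite's equivalent is `IgniteCache.invoke(key, entryProcessor)`, executed on the node that owns the key:

```java
import java.util.concurrent.ConcurrentHashMap;

// Entry-processor pattern on a local map: the update closure runs
// atomically against the entry, so there is no read-modify-write race.
public class EntryProcessorSketch {
    public static long increment(ConcurrentHashMap<String, Long> map, String key) {
        return map.compute(key, (k, v) -> (v == null ? 0L : v) + 1);
    }
}
```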
**Cross-Partition Transactions.** In Ignite, transactions can be performed on all partitions of a cache across the whole cluster.
**Locks.** Ignite allows developers to define explicit locks enforcing mutual exclusion on cached objects.
**Continuous Queries.** Continuous queries are useful when you want to execute a query and then continue to be notified about data changes that fall into your query filter.
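A continuous query is essentially "run an initial query, then keep pushing me the matching updates." The sketch below captures that shape with a local filter-plus-listener registry; in Ignite you would use `ContinuousQuery` with `setInitialQuery`, `setRemoteFilterFactory`, and `setLocalListener` instead:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Continuous-query sketch: subscribers register a filter and a
// listener; every update that passes the filter is pushed to them.
public class ContinuousSketch<V> {
    private final List<Predicate<V>> filters = new ArrayList<>();
    private final List<Consumer<V>> listeners = new ArrayList<>();

    public void subscribe(Predicate<V> filter, Consumer<V> listener) {
        filters.add(filter);
        listeners.add(listener);
    }

    // Called on every cache update.
    public void onUpdate(V value) {
        for (int i = 0; i < filters.size(); i++)
            if (filters.get(i).test(value))
                listeners.get(i).accept(value);
    }
}
```

In the distributed version the filter runs remotely on the nodes holding the data, so only matching updates cross the network to the subscriber.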
**Database Integration.** Ignite can automatically integrate with external databases: RDBMS, NoSQL, and HDFS.
**Write-Through.** Write-through mode ensures that updates to the cache are synchronously written through to the underlying database.
**Read-Through.** Read-through mode means that on a cache miss the value is automatically loaded from the underlying database.
**Write-Behind Caching.** Ignite provides an option to asynchronously propagate updates to the database via write-behind caching.
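Write-behind trades write latency for eventual database consistency by buffering updates and flushing them in batches, coalescing repeated writes to the same key along the way. A single-threaded sketch of the buffering idea follows; Ignite enables the real mechanism via `CacheConfiguration.setWriteBehindEnabled(true)` on a cache with a configured `CacheStore`:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Consumer;

// Write-behind sketch: updates go into a buffer and are flushed to
// the "database" in batches; later writes to the same key coalesce.
public class WriteBehindBuffer<K, V> {
    private final Map<K, V> buffer = new LinkedHashMap<>();
    private final int flushSize;
    private final Consumer<Map<K, V>> dbWriter;

    public WriteBehindBuffer(int flushSize, Consumer<Map<K, V>> dbWriter) {
        this.flushSize = flushSize;
        this.dbWriter = dbWriter;
    }

    public void put(K key, V value) {
        buffer.put(key, value); // coalesces repeated writes to the same key

        if (buffer.size() >= flushSize)
            flush();
    }

    public void flush() {
        if (!buffer.isEmpty()) {
            dbWriter.accept(new LinkedHashMap<>(buffer));
            buffer.clear();
        }
    }
}
```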
**Automatic Persistence.** Ignite can automatically connect to an underlying database and generate the XML OR-mapping configuration and Java domain-model POJOs.
**Web Session Clustering.** Ignite data grid is capable of caching web sessions of all Java Servlet containers that follow the Java Servlet 3.0 specification, including Apache Tomcat, Eclipse Jetty, Oracle WebLogic, and others. Web session caching is useful when running a cluster of application servers, improving the performance and scalability of the servlet container.
**Hibernate L2 Caching.** Ignite data grid can be used as a Hibernate second-level (L2) cache, which can significantly speed up the persistence layer of an application.
**Spring Caching.** Ignite provides a Spring-annotation-based way to enable caching for Java methods, so that the result of a method execution is stored in an Ignite cache. If the same method is later called with the same set of parameters, the result is retrieved from the cache instead of executing the method again.
**Spring Data.** Apache Ignite implements the Spring Data CrudRepository interface, allowing Ignite caches to be accessed through standard Spring Data repositories.
**XA/JTA.** Ignite can be configured with a Java Transaction API (JTA) transaction manager lookup class, allowing Ignite caches to participate in JTA transactions.
**OSGi Support.** Ignite can be deployed inside OSGi containers such as Apache Karaf.