Spring Data II
M.C. Kang
   Redis Overview
Redis is an extremely high-performance, lightweight data store.
It provides key/value access to persistent byte arrays, lists, sets, and hash data structures.
It supports atomic counters and also offers efficient topic-based pub/sub messaging.
Redis is simple to install and run and is, above all, extremely fast at data access.
What it lacks in complex querying functionality (like that found in Riak or MongoDB), it makes up for in speed and efficiency.
Redis servers can also be clustered together for very flexible deployment.
It’s easy to interact with Redis from the command line using the redis-cli binary that comes with the installation.
   ConnectionFactory
@Configuration
public class ApplicationConfig {
      private static final StringRedisSerializer STRING_SERIALIZER = new StringRedisSerializer();

     @Bean
     public JedisConnectionFactory connectionFactory() {
           JedisConnectionFactory connectionFactory = new JedisConnectionFactory();
           connectionFactory.setHostName("localhost");
           connectionFactory.setPort(6379);
           return connectionFactory;
     }

     @Bean
     public RedisTemplate<String, Long> longTemplate() {
           RedisTemplate<String, Long> tmpl = new RedisTemplate<String, Long>();
            tmpl.setConnectionFactory(connectionFactory());
           tmpl.setKeySerializer(STRING_SERIALIZER);
           tmpl.setValueSerializer(LongSerializer.INSTANCE);
           return tmpl;
     }
}
(In RedisTemplate<String, Long>, the first generic parameter is the key type and the second is the value type.)
   RedisTemplate
Since the feature set of Redis is too large to encapsulate effectively in a single class, the various operations on data are split into separate Operations classes, as follows:

• ValueOperations
• ListOperations
• SetOperations
• ZSetOperations
• HashOperations
• BoundValueOperations
• BoundListOperations
• BoundSetOperations
• BoundZSetOperations
• BoundHashOperations
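Each of these helpers is obtained from a configured RedisTemplate through a matching accessor; a brief sketch, assuming a template wired like the one on the ConnectionFactory slide (the `template` variable here is illustrative):

```java
ValueOperations<String, Long> valueOps = template.opsForValue();
ListOperations<String, Long>  listOps  = template.opsForList();
SetOperations<String, Long>   setOps   = template.opsForSet();
HashOperations<String, Object, Object> hashOps = template.opsForHash();
// The Bound* variants fix the key once, so later calls omit it:
BoundValueOperations<String, Long> boundOps = template.boundValueOps("some-key");
```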
   Object Conversion
Because Redis deals directly with byte arrays and doesn’t natively perform Object to byte[] translation, the
Spring Data Redis project provides some helper classes to make it easier to read and write
data from Java code.
By default, all keys and values are stored as serialized Java objects.

public enum LongSerializer implements RedisSerializer<Long> {
      INSTANCE;

      @Override
      public byte[] serialize(Long aLong) throws SerializationException {
            if (null != aLong) {
                  return aLong.toString().getBytes();
            } else {
                  return new byte[0];
            }
      }

      @Override
      public Long deserialize(byte[] bytes) throws SerializationException {
            if (bytes != null && bytes.length > 0) {
                  return Long.parseLong(new String(bytes));
            } else {
                  return null;
            }
      }
}
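The serializer's round trip can be checked without a Redis server; this standalone sketch mirrors its logic (the class name is hypothetical, and the charset is made explicit here):

```java
import java.nio.charset.StandardCharsets;

public class LongRoundTrip {
    // serialize: Long -> decimal string -> bytes (empty array for null)
    static byte[] serialize(Long value) {
        return value == null ? new byte[0]
                             : value.toString().getBytes(StandardCharsets.UTF_8);
    }

    // deserialize: bytes -> Long (null for an empty array)
    static Long deserialize(byte[] bytes) {
        return bytes == null || bytes.length == 0 ? null
                : Long.parseLong(new String(bytes, StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        System.out.println(deserialize(serialize(1234L))); // prints 1234
    }
}
```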
   Automatic type conversion when setting and getting values
public class ProductCountTracker {
     @Autowired
     RedisTemplate<String, Long> redis;


     public void updateTotalProductCount(Product p) {
           // Use a namespaced Redis key
           String productCountKey = "product-counts:" + p.getId();
           // Get the helper for getting and setting values
           ValueOperations<String, Long> values = redis.opsForValue();
           // Initialize the count if not present
           values.setIfAbsent(productCountKey, 0L);
           // Increment the value by 1
           Long totalOfProductInAllCarts = values.increment(productCountKey, 1);
     }
}
   Using the HashOperations interface
private static final RedisSerializer<String> STRING_SERIALIZER = new StringRedisSerializer();
public void updateTotalProductCount(Product p) {
     RedisTemplate<String, String> tmpl = new RedisTemplate<String, String>();
     tmpl.setConnectionFactory(connectionFactory);
     // Use the standard String serializer for all keys and values
     tmpl.setKeySerializer(STRING_SERIALIZER);
     tmpl.setHashKeySerializer(STRING_SERIALIZER);
     tmpl.setHashValueSerializer(STRING_SERIALIZER);
     // Initialize the template manually, since it is not a container-managed bean
     tmpl.afterPropertiesSet();
     HashOperations<String, String, String> hashOps = tmpl.opsForHash();
     // Access the attributes for the Product
     String productAttrsKey = "products:attrs:" + p.getId();
     Map<String, String> attrs = new HashMap<String, String>();
     // Fill attributes
     attrs.put("name", "iPad");
     attrs.put("deviceType", "tablet");
     attrs.put("color", "black");
     attrs.put("price", "499.00");
     hashOps.putAll(productAttrsKey, attrs);
}
   Using Atomic Counters
public class CountTracker {
     @Autowired
     RedisConnectionFactory connectionFactory;
     public void updateProductCount(Product p) {
           // Use a namespaced Redis key
           String productCountKey = "product-counts:" + p.getId();
           // Create a distributed counter.
           // Initialize it to zero if it doesn't yet exist
           RedisAtomicLong productCount =
           new RedisAtomicLong(productCountKey, connectionFactory, 0);
           // Increment the count
           Long newVal = productCount.incrementAndGet();
     }
}
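RedisAtomicLong deliberately mirrors the contract of java.util.concurrent.atomic.AtomicLong, except that the value lives under a Redis key instead of in local memory. The call pattern is the same as the purely local version below:

```java
import java.util.concurrent.atomic.AtomicLong;

public class LocalCounter {
    public static void main(String[] args) {
        // Local analogue of RedisAtomicLong(productCountKey, connectionFactory, 0)
        AtomicLong productCount = new AtomicLong(0);
        System.out.println(productCount.incrementAndGet()); // 1
        System.out.println(productCount.addAndGet(5));      // 6
    }
}
```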
   Pub/Sub Functionality
An important benefit of using Redis is its simple and fast publish/subscribe functionality.
Although it doesn’t have the advanced features of a full-blown message broker, Redis’ pub/sub
capability can be used to create a lightweight and flexible event bus.
Spring Data Redis exposes a couple of helper classes that make working with this
functionality extremely easy.

Following the pattern of the JMS MessageListenerAdapter, Spring Data Redis provides a MessageListenerAdapter abstraction that works in essentially the same way.

@Bean
public MessageListener dumpToConsoleListener() {
       return new MessageListener() {
              @Override
              public void onMessage(Message message, byte[] pattern) {
                            System.out.println("FROM MESSAGE: " + new String(message.getBody()));
              }
       };
}

@Bean
MessageListenerAdapter beanMessageListener() {
       MessageListenerAdapter listener = new MessageListenerAdapter( new BeanMessageListener());
       listener.setSerializer( new BeanMessageSerializer() );
       return listener;
}

@Bean
RedisMessageListenerContainer container() {
       RedisMessageListenerContainer container = new RedisMessageListenerContainer();
       container.setConnectionFactory(redisConnectionFactory());
       // Assign our BeanMessageListener to a specific channel
       container.addMessageListener(beanMessageListener(),new ChannelTopic("spring-data-book:pubsub-test:dump"));
       return container;
}
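The slides show only the consuming side. Publishing on the channel goes through RedisTemplate's convertAndSend method; a minimal sketch, assuming a String-typed template is available for injection (the method name is illustrative):

```java
@Autowired
RedisTemplate<String, String> template;

public void publishDump(String text) {
    // Serializes the payload with the template's value serializer
    // and PUBLISHes it on the given channel.
    template.convertAndSend("spring-data-book:pubsub-test:dump", text);
}
```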
    Spring’s Cache Abstraction with Redis
Spring 3.1 introduced a common and reusable caching abstraction. This makes it easy to cache the
results of method calls in your POJOs without having to explicitly manage the process of
checking for the existence of a cache entry, loading new ones, and expiring old cache entries.

Spring Data Redis supports this generic caching abstraction with the
o.s.data.redis.cache.RedisCacheManager.
To designate Redis as the backend for using the caching annotations in Spring, you just need
to define a RedisCacheManager bean in your ApplicationContext. Then annotate your POJOs like
you normally would, with @Cacheable on methods you want cached.
@Configuration
@EnableCaching
public class CachingConfig extends ApplicationConfig {
…
}


@Bean
public RedisCacheManager redisCacheManager() {
       RedisTemplate tmpl = new RedisTemplate();
       tmpl.setConnectionFactory( redisConnectionFactory() );
       tmpl.setKeySerializer( IntSerializer.INSTANCE );
       tmpl.setValueSerializer( new JdkSerializationRedisSerializer() );
       // Initialize the template manually, since it is not a container-managed bean
       tmpl.afterPropertiesSet();
       RedisCacheManager cacheMgr = new RedisCacheManager( tmpl );
       return cacheMgr;
}

 @Cacheable(value = "greetings")
 public String getCacheableValue() {
    long now = System.currentTimeMillis();
    return "Hello World (@ " + now + ")!";
 }
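With caching enabled, repeated calls within the entry's lifetime return the first computed string, timestamp and all. The check-cache / compute / store cycle that @Cacheable wraps around the method can be sketched locally like this (the class is hypothetical; the real cache lives in Redis and supports expiry):

```java
import java.util.HashMap;
import java.util.Map;

public class CacheSketch {
    // Stand-in for the "greetings" cache region
    private final Map<String, String> greetings = new HashMap<String, String>();

    // Stand-in for what @Cacheable(value = "greetings") does around the method
    public String getCacheableValue() {
        return greetings.computeIfAbsent("getCacheableValue",
                k -> "Hello World (@ " + System.currentTimeMillis() + ")!");
    }

    public static void main(String[] args) {
        CacheSketch sketch = new CacheSketch();
        // Both calls return the identical cached string, timestamp included.
        System.out.println(sketch.getCacheableValue().equals(sketch.getCacheableValue())); // true
    }
}
```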
