This document provides an overview of advanced Hibernate concepts including:
- The persistence lifecycle of objects in Hibernate and the four states objects can be in: transient, persistent, detached, and removed.
- The persistence context and how it acts as a cache and guarantees object identity within a session scope.
- Transaction management in Hibernate including demarcating transaction boundaries programmatically or declaratively.
- Additional topics covered include the persistence manager, concurrency control, caching, and batch processing.
6. Persistence Lifecycle
An object in Hibernate can be in one of four states:
1. Transient:
A transient instance is not associated with any database row.
Its state is lost as soon as it is no longer referenced.
Instances that are referenced only by other transient instances are themselves transient.
2. Persistent:
An entity instance with a database identity.
Persistent instances are always associated with a persistence
context.
7. Persistence Lifecycle
3. Detached:
A detached instance is still associated with a database row.
Detached instances are not associated with a persistence context.
4. Removed:
Removed instances are scheduled for deletion, but are still
managed by the persistence context.
10. Persistence Context
Each session has one internal persistence context.
The persistence context can be considered a cache of
persistent instances.
Persistence context is useful for:
Automatic Dirty Check
First-level cache
Guarantee a scope of java object identity
11. Persistence Context – Automatic Dirty Check
Automatic Dirty Check:
Changes made to persistent objects aren’t immediately
propagated to database (write-behind).
To keep lock-times as short as possible.
At synchronization time, Hibernate propagates only
modified persistent instances (Dirty Objects).
Hibernate has a strategy to detect which persistent
objects have been modified.
By default, all the columns of a table are included in the
SQL UPDATE statement.
12. Persistence Context – Automatic Dirty Check
Automatic Dirty Check:
Hibernate has the ability to detect exactly which
properties have been modified.
To enable dynamic update/insert:
<class name="BlogItem" table="BLOG_ITEMS"
dynamic-update="true" dynamic-insert="true" >
</class>
It is recommended to use this setting when you have an
extraordinary large number of columns (more than 50 columns)
13. Persistence Context – First Level Cache
First-Level Cache:
Hibernate session is a cache of its persistent instances.
Hibernate session remembers all its persistent instances.
Persistence context has a reference to the persistent
object and its snapshot.
Snapshot is used for dirty checking.
It is always on and you can’t turn it off.
Benefits:
Significant performance enhancement
Protects your application from stack overflows, in the case of
circular references.
Changes made in a particular context are always visible to all
other code executed inside that context.
14. Persistence Context – Object Identity
Object Identity and Equality:
Persistent object has two identifiers:
Java identity: x == y is true if x and y reference the same Java object.
Database identity: x.getId().equals(y.getId()) is true if x and y represent the same database row.
The scope of object identity: is the scope under which the java
identity is guaranteed to be equivalent to the database identity.
No Identity Scope: if a row is accessed twice, then two different java
instances will be returned.
Session-scoped identity: if a row is accessed twice within a
session, then one java instance will be returned.
Process-scoped identity: one instance represents the row in the
whole JVM.
Hibernate implements session-scoped identity.
15. Persistence Context – Object Identity
Session session1 = sessionFactory.openSession();
Transaction tx1 = session1.beginTransaction();
Item a = (Item) session1.get(Item.class, new Long(1234) );
Item b = (Item) session1.get(Item.class, new Long(1234) );
( a==b ) // True, persistent a and b are identical
(a.getId().equals(b.getId())) // True, match the same record
tx1.commit();session1.close();
// References a and b are now to an object in detached state
Session session2 = sessionFactory.openSession();
Transaction tx2 = session2.beginTransaction();
Item c = (Item) session2.get(Item.class, new Long(1234) );
( a==c ) // False, detached a and persistent c are not identical
(a.getId().equals(c.getId())) // True, match the same record
tx2.commit();session2.close();
16. Persistence Context – Object Identity
Do not treat detached objects as identical in memory.
Consider a, b and c are detached objects, what is the size of
the Set collection?
session2.close();
Set allObjects = new HashSet();
allObjects.add(a);
allObjects.add(b);
allObjects.add(c);
If the equals() method is implemented well, then the set size is
one.
Otherwise,
The size is one, if a, b and c objects are fetched using the same
session.
The size is more than one, if a, b and c objects are fetched using
different sessions.
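The second case can be reproduced with plain Java. The class below is a minimal stand-in (not a Hibernate-managed entity): it does not override equals()/hashCode(), so two instances that represent the same database row, as if loaded in different Sessions, are seen by a HashSet as two distinct elements.

```java
import java.util.HashSet;
import java.util.Set;

// Minimal stand-in for a detached entity; equals()/hashCode() are NOT
// overridden, so Java reference identity decides set membership.
class NaiveItem {
    final Long id;
    NaiveItem(Long id) { this.id = id; }
}

public class IdentityScopeDemo {
    public static void main(String[] args) {
        // Simulates loading row 1234 in two different Sessions:
        // two distinct Java instances for the same database row.
        NaiveItem a = new NaiveItem(1234L);
        NaiveItem c = new NaiveItem(1234L);

        Set<NaiveItem> all = new HashSet<>();
        all.add(a);
        all.add(c);

        // Default Object.equals() is reference equality, so size is 2.
        System.out.println(all.size()); // prints 2
    }
}
```

This is exactly the conflict the next slides address by overriding equals() and hashCode().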
17. Persistence Context – Object Equality
To prevent the previous conflict, you need to supply
your own implementation of the equals() and
hashCode() methods:
Equality Implementations:
Database Identifier Equality (Primary Key)
All Properties Equality (Except the primary key)
Business Key Equality
18. Persistence Context – Object Equality
Database Identifier Equality:
Can’t be used for transient objects
For example:
public boolean equals(Object o) {
if (this==o) return true;
if (id==null) return false;
if ( !(o instanceof User) ) return false;
final User that = (User) o;
return this.id.equals( that.getId() );
}
19. Persistence Context – Object Equality
All Properties Equality
Instances for this row from different Sessions are no longer equal
if one is modified (for example, the user changed his password).
For example:
public boolean equals(Object o) {
if (this==o) return true;
if ( !(o instanceof User) ) return false;
final User that = (User) o;
if (!this.getUsername().equals(that.getUsername())) return false;
if (!this.getPassword().equals(that.getPassword())) return false;
return true;
}
20. Persistence Context – Object Equality
Business Key Equality:
Hints to identify a business key:
Attribute that would identify our instance in the real world.
Immutable attributes (rarely updated attributes)
An attribute with a UNIQUE database constraint.
Any date or time-based attribute, for example the creation
datetime
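Following the hints above, here is a sketch of business-key equality, assuming a hypothetical User entity whose immutable, unique username serves as the business key (the class and field names are illustrative, not from the deck's mapping files):

```java
import java.util.Objects;

// Sketch: business-key equality for a hypothetical User entity.
// The database id is NOT used, so this works for transient,
// persistent and detached instances alike.
class User {
    private Long id;               // database identifier (ignored by equals)
    private final String username; // business key: unique and immutable

    User(String username) { this.username = username; }

    String getUsername() { return username; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof User)) return false;
        User that = (User) o;
        return Objects.equals(username, that.getUsername());
    }

    @Override
    public int hashCode() {
        // Must be consistent with equals(): based only on the business key.
        return Objects.hashCode(username);
    }
}

public class BusinessKeyDemo {
    public static void main(String[] args) {
        User u1 = new User("ahmad");
        User u2 = new User("ahmad"); // "same row" loaded elsewhere
        System.out.println(u1.equals(u2)); // prints true
    }
}
```

With this implementation, a HashSet holding detached instances of the same row fetched from different sessions has size one, resolving the earlier conflict.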
22. Persistence Manager
The persistence manager is exposed through several interfaces:
Session, Query, Criteria and Transaction
The persistence manager provides services for the
following:
Basic CRUD operations
Query Execution
Control of Transactions
Management of the Persistence Context
29. Persistence Manager - Reattaching
Reattaching a modified detached instance
item.setDescription(...); // Loaded in previous Session
Session sessionTwo = sessionFactory.openSession();
Transaction tx = sessionTwo.beginTransaction();
sessionTwo.update(item); // re-attach the object as dirty
//sessionTwo.delete(item); //re-attach the object for deletion
item.setEndDate(...);
tx.commit();sessionTwo.close();
select-before-update="true"
30. Persistence Manager - Reattaching
Reattaching an unmodified detached object:
Session s2 = sessionFactory.openSession();
Transaction tx = s2.beginTransaction();
s2.buildLockRequest(LockOptions.NONE).lock(item); // Re-attach as a clean object
item.setDescription(...);
item.setEndDate(...);
tx.commit();
s2.close();
31. Persistence Manager - Merging
Merging the state of a detached object
The following will cause an exception
NonUniqueObjectException
Error Message: persistent instance with the same database
identifier is already associated with the Session.
detachedItem.getId(); // The database identity is "1234"
detachedItem.setDescription(...);
// New Session is created
Session s = sessionFactory.openSession();
Transaction tx = s.beginTransaction();
Item item2 = (Item) s.get(Item.class, new Long(1234));
s.update(detachedItem); // Throws exception!
tx.commit(); s.close();
32. Persistence Manager - Merging
Hibernate provide merge() to solve this issue
item.getId() // The database identity is "1234"
item.setDescription(...);
// Create new session
Session s = sessionFactory.openSession();
Transaction tx = s.beginTransaction();
Item item2 = (Item) s.get(Item.class, new Long(1234));
Item item3 = (Item) s.merge(item);
(item == item2) // False
(item == item3) // False
(item2 == item3) // True
tx.commit(); s.close();
return item3;
34. Persistence Manager – Manage Persistence
Context – Control First Level Cache
Controlling the persistence context cache (first-level
cache)
Persistence context has a reference for the persistent
object and its snapshot
Persistence context cache never shrinks automatically.
This may lead to an OutOfMemoryError.
You can detach persistent objects manually using
Session.evict(object): to detach specific object
Session.clear(): to detach all persistent objects
You can disable dirty checking using:
Session.setReadOnly(object, true): Persistence context will no
longer maintain the snapshot.
35. Persistence Manager – Manage Persistence
Context - Flushing
Flushing the persistence context
Changes made to persistent objects are not immediately
propagated to the database (write-behind):
Collect many changes into a minimal number of database
requests.
Shorter lock durations inside the database
Take advantage of the JDBC Batch
Flushing occurs at the following times:
When a transaction is committed
Before a query is executed, but only if the query results
would be affected by the pending changes.
When session.flush() is called explicitly.
36. Persistence Manager – Manage Persistence
Context - Flushing
You can control the default behavior of hibernate
flushing by calling session.setFlushMode(FlushMode
mode):
FlushMode.AUTO: this is the default behavior
FlushMode.COMMIT
FlushMode.MANUAL
38. Transaction Management – Essentials
Transaction Attributes:
Atomicity: if one step fails, the whole unit of work must
fail.
Consistency: leaves data in a clean and consistent state after
the transaction completes.
Isolation: multiple users can work concurrently without
compromising the data correctness.
Durability: once the transaction completes, all changes
become persistent.
39. Transaction Management – Transaction
Demarcation
Transaction Demarcation
Defines the transaction boundaries (the start and the end
points).
There is no way to send an SQL statement outside
transaction boundaries.
Transactions have to be short to reduce resource
allocation.
40. Transaction Management – Transaction
Boundaries
Transaction boundaries can be set either programmatically
or declaratively.
Programmatically using:
JDBC API: to handle transactions on one resource
JTA (requires Application Server): to handle transactions on many
resources
Declaratively using CMT (requires EJB container)
41. Transaction Management – Transactions in
Hibernate
Hibernate Transactions:
Represented by org.hibernate.Transaction
interface
Hibernate transaction is unified:
It works in a non-managed plain JDBC environment
It works in an application server with JTA
Hibernate transaction integrates with persistence context
For example, session is flushed automatically when you commit
Hibernate doesn’t roll back in-memory changes to
persistent objects
42. Transaction Management – Transactions in
Hibernate
Hibernate Transactions over JDBC API
Session session = null;
Transaction tx = null;
try {
session = sFactory.openSession(); // session is lazy:
// a connection will be obtained only when the trxn begins
tx = session.beginTransaction(); // conn.setAutoCommit(false)
concludeAuction(session);
tx.commit(); // flush and commit (here conn is released)
// here you can begin another trxn with the same session
} catch (RuntimeException ex) {
if (tx != null) tx.rollback();
} finally {session.close();} // release all other resources
43. Transaction Management – Transactions in
Hibernate
Hibernate Transactions over JTA
To enable JTA transactions:
hibernate.transaction.factory_class =
org.hibernate.transaction.JTATransactionFactory
hibernate.transaction.manager_lookup_class =
org.hibernate.transaction.WebSphereTransactionManagerLookup
The previous example will work without code changes.
44. Transaction Management – Transactions in
Hibernate
Hibernate Transactions over JTA with Multiple Resources
Use JTA interfaces directly
UserTransaction utx = (UserTransaction) new InitialContext()
.lookup("java:comp/UserTransaction");
Session session1 = null;
Session session2 = null;
try {
utx.begin();
session1 = auctionDatabase.getCurrentSession();
session2 = billingDatabase.getCurrentSession();
concludeAuction(session1);
billAuction(session2);
session1.flush(); // it's your responsibility to sync with db
session2.flush();utx.commit();
} catch (RuntimeException ex) {utx.rollback();
} finally {session1.close();session2.close();}
45. Transaction Management – Transactions in
Hibernate
Hibernate can synchronize with the database automatically
before the JTA transaction ends, using:
hibernate.transaction.flush_before_completion
Hibernate can close the session automatically after the JTA
transaction ends, using:
hibernate.transaction.auto_close_session
46. Transaction Management – Transactions in
Hibernate
UserTransaction utx = (UserTransaction) new InitialContext()
.lookup("java:comp/UserTransaction");
Session session1 = null;
Session session2 = null;
try {
utx.begin();
session1 = auctionDatabase.getCurrentSession();
session2 = billingDatabase.getCurrentSession();
concludeAuction(session1);
billAuction(session2);
utx.commit(); // session1 and session2 are flushed and closed automatically
} catch (RuntimeException ex) {
utx.rollback();
}
47. Transaction Management – Transactions in
Hibernate
It is always recommended to use JTA interfaces
directly
It supports transactions handling over multiple resources.
Hibernate can automatically bind the "current" session to
the current JTA transaction
You can separate the transaction demarcation code from data
access code; for example:
48. Transaction Management – Transactions in
Hibernate
Without direct use for JTA interface
public void sessionPerOperationBizMethod() {
myDAO1.getObj(); // uses openSession() and opens a new trxn
myDAO2.saveObj(); // uses openSession() and opens a new trxn
}
With direct use for JTA interface
public void sessionPerRequestBizMethod() {
jtaTrxn.begin();
myDAO1.getObj(); // uses getCurrentSession() to get a new session
myDAO2.saveObj(); // uses getCurrentSession() to get the same session
jtaTrxn.commit();
}
Without direct use for JTA, we need to open a new session and a
new transaction for each database operation.
With direct use for JTA, one transaction and one session are
used to handle the entire request.
49. Transaction Management – Transactions in
Hibernate
To enable CMT with hibernate set the following
configuration:
hibernate.transaction.factory_class =
org.hibernate.transaction.CMTTransactionFactory
51. Concurrency - Essentials
Transaction isolation ensures that concurrent
transactions can work without affecting data integrity.
Traditionally, this has been implemented using locks.
Application inherits the isolation guarantees provided
by the DBMS
Hibernate never locks anything in memory
Your responsibility is to understand the database isolation
and how to change the isolation behavior
52. Concurrency – Isolation Issues
Transaction Isolation Issues
1. Lost Update:
Occurs if two transactions update the same row and then the
second transaction aborts, causing both changes to be lost.
This occurs in systems that don’t implement locking.
The concurrent transactions are not isolated.
53. Concurrency – Isolation Issues
2. Dirty Read:
Occurs if a transaction reads changes made by another
transaction that has not been committed.
The changes of the other transaction may later be rolled back.
Impact: Reading dirty data (not committed data)
54. Concurrency – Isolation Issues
3. Unrepeatable Reads:
Occurs if a transaction reads a row twice and reads different
state each time.
This situation occurs when another transaction updates the row
and commits between the two reads.
55. Concurrency – Isolation Issues
4. Second Lost Update:
This is a special case of Unrepeatable Reads
Occurs if two concurrent transactions read a row; the first one
writes to it and commits, and then the second one writes to it
and commits.
Impact: The changes made by the first transaction are lost
56. Concurrency – Isolation Issues
5. Phantom Read:
Occurs when a transaction executes a query twice, the second
result set includes rows that weren’t visible in the first result set.
This situation occurs when another transaction inserts or
deletes rows and commits between the two query
executions.
57. Concurrency – Isolation Levels
Isolation Levels:
1. Read Uncommitted:
This level permits dirty reads but doesn't permit lost updates:
one transaction cannot write to a row if another uncommitted
transaction has already written to it.
58. Concurrency – Isolation Levels
2. Read Committed:
This level permits unrepeatable reads but doesn't permit dirty
reads or lost updates.
Reading transactions don't block other transactions from
accessing a row (for reading or writing).
An uncommitted writing transaction blocks all other transactions
from accessing the row.
59. Concurrency – Isolation Levels
3. Repeatable Read:
This isolation level doesn't permit unrepeatable reads, dirty
reads, or lost updates.
Phantom reads may occur
Reading transactions block writing transactions (but not other
reading transactions)
Writing transactions block all other transactions
60. Concurrency – Isolation Levels
4. Serializable:
Provides the strictest isolation
This isolation level executes the transactions one after another
serially rather than concurrently.
This isolation level doesn’t permit phantom reads.
61. Concurrency – Isolation Levels
Isolation Level     Lost Updates   Dirty Reads   Non-Repeatable Reads   Phantom Reads
Read Uncommitted    -              May Occur     May Occur              May Occur
Read Committed      -              -             May Occur              May Occur
Repeatable Read     -              -             -                      May Occur
Serializable        -              -             -                      -
• No DBMS allows "lost updates".
62. Concurrency – Isolation Levels
Isolation Level     Write Lock   Read Lock   Range Lock
Read Uncommitted    Exclusive    -           -
Read Committed      Exclusive    Shared      -
Repeatable Read     Exclusive    Exclusive   -
Serializable        Exclusive    Exclusive   Exclusive
• Read lock: allows other transactions to read but not to write.
• Shared read lock: the DBMS releases the read lock as soon as the
SELECT operation completes.
• Exclusive read lock: the DBMS keeps the read lock until the end of the
transaction.
• Write lock: blocks other transactions from reading or writing.
• Exclusive write lock: the DBMS keeps the write lock until the end of the
transaction.
63. Concurrency – Isolation Levels
How to choose an isolation level:
Read Uncommitted: extremely dangerous; you don't need to
read dirty data.
Serializable: most applications don't need it; phantom reads
are not usually problematic.
This isolation level tends to scale poorly.
Instead, rely on pessimistic locks that force serialized execution
in the specific situations that need it.
Repeatable Read: with Hibernate you don't need this
isolation level, as the Hibernate cache already gives you
repeatable-read behavior.
Read Committed: the most acceptable isolation level for
most database transactions if you use Hibernate.
Use pessimistic locks to force serialized execution for specific
cases.
64. Concurrency – Isolation Levels in Hibernate
Setting an isolation level:
Use hibernate configuration option:
(hibernate.connection.isolation)
Possible values:
1 = Read Uncommitted
2 = Read Committed
4 = Repeatable Read
8 = Serializable
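The numeric values above are not list numbering: they are the standard java.sql.Connection transaction-isolation constants, which is why they run 1, 2, 4, 8 rather than 1, 2, 3, 4:

```java
import java.sql.Connection;

// hibernate.connection.isolation takes the java.sql.Connection
// constants, so the value you configure maps directly to JDBC.
public class IsolationConstants {
    public static void main(String[] args) {
        System.out.println(Connection.TRANSACTION_READ_UNCOMMITTED); // 1
        System.out.println(Connection.TRANSACTION_READ_COMMITTED);   // 2
        System.out.println(Connection.TRANSACTION_REPEATABLE_READ);  // 4
        System.out.println(Connection.TRANSACTION_SERIALIZABLE);     // 8
    }
}
```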
65. Concurrency – Pessimistic Locks
Obtaining Additional Isolation Guarantees
For the following example assume Read Committed
isolation is used:
Item i = (Item) session.get(Item.class, 123);
// Maybe another concurrent trxn changes the price
i.setPrice(i.getPrice() - 10);
tx.commit();// will sync with db
66. Concurrency – Pessimistic Locks
For the previous example, what is the expected
behavior, if the isolation level used was repeatable
read?
To prevent the previous situation you can either:
Increase the isolation level to serializable at application-
level
Or keep the isolation level as it is and use pessimistic lock
for this specific case.
Item i = (Item) session.get(Item.class, 123, LockMode.UPGRADE); // select … for update
i.setPrice(i.getPrice() - 10);
tx.commit();// will sync with db
69. Session Scope – Session Per Operation
Session per operation
This is an anti-pattern
Open and close a session for each database call
Session scope is the same as the transaction scope.
Remember openSession()
70. Session Scope – Session Per Request
Session Per Request
This is the most common transaction pattern
A single session will be used to process single user event.
Session scope is the same as the transaction scope.
Remember getCurrentSession().
71. Session Scope – Session Per Request with
Detached Objects
Session Per Request with Detached Objects
This implementation pattern can be used for long
conversations (for example wizard dialog)
Objects are held in detached state during user think-time.
Any modification of these objects is made persistent
manually through reattachment or merging.
Automatic Versioning must be enabled to isolate
concurrent conversations.
The conversation in this implementation is not atomic, as it
spans several database transactions.
73. Session Scope – Session Per Conversation
Session per Conversation
Extend a persistence context to span the whole conversation
You have to disable automatic flush
Persistence context isn’t closed after completing a user
request. It is just disconnected from the database.
(session.disconnect())
When the user continues in the conversation, the persistence
context re-connect to the database (session.connect()).
At the end of the conversation, the persistence context will be
synchronized with the database and closed.
This implementation eliminates the detached object state.
Automatic Versioning must be enabled to isolate concurrent
conversations.
The conversation in this implementation is atomic, as the
changes are not flushed until the last step.
76. Caching - Essentials
The cache keeps a representation of current
database state close to the application to avoid a
database hit.
Caching Scopes:
Session-Scope Cache
Process-Scope Cache (Shared for the entire JVM)
Cluster-Scope Cache (Shared between multiple
processes)
77. Caching – Hibernate Caching
Hibernate uses different types of caches:
First-Level Cache
Session-Scoped Cache
Mandatory
Lookup using the entity primary key using
session.load()/session.get()
Second-Level Cache
Process-Scoped Cache & Cluster-Scoped Cache; depends on
the caching provider (SessionFactory-scope)
Optional; to enable it set
hibernate.cache.use_second_level_cache=true
You need to define a cache provider using
hibernate.cache.region.factory_class
Lookup using the entity primary key using
session.load()/session.get() methods
78. Caching – Hibernate Caching
Query Cache
Process-Scoped Cache
Optional; to enable it set hibernate.cache.use_query_cache=true
You need to define a cache provider using
hibernate.cache.region.factory_class
The Second-Level Cache can be disabled, but then the Query Cache is mostly useless.
Hibernate performs the lookup using the query and its parameters:
{query, params}
It caches only the entity ids; e.g. {'from Emp where name=?', 'ahmad'}:
4, 6, 7
For each query/params a new cache entry will be added.
Hibernate uses the Query Cache to get the entity ids, then loads each entity using:
the First-Level Cache;
if not there, the Second-Level Cache (if enabled);
if not there, a direct database hit by primary key.
If Hibernate updates the table, it evicts the relevant query cache entries
automatically.
79. Caching – Second-Level Cache
Good Candidates for Second-Level Cache:
Data changes rarely
Non-Critical Data
Non-Financial Data
Data that is local to the application and not shared
80. Caching – Second-Level Cache Configuration
To enable Second-Level Cache
hibernate.cache.provider_class =
org.hibernate.cache.EhCacheProvider
hibernate.cache.use_second_level_cache = true
To Control the Second-Level Cache
Programmatically:
SessionFactory.evict()
To Control the Query Cache Programmatically
SessionFactory.evictQueries()
82. Lazy Loading – Object-Retrieval Options
Hibernate provides the following ways to get objects
out of the database:
Navigating the object graph; For example:
user.getAddr().getCity()
Retrieval by identifier; using get() and load() methods
Criteria interface
HQL
Native SQL Queries
83. Lazy Loading - Essentials
Hibernate by default loads only the objects you’re
querying for.
Item item = (Item) session.load(Item.class, new Long(123));
Session.load() creates a proxy that looks like
the real thing
84. Lazy Loading - Proxy
Proxy is a placeholder that is generated at runtime.
Proxy holds only the entity Id.
Proxy triggers the loading of the real object when it’s
accessed for the first time.
By default, hibernate fetches associated objects and
collections lazily.
85. Lazy Loading - Proxy
Item item = (Item) session.load(Item.class, 1L);
item.getId();
item.getDescription(); // Initialize the proxy
After line 3, the real object is initialized but the
associated objects are lazily loaded:
86. Lazy Loading - Proxy
To disable proxy generation for a particular entity:
<class name="CompanyBean" table="company" lazy="false">
session.load(CompanyBean.class, new Integer(2)); // no proxy
To disable proxy generation for a particular
association (Eager Loading):
<class name="UserBean" table="user">
<many-to-one name="address" column="addId"
class="Address" not-null="true"
lazy="false"/>
</class>
87. Lazy Loading – N+1 Selects Problem
When the lazy loading is enabled, if you access any
associated proxy, a second SELECT statement is
executed.
1. List allItems = session.createCriteria(Item.class).list();
// suppose allItems size is 3
2. ((Item) allItems.get(0)).getSeller().getName();
3. ((Item) allItems.get(1)).getSeller().getName();
4. ((Item) allItems.get(2)).getSeller().getName();
Line 1 will trigger the execution for:
Line 1: select * from ITEM
While lines 2 to 4 will trigger the execution for:
Line 2: Select * from User where id = ?
Line 3: Select * from User where id = ?
Line 4: Select * from User where id = ?
88. Lazy Loading – N+1 Selects Problem
N+1 Selects Problem Solution
Batch Fetching
Global Fetch Join
Dynamic Fetch Join
89. Lazy Loading – N+1 Selects Problem – Batch
Fetching
Batch Fetching
With batch fetching, when one proxy must be fetched,
Hibernate goes ahead and initializes several associations at
the same time.
Batch Fetching is a blind-guess optimization
You make a guess and apply a batch size to your class mapping
file, for example:
<class name="User" table="USERS" batch-size="3">
The result is n/3+1 SQL statements:
SELECT * FROM item
SELECT * FROM seller WHERE itemId IN (?, ?, ?)
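The "n/3+1" count generalizes: with n proxies and batch size b, batch fetching issues 1 statement for the items plus ceil(n/b) batched statements, instead of 1 + n. A small illustrative helper (not a Hibernate API) makes the arithmetic explicit:

```java
// Illustrative helper (not part of Hibernate): number of SQL statements
// issued when fetching n items plus their sellers with proxy batch size b.
public class BatchFetchMath {
    static int statements(int n, int batchSize) {
        // 1 SELECT for the items + one IN (...) SELECT per batch of proxies;
        // (n + batchSize - 1) / batchSize is integer ceil(n / batchSize)
        return 1 + (n + batchSize - 1) / batchSize;
    }

    public static void main(String[] args) {
        System.out.println(statements(3, 3));   // prints 2
        System.out.println(statements(100, 3)); // prints 35, instead of 101 unbatched
    }
}
```

This is why batch fetching is called a blind-guess optimization: you tune b against typical values of n.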
90. Lazy Loading – N+1 Selects Problem – Global
Fetch Join
Global Fetch Join:
Don't use it unless you really need it; for example:
"Every time I need an Item, I also need the seller of that item."
<class name="Item" table="ITEM">
<many-to-one name="seller" class="User"
column="SELLER_ID" fetch="join"/>
</class>
The Following SQL statement will be executed
select i.*, u.* from ITEM i left outer join USERS u on
i.SELLER_ID = u.USER_ID where i.ITEM_ID = ?
To control the number of joined tables use the following config
hibernate.max_fetch_depth=3
The Recommended value is 1-5 tables
91. Lazy Loading – N+1 Selects Problem – Dynamic
Fetch Join
Dynamic Fetch Join
Using HQL
from UserBean u join fetch u.company c where u.name
like '%a%' and c.name = 'STS'
select [distinct] u from UserBean u left join fetch
u.addresses a where u.name like '%a%' and a.city =
'amman'
Using Criteria
Criteria criteria = session.createCriteria(User.class)
.createCriteria("company").add…
Criteria criteria = session.createCriteria(User.class)
.createAlias("company", "c").add…
Criteria criteria = session.createCriteria(User.class)
.createAlias("company", "c", JoinType.RIGHT_OUTER_JOIN).add…
93. Bulk Processing – Updating and Deleting
Bulk processing enables you to update or delete
objects without retrieving them
Query q = session.createQuery("update Item i set
i.isActive = :isActive");
q.setBoolean("isActive", true);
int updatedItems = q.executeUpdate();
94. Bulk Processing – Copying
Insert new entities selected from another entity. For
example:
VIPCustomer must be a subclass of Customer.
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
String hqlInsert =
"insert into VIPCustomer (id, name) select
c.id, c.name from Customer c where c.balance > 10000";
int createdEntities =
session.createQuery(hqlInsert).executeUpdate();
tx.commit();
session.close();
95. Batch Processing
Example:
Transaction tx = session.beginTransaction();
ScrollableResults itemCursor =
session.createQuery("from Item")
.scroll(); // cursor is a pointer to a result set that stays in the DB
int count=0;
while ( itemCursor.next() ) {
Item item = (Item) itemCursor.get(0);
modifyItem(item);
if ( ++count % 100 == 0 ) {
session.flush();session.clear();
}
}
tx.commit();session.close();
96. Batch Processing - Tuning
For best performance, set hibernate.jdbc.batch_size
to the size of your processing batch (100 in the
previous example).
Disable the second-level cache for that persistent
class.
98. Hibernate Annotations - Essentials
Hibernate Annotations provides annotation-based
mapping metadata.
Metadata source:
Annotations
Can be overridden using XML
Hibernate Annotations include
Standard JPA annotations
Hibernate-specific extension annotations
99. Hibernate Annotations – Marking POJO as
persistent entity
To mark a POJO as persistent entity use @Entity
annotation at class-level
@Entity
public class Flight implements Serializable {
In this example, Flight class is mapped to Flight table
To define the table, catalog, and schema names
use the @Table annotation
@Entity @Table(name="tbl_sky")
public class Sky
You can use @Table to define unique constraints
@Table(name="tbl_sky",
uniqueConstraints = {@UniqueConstraint(columnNames={"month", "day"})})
100. Hibernate Annotations - Define Identity
property
To declare the identifier property of a persistent
entity, use @Id annotation at field level
@Id
private long id;
To define the identifier generation strategy use
@GeneratedValue annotation
@Id @GeneratedValue(strategy=GenerationType.SEQUENCE, generator="SEQ_STORE")
private long id;
@Id @GeneratedValue(strategy=GenerationType.IDENTITY)
private long id;
101. Hibernate Annotations - Define Basic
Property
Every non-static property of an entity is considered
persistent, unless you annotate it as @Transient
@Transient
private String message;
To define the column name, constraints and length
use @Column annotation at field level
@Column(updatable = false, name = "flight_name", nullable = false, length = 50, unique = false)
private String name;
102. Hibernate Annotations - Association
One-to-One Association
@OneToOne(cascade = CascadeType.ALL)
@PrimaryKeyJoinColumn
private Heart heart;
Many-to-One Association
@ManyToOne( cascade = {CascadeType.PERSIST, CascadeType.MERGE})
@JoinColumn(name="COMP_ID")
private Company comp;
104. Hibernate Validator - Essentials
Hibernate Validator defines a metadata model and
API for JavaBean validation.
Metadata source:
Annotations
Can be overridden using XML
The API is not tied to a specific application tier.
105. Hibernate Validator – Applying Constraints
Example:
public class Car {
@NotNull
private String manufacturer;
@NotNull @Size(min = 2, max = 14)
private String licensePlate;
@Min(2)
private int seatCount;
}
106. Hibernate Validator – Validating Constraints
Car car = new Car("Morris", "DD-AB-123", 1);
Set<ConstraintViolation<Car>> constraintViolations =
validator.validate(car);
Speaker notes:
• A call to save() may cause an immediate SQL INSERT statement; it depends on the id generator. It is recommended to fully initialize the object before saving it; this eliminates the need for an extra UPDATE statement.
• get(): returns null if the object does not exist, and always returns all the attributes (no lazy loading). load(): throws ObjectNotFoundException, and supports lazy loading.
• Here we are using the automatic dirty-check service to verify whether the object has been modified or not. Note: there is no need to explicitly call session.flush().
• Do I have to load an object to delete it? Yes: an instance has to be in the persistent state to be removed (note that a proxy is good enough; use load(), not get()). The reason is the interceptors. Otherwise, use bulk operations. With the hibernate.use_identifier_rollback configuration option, Hibernate sets the database identifier property of the deleted item to null after deletion and flushing; it is then a clean transient instance.
• update() re-attaches the detached object and marks it dirty, which implies it will be propagated to the database. The select-before-update option forces Hibernate to execute a SQL SELECT before the SQL UPDATE, to verify whether the object is really dirty; only then are the changes propagated to the database.
• lock() re-attaches the detached object without forcing Hibernate to update the database.
• Here we are only reading data; is it faster to roll back the transaction? There is no performance difference between rollback and commit. Always commit your transaction, and roll back only if there is an exception.
• As you are not using Hibernate transactions, you need to flush the sessions yourself, unless the options on the next slide are enabled.
• Synchronizing updates is a heavy process, so cache only data that rarely changes.
• Always prefer the candidates above in this order.
• In the figure, associated proxies hold only the association foreign keys.
• Imagine we have 100 associated objects. Disabling lazy loading for the association will not solve the issue, as another immediate SELECT statement would be executed.