1) What are the key differences between a correlated subquery and a simple subquery?
In a correlated subquery, the inner query is executed once for every row processed by the outer query, and the outer query returns rows based on each result. In a simple subquery, the inner query is executed once, and its result drives the outer query.
2) What is a deadlock?
A deadlock occurs when two or more sessions block each other: each holds a lock that another session needs, so none of them can proceed until the database rolls one of them back.
3) What are indexes? Name the various types of indexes available, along with their usage (i.e., where and which type of index should be used?)
An index provides direct access to rows, avoiding a full table scan.
The main types are the B-tree index (the default; suited to selective, high-cardinality columns), the function-based index (for queries that filter on an expression), the bitmap index (for low-cardinality columns, mainly in data warehouses), and the domain index (for application-specific data such as text).
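As a sketch of where each type fits (using the standard EMP demo table; index names are illustrative):

```sql
-- B-tree (default): selective, high-cardinality columns in OLTP queries
CREATE INDEX emp_ename_idx ON emp (ename);

-- Function-based: supports predicates such as WHERE UPPER(ename) = 'KING'
CREATE INDEX emp_ename_upper_idx ON emp (UPPER(ename));

-- Bitmap: low-cardinality columns, typically in data warehouses
CREATE BITMAP INDEX emp_deptno_bix ON emp (deptno);
```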
4) What are autonomous transactions?
An autonomous transaction is an independent transaction started from within another (parent) transaction; it must be committed or rolled back before control returns to the parent, and its commit does not affect the parent's uncommitted work.
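A minimal sketch (the error_log table name is assumed for illustration): a logging procedure can commit its insert without committing the caller's pending work.

```sql
CREATE OR REPLACE PROCEDURE log_error (p_msg VARCHAR2)
AS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- runs independently of the caller
BEGIN
  INSERT INTO error_log (logged_at, message)
  VALUES (SYSDATE, p_msg);
  COMMIT;  -- commits only this autonomous transaction, not the parent's work
END;
/
```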
5) What is a mutating table error and how do you overcome it?
A mutating table error (ORA-04091) occurs when a row-level trigger reads or modifies the same table that fired it. Common workarounds are to use a statement-level trigger or a compound trigger instead.
6) What is a REF CURSOR?
A REF CURSOR is used to return a result set from stored procedures, functions, and packages.
A ref cursor works like an ordinary cursor, but it is a datatype, which allows developers to declare cursor variables.
There are two kinds of ref cursor:
a strong ref cursor, where we declare the return type with %ROWTYPE;
a weak ref cursor, where we declare no return type. Its advantage is that it can be opened for any query; it is not bound to a rowtype.
For example, a strong ref cursor:
type curvar_type is ref cursor return emp%rowtype;
a weak ref cursor:
type curvar_type is ref cursor;
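Putting the declaration to work, a sketch of opening and fetching through a ref cursor variable (against the standard EMP demo table):

```sql
DECLARE
  TYPE emp_curvar_type IS REF CURSOR RETURN emp%ROWTYPE;  -- strong ref cursor
  c_emp  emp_curvar_type;
  r_emp  emp%ROWTYPE;
BEGIN
  OPEN c_emp FOR SELECT * FROM emp WHERE deptno = 10;
  LOOP
    FETCH c_emp INTO r_emp;
    EXIT WHEN c_emp%NOTFOUND;
    dbms_output.put_line(r_emp.ename);
  END LOOP;
  CLOSE c_emp;
END;
/
```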
7) What is a materialized view? Where should it be used and what are its advantages?
Unlike an ordinary view, a materialized view physically stores data. Materialized views are commonly used to replicate data from a remote database and to precompute expensive aggregations, which speeds up queries against the base tables.
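A minimal sketch (dept_sal_mv is an assumed name, built over the standard EMP table):

```sql
CREATE MATERIALIZED VIEW dept_sal_mv
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND  -- refresh later via DBMS_MVIEW.REFRESH('DEPT_SAL_MV')
AS
SELECT deptno, SUM(sal) AS total_sal
FROM emp
GROUP BY deptno;
```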
8) What is BULK COLLECT?
BULK COLLECT enables high-speed retrieval of data: it fetches multiple rows into one or more collections in a single operation, rather than into individual variables or records one row at a time.
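A short sketch against the standard EMP table: one fetch fills the whole collection.

```sql
DECLARE
  TYPE ename_tab_type IS TABLE OF emp.ename%TYPE;
  l_names ename_tab_type;
BEGIN
  SELECT ename BULK COLLECT INTO l_names FROM emp;  -- all rows in one fetch
  FOR i IN 1 .. l_names.COUNT LOOP
    dbms_output.put_line(l_names(i));
  END LOOP;
END;
/
```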
9) What is dynamic SQL? How is dynamic SQL fired?
Dynamic SQL is SQL whose text is built and executed at run time, typically via EXECUTE IMMEDIATE or the DBMS_SQL package.
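A minimal sketch with EXECUTE IMMEDIATE, using a bind variable for the run-time value (against the standard EMP table):

```sql
DECLARE
  l_count NUMBER;
BEGIN
  EXECUTE IMMEDIATE
    'SELECT COUNT(*) FROM emp WHERE deptno = :d'
    INTO l_count
    USING 10;                 -- bind value supplied at run time
  dbms_output.put_line(l_count);
END;
/
```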
10) What is row chaining?
Row chaining is the storage of a single row across a chain of blocks, which happens when the row is too large to fit in one block. It primarily occurs with LOB, CLOB, BLOB, and large VARCHAR2 columns.
11) When a table is dropped, what are the associated objects that get invalidated or dropped?
Objects that become invalid: views and synonyms. Objects that get dropped: triggers, indexes, and constraints.
12) Difference between POST-TEXT-ITEM and WHEN-VALIDATE-ITEM?
POST-TEXT-ITEM fires every time the cursor moves out of the field. It does not matter whether the value was newly entered or queried from the database; it fires in both cases.
WHEN-VALIDATE-ITEM fires only when you enter a new value or change an existing value of an item, that is, only when the item's status is CHANGED or NEW.
13) A scenario illustrating the difference between POST-TEXT-ITEM and WHEN-VALIDATE-ITEM:
Suppose you have two fields, A and B.
A has an initial value of 100.
You change the value of A to 200; WHEN-VALIDATE-ITEM fires and, with a validation condition of 'if A < 500', it raises no error.
Now suppose POST-TEXT-ITEM fires and assigns a new value to A, changing it to 700.
The question is: will WHEN-VALIDATE-ITEM raise an error?
Yes, it will fire and raise the error message. WHEN-VALIDATE-ITEM does not require the user to change the value; even when the value is changed internally (as in POST-TEXT-ITEM), WHEN-VALIDATE-ITEM still fires.
Question: Why are PL/SQL ref cursors important to PL/SQL performance?
Answer: The ref cursor is a "pointer" datatype that allows you to quickly reference any cursor result set (often an internal PL/SQL table array) whose data values are kept in fast RAM.
Definition of a ref cursor
The name "ref cursor" can be confusing because a ref cursor variable is not a cursor, but a variable that points to a cursor. In essence, a ref cursor allows you to store the result of a bulk collect from a table (or any PL/SQL cursor result set) in a PL/SQL table (a RAM array) and then reference the array through the ref cursor as a pointer.
Once you load a PL/SQL table (an in-RAM array), a ref cursor allows you to reference the array within your PL/SQL code without having to pass the array itself to a new procedure or function. Instead of passing the "baggage" of the actual RAM array, you simply pass the ref cursor, which points to the array values.
Sunday, 30 December 2012
Important SQL Queries
1) How do you display an employee's length of service, e.g., 3 years 4 months 19 days?
-- '01-JAN-01' stands in for the employee's hire date
select floor((trunc(sysdate) - to_date('01-JAN-01'))/365) years,
       floor(mod((trunc(sysdate) - to_date('01-JAN-01')),365)/30) months,
       mod(mod((trunc(sysdate) - to_date('01-JAN-01')),365),30) days from dual;
2) How do you delete duplicate records from a table without using ROWID?
delete from student
where id in (select max(id) from student
             group by name
             having count(*) > 1);
for the emp table:
delete from emp
where empno in (select max(empno) from emp
                group by ename
                having count(*) > 1);
3)How to retrieve record number 70 from a table of 100 records?
(select * from emp
minus
select * from emp where rownum < 70)
intersect
select * from emp where rownum <=70;
4) What is a Materialized View?
A materialized view is a database object that contains the results of a query. Materialized views are local copies of data located remotely, or are used to create summary tables based on aggregations of a table's data. Materialized views that store data based on remote tables are also known as snapshots.
5) What is normalization? Describe the normal forms, with examples.
Definition: normalization is the process of organizing data in a database; it eliminates redundancy and increases integrity.
There are five normal forms. A database is normally taken to third normal form to maintain referential integrity and avoid redundancy.
First Normal Form:
Every field of a table (each row/column intersection) must contain an atomic value.
Second Normal Form:
Every non-key column must depend on the entire primary key, not just part of a composite key.
Third Normal Form:
No non-key column may depend on another non-key column; there must be no transitive dependencies.
Fourth Normal Form:
A table must not contain two or more independent multi-valued facts. This normal form is often skipped for maintenance reasons.
Fifth Normal Form:
Concerns symmetric (join) dependencies: a table should not be decomposable into smaller tables without losing information.
Each normal form assumes that the table is already in the preceding normal form.
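As an illustrative sketch (table and column names are assumed): cust_city depends on the customer, not on the order, so third normal form moves it into its own table.

```sql
-- Before: orders(order_id, cust_name, cust_city, item, qty)
-- cust_city depends on the customer (a non-key attribute): a transitive dependency.
CREATE TABLE customer (
  cust_id   NUMBER PRIMARY KEY,
  cust_name VARCHAR2(50),
  cust_city VARCHAR2(50));

CREATE TABLE orders (
  order_id NUMBER PRIMARY KEY,
  cust_id  NUMBER REFERENCES customer (cust_id),
  item     VARCHAR2(50),
  qty      NUMBER);
```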
6) Check constraint
CREATE TABLE temp(
  eno    NUMBER CONSTRAINT pk_eno PRIMARY KEY,
  gender CHAR(1) CHECK (gender IN ('M','F')));
7) What happens in the back end when we type "SELECT * FROM table"?
First, Oracle checks the System Global Area (SGA) to see whether the same statement has been executed earlier.
If it has, the cached parse tree and execution plan are reused (a soft parse).
If not, the statement is parsed and the resulting parse tree and execution plan are stored in the SGA (a hard parse). The query is then executed and the output is returned to the application.
8) PRE-QUERY
The Pre-Query trigger fires just before Form Builder issues the SELECT statement to the database, after the operator has defined the example record by entering query criteria in Enter Query mode.
9) POST-QUERY
The Post-Query trigger fires after the query has executed, when Form Builder is fetching records into the form. Post-Query fires once for each record retrieved into the form, which allows you to read and set the values of items in a fetched record before the operator sees them displayed in the block.
10) Matching names that start with 'S' and end with 'N', without using LIKE:
substr(name,1,1) = 'S' and
substr(name,-1) = 'N'
instr(name,'S') = 1 and
instr(name,'N') = length(name)
'S' || substr(name,2,length(name)-2) || 'N' = name
11) How to find Duplicate Records
select * from emp where rowid not in ( select min(rowid) from emp group by ename);
12) Pivot table (turning rows into columns)
select decode(r, 1, to_char(sal), 2, sno)
from salary, (select rownum r from all_objects where rownum <= 2)
where sno = 1;
select stno,
       max(decode(rn, 1, course_code)) s1,
       ...
       max(decode(rn, N, course_code)) sN
from (select stno, course_code,
             row_number() over (partition by stno order by course_code) rn
      from t)
group by stno;
13) Generating the Sequence Number without using Sequence
SELECT * FROM (SELECT LEVEL FROM dual CONNECT BY LEVEL < 10);
14) Pair each EMPNO with its counterpart from the reverse ordering (highest with lowest):
SELECT EMPNO_ACN,EMPNO_DES FROM
(SELECT EMPNO EMPNO_ACN, RANK() OVER (ORDER BY EMPNO DESC) R FROM EMP ) A,
(SELECT EMPNO EMPNO_DES, RANK() OVER (ORDER BY EMPNO) X FROM EMP ) B
WHERE A.R = B.X
15)
What is the difference between DBMS and RDBMS?
Ans:
1. A DBMS is typically a single-user system; an RDBMS is a multi-user system.
2. A DBMS need not store data in the form of tables; an RDBMS stores data in tables.
3. Most DBMSs do not support client/server architecture; RDBMSs do.
4. Most DBMSs do not support distributed databases; RDBMSs do.
5. A DBMS stores and retrieves small amounts of data; an RDBMS stores and retrieves large amounts of data.
6. An RDBMS is more secure than a DBMS.
7. In an RDBMS the relation is more important than the object itself; in a DBMS the entity is more important.
8. In a DBMS tables are not related; in an RDBMS tables are related.
16)
Is there any function in Oracle to check whether a value is numeric or not? I have a column which contains both numeric and alphanumeric data, and I want to fetch only the numeric data from that column. Can anyone tell me how to do it?
Hi Anu,
There is no built-in function that checks for a numeric value, but you can write your own, as follows.
CREATE OR REPLACE FUNCTION isnumber (p_num VARCHAR2) RETURN NUMBER
AS
  a NUMBER;
BEGIN
  a := p_num;  -- raises an exception if p_num is not numeric
  RETURN 1;
EXCEPTION WHEN OTHERS THEN
  RETURN 0;
END;
/
This function can now be called in a SQL query; it returns 1 if the value passed is numeric, otherwise 0.
OR
select * from test where translate(a, '_0123456789', '_') is null;
=====================================
Check Constraint
--------------------
ALTER TABLE suppliers
ADD CONSTRAINT check_supplier_name
CHECK (supplier_name IN ('IBM', 'Microsoft', 'Nvidia'));
How to see the difference between two tables' columns:
========================================================
scott@ORA817.US.ORACLE.COM> create table t1 as select * from emp where 1=0;
Table created.
scott@ORA817.US.ORACLE.COM> create table t2 as select * from t1;
Table created.
scott@ORA817.US.ORACLE.COM>
scott@ORA817.US.ORACLE.COM> alter table t2 drop column ename;
Table altered.
scott@ORA817.US.ORACLE.COM> alter table t2 modify job varchar2(10);
Table altered.
scott@ORA817.US.ORACLE.COM>
scott@ORA817.US.ORACLE.COM> column data_type format a10
scott@ORA817.US.ORACLE.COM> (
2 select 'IN T1, NOT T2', column_name,data_type,data_length
3 from user_tab_columns
4 where table_name = 'T1'
5 MINUS
6 select 'IN T1, NOT T2', column_name,data_type,data_length
7 from user_tab_columns
8 where table_name = 'T2'
9 )
10 UNION ALL
11 (
12 select 'IN T2, NOT T1', column_name,data_type,data_length
13 from user_tab_columns
14 where table_name = 'T2'
15 MINUS
16 select 'IN T2, NOT T1', column_name,data_type,data_length
17 from user_tab_columns
18 where table_name = 'T1'
19 )
20 /
'INT1,NOTT2' COLUMN_NAME DATA_TYPE DATA_LENGTH
------------- ------------------------------ ---------- -----------
IN T1, NOT T2 ENAME VARCHAR2 10
IN T1, NOT T2 JOB VARCHAR2 9
IN T2, NOT T1 JOB VARCHAR2 10
scott@ORA817.US.ORACLE.COM>
===============================================================
20) How to find the department-wise second maximum salary?
select distinct d1.deptno, e1.sal
from emp e1, dept d1
where 2 = (select count(distinct e2.sal)
from emp e2, dept d2
where e2.sal >= e1.sal
and d2.deptno = e2.deptno
and d1.deptno = d2.deptno)
and d1.deptno = e1.deptno
=================================
21) A cursor returns 10 records; is it possible to process only the 5th record?
Ans:
Yes, it is possible: check %ROWCOUNT after each fetch.
declare
  cursor cur_t is select * from t4;
  rec_t t4%rowtype;
begin
  open cur_t;
  for i in 1..10
  loop
    fetch cur_t into rec_t;
    if cur_t%rowcount = 5  -- or: if i = 5
    then
      dbms_output.put_line('no: ' || rec_t.no || ', name: ' || rec_t.name);
    end if;
  end loop;
  close cur_t;
end;
=======================================================
Mutating table example:
-----------------
Example 1
Selecting from a mutating table in a row trigger (tested on Oracle 8.1.7).
We will explain this situation with an example. We have two tables, "A" and "B": "A" is the master table and "B" the detail table. A foreign key between "B" and "A" is declared with the ON DELETE CASCADE option.
Here are the CREATE statements:
drop table B;
drop table A;
create table A (
ida number not null,
vala varchar2(10),
primary key(ida));
create table B (
idb number,
valb varchar2(10),
foreign key (idb) references A (ida) on delete cascade)
/
create or replace trigger b_br
after delete on B
for each row
declare
n integer;
begin
select count(*) into n from A;
dbms_output.put_line('there are ' || n || ' rows in A');
dbms_output.put_line('after statement on B');
dbms_output.new_line;
end;
/
insert into A values(1,'Table A');
insert into A values(2,'Table A');
insert into B values(1,'Table B');
insert into B values(1,'Table B');
commit;
set serveroutput on;
delete from A where idA = 1;
ERROR at line 1:
ORA-04091: table SCOTT.A is mutating, trigger/function may not see it
ORA-06512: at "SCOTT.B_BR", line 4
ORA-04088: error during execution of trigger 'SCOTT.B_BR'
1) Retrieve the last 10 rows from a table
select * from emp
minus
select * from emp where rownum <= (select count(*) - 10 from emp);
(or)
select * from (select rownum a, class_code, class_desc from clm)
where a > (select max(rownum) - 10 from clm);
2) How to retrieve the duplicate rows from a table
SELECT <column_name>, COUNT(*) FROM <table_name>
GROUP BY <column_name>
HAVING COUNT(*) > 1;
3) Department-wise 2nd highest salary
select distinct d1.deptno, e1.sal
from emp e1, dept d1
where 2 = (select count(distinct e2.sal)
from emp e2, dept d2
where e2.sal >= e1.sal
and d2.deptno = e2.deptno
and d1.deptno = d2.deptno)
and d1.deptno = e1.deptno
===================================================
select max(e1.sal), e1.deptno
from emp e1
where e1.sal < (select max(sal)
                from emp e2
                where e2.deptno = e1.deptno)
group by e1.deptno
Hewitt Associates(26-JUN-07):-
==============================
1) To get the balance (credits minus debits) based on the DRCR flag:
select SUM(DECODE(DRCR,'C',amount,0))-SUM(DECODE(DRCR,'D',amount,0)) from account
2)
A VARCHAR2 column SNO contains the values:
3
2
1
A001
A002
A003
1001
1002
1003
Using ORDER BY, what order will the rows come out in?
Answer:
1
1001
1002
1003
2
3
A001
A002
A003
VARCHAR2 values sort character by character on their character-set (ASCII) values, so digits sort before uppercase letters, which in turn sort before lowercase letters.
Inline view:
===============
SELECT a.ename, a.deptno
FROM (SELECT ename, deptno
      FROM emp) a,
     (SELECT deptno
      FROM emp) b
WHERE a.deptno = b.deptno
select sal, sum(sal) over (order by sal) TOTALSALARY from salary;
SAL TOTALSALARY
---------- -----------
1000 1000
2000 3000
3000 6000
4000 10000
Checking that a string does not contain any special character (other than '_'):
SELECT NVL(LENGTH(LTRIM(RTRIM(TRANSLATE('RAGHA **NDRA','ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz1234567890_',' ')))), 0)
FROM DUAL;
-- returns 0 when the string contains only letters, digits, and '_'
Another way (Oracle Forms trigger code; note that l_found should be initialized to 0 before the loop):
op_type1 := replace(:blk_cgtm_op_type.op_type, ' ', '*');
FOR i IN 1 .. length(op_type1)
LOOP
  op_char := ASCII(SUBSTR(op_type1, i, 1));
  IF op_char NOT BETWEEN 48 AND 57      -- not a digit
     AND op_char NOT BETWEEN 65 AND 90  -- not an uppercase letter
     AND op_char NOT BETWEEN 97 AND 122 -- not a lowercase letter
  THEN
    l_found := 1;
  END IF;
END LOOP;
IF l_found = 1
THEN
  alert_return := ovpkcs.fn_dispmsg('CG-MNT18', ';', ';');
  RAISE form_trigger_failure;
END IF;
*********************************
Wednesday, 26 September 2012
Richer Associations
5.1 Using Lazy Associations
First rich, then lazy? I suppose that could be a plausible story about someone, as long as it happened in that order. But this really is an object relational mapping topic of some importance. As your data model grows, adding associations between objects and tables, your program gains power, which is great. But you often end up with a large fraction of your objects somehow linked to each other. So what happens when you load one of the objects that is part of a huge interrelated cluster? Since, as you've seen, you can move from one object to its associated objects just by traversing properties, it seems you'd have to load all the associated objects when you load any of them. For small databases this is fine, but in general your database can't hold a lot more than the memory available to your program. Uh oh! And even if it does all fit, rarely will you actually access most of those objects, so it's a waste to load them all.
Luckily, this problem was anticipated by the designers of object/relational mapping software, including Hibernate. The trick is to configure some associations to be 'lazy,' so that associated objects aren't loaded until they're actually referenced. Hibernate will instead make a note of the linked object's identity and put off loading it until you actually try to access it. This is often done for collections like those we've been using.
5.1.1 How do I do that?
With collections, all you need to do is set the lazy attribute in the mapping declaration. For example, our track artists mapping could look like Example 5-1.
Example 5-1. Lazily initializing the track artist associations
<set name="artists" table="TRACK_ARTISTS" lazy="true">
<key column="TRACK"/>
<many-to-many class="com.oreilly.hh.Artist" column="ARTIST"/>
</set>
This would tell Hibernate to use a special lazy implementation of Set that doesn't load its contents from the database until you actually try to use them. This is done completely transparently, so you don't even notice it's taking place in your code.
Well, if it's that simple, and avoids problems with loading giant snarls of interrelated objects, why not do it all the time? The problem is that the transparency breaks down once you've closed your Hibernate session. At that point, if you try to access content from a lazy collection that hasn't been initialized (even if you've assigned the collection to a different variable, or returned it from a method call), the Hibernate-provided proxy collection can no longer access the database to perform the deferred loading of its contents, and it is forced to throw a LazyInitializationException.
NOTE
Conservation of complexity seems almost like a law of thermodynamics.
Because this can lead to unexpected crashes far away from the Hibernate-specific code, lazy initialization is turned off by default. It's your responsibility to think carefully about situations in which you need to use it, and ensure that you are doing so safely. The Hibernate reference manual goes into a bit of detail about strategies to consider.
5.1.2 What about...
...Laziness outside of collections? Caching and clustering?
It's easy to see how lazy collections can be supported, since Hibernate can provide its own special implementations of the various Collection interfaces. But what about other kinds of associations? They might benefit from on-demand loading as well.
In fact, Hibernate does support this, and almost as easily (at least from our perspective as users of its services). The way you set this up is by marking an entire persistent class as lazy="true" (this attribute goes right in the class tag of the mapping document). When you do this, Hibernate will generate a proxy class that extends (and poses as) your data class. This lazy proxy puts off actually loading the data until it is needed. Any other objects with associations to the lazy class will sneakily be given these proxy objects, rather than references to your actual data object. The first time any of the methods of the proxy object are used, it will load the real data object and delegate the method call to it. Once the data object is loaded, the proxy simply continues delegating all method calls to it.
If you want to get fancier, you can specify a specific class (or interface) to be extended (or implemented) by the proxy class, using the proxy attribute. The lazy attribute is shorthand for specifying the persistent class itself as the type to be proxied. (If this is all incomprehensible, don't worry, that just means you don't yet need this capability. By the time you do, you'll understand it!)
Naturally, the same caveats about taking care to load anything you'll need to use before closing the session apply to this kind of lazy initialization too. If you need it, you can use it, but do so with care and planning.
The Hibernate reference documentation discusses these considerations in more depth in its chapter 'Improving Performance.' Also introduced there is the fact that Hibernate can be integrated with JVM-level or even clustered object caches to boost the performance of large, distributed applications, by reducing the bottleneck of database access. When plugged in to such a cache, the mapping document lets you configure the cache behavior of classes and associations using (appropriately enough) cache tags. These configurations go beyond what we cover in this notebook, but you should be aware that they're possible in case your application would benefit from them.
NOTE
Oh, right, that's what we were going to try...
5.2 Ordered Collections
Our first goal is to store the tracks that make up an album, keeping them in the right order. Later we'll add information like the disc on which a track is found, and its position on that disc, so we can gracefully handle multi-disc albums.
5.2.1 How do I do that?
The task of keeping a collection in a particular order is actually straightforward. If that's all we cared about in organizing album tracks, we'd need only tell Hibernate to map a List or array. In our Album mapping we'd use something like Example 5-2.
Example 5-2. Simple ordered mapping of tracks for an album
<list name="tracks" table="ALBUM_TRACKS">
<key column="ALBUM_ID"/>
<index column="POSITION"/>
<many-to-many class="com.oreilly.hh.Track" column="TRACK_ID"/>
</list>
This is very much like the set mappings we've used so far (although it uses a different tag to indicate it's an ordered list and therefore maps to a java.util.List). But notice that we also need to add an index tag to establish the ordering of the list, and we need to add a column to hold the value controlling the ordering in the database. Hibernate will manage the contents of this column for us, and use it to ensure that when we get the list out of the database in the future, its contents will be in the same order in which we stored them. The column is created as an integer, and if possible, it is used as part of a composite key for the table. The mapping in Example 5-2, when used to generate an HSQLDB database schema, produces the table shown in Example 5-3.
Example 5-3. Our simple track list realized as an HSQLDB schema
[schemaexport] create table ALBUM_TRACKS (
[schemaexport] ALBUM_ID INTEGER not null,
[schemaexport] TRACK_ID INTEGER not null,
[schemaexport] POSITION INTEGER not null,
[schemaexport] primary key (ALBUM_ID, POSITION)
[schemaexport] )
It's important to understand why the POSITION column is necessary. We need to control the order in which tracks appear in an album, and there aren't any properties of the tracks themselves we can use to keep them sorted in the right order. (Imagine how annoyed you'd be if your jukebox system could only play the tracks of an album in, say, alphabetical order, regardless of the intent of the artists who created it!) The fundamental nature of relational database systems is that you get results in whatever order the system finds convenient, unless you tell it how to sort them. The POSITION column gives Hibernate a value under its control that can be used to ensure that our list is always sorted in the order in which we created it. Another way to think about this is that the order of the entries is one of the independent pieces of information we want to keep track of, so Hibernate needs a place to store it.
The corollary is also important. If there are values in your data that provide a natural order for traversal, there is no need for you to provide an index column; you don't even have to use a list. The set and map collection mappings can be configured to be sorted in Java by providing a sort attribute, or within the database itself by providing a SQL order-by attribute.[5.1] In either case, when you iterate over the contents of the collection, you'll get them in the specified order.
[5.1] The order-by attribute and SQL sorting of collections is only available if you're using Version 1.4 or later of the Java SDK, since it relies on the LinkedHashSet or LinkedHashMap classes introduced in that release.
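As an illustration (a hypothetical variation on our comments mapping, not a change we're making in this chapter), either of the following alternatives would yield the comments in alphabetical order without any index column:

```xml
<!-- Alternative 1: sorted in Java as the collection is loaded,
     using the elements' natural ordering -->
<set name="comments" table="ALBUM_COMMENTS" sort="natural">
  <key column="ALBUM_ID"/>
  <element column="COMMENT" type="string"/>
</set>

<!-- Alternative 2: sorted by the database itself via an ORDER BY clause
     (requires the Java 1.4 LinkedHashSet support noted above) -->
<set name="comments" table="ALBUM_COMMENTS" order-by="COMMENT asc">
  <key column="ALBUM_ID"/>
  <element column="COMMENT" type="string"/>
</set>
```

You'd pick one or the other, of course; a single mapping document can't contain two properties with the same name.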
The values in the POSITION column will always be the same values you'd use as an argument to the tracks.get() method in order to obtain the value at a particular position in the tracks list.
5.3 Augmenting Associations in Collections
All right, we've got a handle on what we need to do if we want our albums' tracks to be kept in the right order. What about the additional information we'd like to keep, such as the disc on which the track is found? When we map a collection of associations, we've seen that Hibernate creates a join table in which to store the relationships between objects. And we've just seen how to add an index column to the ALBUM_TRACKS table to maintain an ordering for the collection. Ideally, we'd like the ability to augment that table with more information of our own choosing, in order to record the other details we'd like to know about album tracks.
As it turns out, we can do just that, and in a very straightforward way.
5.3.1 How do I do that?
Up until this point we've seen two ways of getting tables into our database schema. The first was by explicitly mapping properties of a Java object onto columns of a table. The second was defining a collection of associations, and specifying the table and columns used to manage that collection. As it turns out, there's nothing that prevents us from using a single table in both ways. Some of its columns can be used directly to map to our own objects' properties, while the others can manage the mapping of a collection. This lets us achieve our goals of recording the tracks that make up an album in an ordered way, augmented by additional details to support multi-disc albums.
NOTE
This flexibility took a little getting used to, but it makes sense, especially if you think about mapping objects to an existing database schema.
We'll want a new data object, AlbumTrack, to contain information about how a track is used on an album. Since we've already seen several examples of how to map full-blown entities with independent existence, and there really isn't a need for our AlbumTrack object to exist outside the context of an Album entity, this is a good opportunity to look at mapping a component. Recall that in Hibernate jargon an entity is an object that stands on its own in the persistence mechanism: it can be created, queried, and deleted independently of any other objects, and therefore has its own persistent identity (as reflected by its mandatory id property). A component, in contrast, is an object that can be saved to and retrieved from the database, but only as a subordinate part of some other entity. In this case, we'll define a list of AlbumTrack objects as a component part of our Album entity. Example 5-4 shows a mapping for the Album class that achieves this.
Example 5-4. Album.hbm.xml, the mapping definition for an Album
1 <?xml version="1.0"?>
2 <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 2.0//EN"
3 "http://hibernate.sourceforge.net/hibernate-mapping-2.0.dtd">
4
5 <hibernate-mapping>
6 <class name="com.oreilly.hh.Album" table="ALBUM">
7 <meta attribute="class-description">
8 Represents an album in the music database, an organized list of tracks.
9 @author Jim Elliott (with help from Hibernate)
10 </meta>
11
12 <id name="id" type="int" column="ALBUM_ID">
13 <meta attribute="scope-set">protected</meta>
14 <generator class="native"/>
15 </id>
16
17 <property name="title" type="string">
18 <meta attribute="use-in-tostring">true</meta>
19 <column name="TITLE" not-null="true" index="ALBUM_TITLE"/>
20 </property>
21
22 <property name="numDiscs" type="integer"/>
23
24 <set name="artists" table="ALBUM_ARTISTS">
25 <key column="ALBUM_ID"/>
26 <many-to-many class="com.oreilly.hh.Artist" column="ARTIST_ID"/>
27 </set>
28
29 <set name="comments" table="ALBUM_COMMENTS">
30 <key column="ALBUM_ID"/>
31 <element column="COMMENT" type="string"/>
32 </set>
33
34 <list name="tracks" table="ALBUM_TRACKS">
35 <meta attribute="use-in-tostring">true</meta>
36 <key column="ALBUM_ID"/>
37 <index column="POSITION"/>
38 <composite-element class="com.oreilly.hh.AlbumTrack">
39 <many-to-one name="track" class="com.oreilly.hh.Track">
40 <meta attribute="use-in-tostring">true</meta>
41 <column name="TRACK_ID"/>
42 </many-to-one>
43 <property name="disc" type="integer"/>
44 <property name="positionOnDisc" type="integer"/>
45 </composite-element>
46 </list>
47
48 <property name="added" type="date">
49 <meta attribute="field-description">When the album was created</meta>
50 </property>
51
52 </class>
53 </hibernate-mapping>
A lot of this is similar to mappings we've seen before, but the tracks list (starting on line 34) is worth some careful examination. The discussion gets involved, so let's step back a minute and recall exactly what we're trying to accomplish.
We want our album to keep an ordered list of the tracks that make it up, along with additional information about each track that tells which disc it's on (in case the album has multiple discs) and the track's position within the disc. This conceptual relationship is shown in the middle of Figure 5-1. The association between albums and tracks is mediated by an 'Album Tracks' object that adds disc and position information, as well as keeping them in the right order. The model of the tracks themselves is familiar (we're leaving out artist and comment information in this diagram, in an effort to keep it simpler). This model is what we've captured in the album mapping document, Example 5-4. Let's examine the details of how it was done. Later we'll look at how Hibernate turns this specification into Java code (the bottom part of Figure 5-1) and a database schema (the top part).
Figure 5-1. Models of the tables, concepts, and objects involved in representing album tracks
If you compare lines 34-46 of Example 5-4 with one of the set mappings in the preceding chapter, you'll see a lot of similarity. It looks even more like Example 5-2, except that the association mapping has been moved inside a new composite-element mapping, lines 38-45. This element introduces the new AlbumTrack object we use to group the disc, position, and Track link needed to organize an album's tracks. Also, rather than being a many-to-many mapping (because an album generally has multiple tracks, and a given track file might be shared between several albums), the association between AlbumTrack and Track on line 39 is many-to-one: several AlbumTrack objects (from different albums) might refer to the same Track file if we're trying to save disk space, but each AlbumTrack object is concerned with only one Track. The list tag that contains AlbumTrack is implicitly one-to-many. (If you're still having trouble with these data modeling concepts, don't struggle too hard just now—the source code and schema coming up shortly will hopefully help you see what is happening here.)
Okay, back to this new composite-element definition. It specifies that we want to use a new AlbumTrack class as the values that appear in our Album data bean's tracks list. The body of the composite-element tag defines the properties of AlbumTrack, which group all the information we need about a track on an album. The syntax for these nested properties, lines 39-44, is no different than that of the outer mappings for Album's own properties. They can even include their own nested composite elements, collections, or (as seen here) meta attributes. This gives us tremendous flexibility to set up fine-grained mappings that retain a healthy degree of object-oriented encapsulation.
In our composite AlbumTrack mapping, we are recording an association with the actual Track (lines 39-42) to be played at each position within the Album, as well as the disc on which that track is found (line 43), and, on line 44, this entry's position on that disc (for example, track 3 of disc 2). This achieves the goals we started with and illustrates how arbitrary information can be attached to a collection of associations. The source for the class itself can be found in Example 5-5, and it might help clarify this discussion. Compare this source code with its graphical representation at the bottom of Figure 5-1.
You may have noticed that I chose an explicit column name of TRACK_ID to use for the many-to-one link to the TRACK table (line 41). I've actually been doing this in a number of places, but previously it didn't require an entire separate line. It's worth talking about the reasoning behind this choice. Without this instruction, Hibernate will just use the property name (track) for the column name. You can use any names you want for your columns, but Java Database Best Practices encourages naming foreign key columns the same as the primary keys in the original tables to which they refer. This helps data modeling tools recognize and display the 'natural joins' the foreign keys represent, which makes it easier for people to understand and work with the data. This consideration is also why I included the table names as part of the primary keys' column names.
5.3.2 What just happened?
I was all set to explain that by choosing to use a composite element to encapsulate our augmented track list, we'd have to write the Java source for AlbumTrack ourselves. I was sure this went far beyond the capabilities of the code generation tool. Much to my delight, when I tried ant codegen to see what sort of errors would result, the command reported success, and both Album.java and AlbumTrack.java appeared in the source directory!
NOTE
Sometimes it's nice to be proved wrong.
It was at this point that I went back and added the use-in-tostring meta attribute for the track many-to-one mapping inside the component. I wasn't sure this would work either, because the only examples of its use I've found in the reference manual are attached to actual property tags. But work it did, exactly as I hoped.
The Hibernate best practices encourage using fine-grained classes and mapping them as components. Given how easily the code generation tool allows you to create them from your mapping documents, there is absolutely no excuse for ignoring this advice. Example 5-5 shows the source generated for our nested composite mapping.
Example 5-5. Code generated for AlbumTrack.java
package com.oreilly.hh;
import java.io.Serializable;
import org.apache.commons.lang.builder.ToStringBuilder;
/**
* Represents an album in the music database, an organized list of tracks.
* @author Jim Elliott (with help from Hibernate)
*
*/
public class AlbumTrack implements Serializable {
/** nullable persistent field */
private int disc;
/** nullable persistent field */
private int positionOnDisc;
/** nullable persistent field */
private com.oreilly.hh.Track track;
/** full constructor */
public AlbumTrack(int disc, int positionOnDisc, com.oreilly.hh.Track track) {
this.disc = disc;
this.positionOnDisc = positionOnDisc;
this.track = track;
}
/** default constructor */
public AlbumTrack() {
}
public int getDisc() {
return this.disc;
}
public void setDisc(int disc) {
this.disc = disc;
}
public int getPositionOnDisc() {
return this.positionOnDisc;
}
public void setPositionOnDisc(int positionOnDisc) {
this.positionOnDisc = positionOnDisc;
}
public com.oreilly.hh.Track getTrack() {
return this.track;
}
public void setTrack(com.oreilly.hh.Track track) {
this.track = track;
}
public String toString() {
return new ToStringBuilder(this)
.append("track", getTrack())
.toString();
}
}
This looks similar to the generated code for entities we've seen in previous chapters, but it lacks an id property, which makes sense. Component classes don't need identifier fields, and they need not implement any special interfaces. The class JavaDoc is shared with the Album class, in which this component is used. The source of the Album class itself is a typical generated entity, so there's no need to reproduce it here.
At this point we can build the schema for these new mappings, via ant schema. Example 5-6 shows highlights of the resulting schema creation process. This is the concrete HSQLDB representation of the schema modeled at the top of Figure 5-1.
Example 5-6. Additions to the schema caused by our new Album mapping
[schemaexport] create table ALBUM (
[schemaexport] ALBUM_ID INTEGER NOT NULL IDENTITY,
[schemaexport] TITLE VARCHAR(255) not null,
[schemaexport] numDiscs INTEGER,
[schemaexport] added DATE
[schemaexport] )
...
[schemaexport] create table ALBUM_COMMENTS (
[schemaexport] ALBUM_ID INTEGER not null,
[schemaexport] COMMENT VARCHAR(255)
[schemaexport] )
...
[schemaexport] create table ALBUM_ARTISTS (
[schemaexport] ALBUM_ID INTEGER not null,
[schemaexport] ARTIST_ID INTEGER not null,
[schemaexport] primary key (ALBUM_ID, ARTIST_ID)
[schemaexport] )
...
[schemaexport] create table ALBUM_TRACKS (
[schemaexport] ALBUM_ID INTEGER not null,
[schemaexport] TRACK_ID INTEGER,
[schemaexport] disc INTEGER,
[schemaexport] positionOnDisc INTEGER,
[schemaexport] POSITION INTEGER not null,
[schemaexport] primary key (ALBUM_ID, POSITION)
[schemaexport] )
...
[schemaexport] create index ALBUM_TITLE on ALBUM (title)
...
[schemaexport] alter table ALBUM_COMMENTS add constraint FK1E2C21E43B7864F
foreign key (ALBUM_ID) references ALBUM
...
[schemaexport] alter table ALBUM_ARTISTS add constraint FK7BA403FC3B7864F foreign
key (ALBUM_ID) references ALBUM
...
[schemaexport] alter table ALBUM_TRACKS add constraint FKD1CBBC783B7864F foreign
key (ALBUM_ID) references ALBUM
...
[schemaexport] alter table ALBUM_TRACKS add constraint FKD1CBBC78697F14B foreign
key (TRACK_ID) references TRACK
You may find that making radical changes to the schema causes problems for Hibernate or the HSQLDB driver. When I switched between the above two approaches for mapping album tracks, I ran into trouble because the first set of mappings established database constraints that Hibernate didn't know to drop before trying to build the revised schema. This prevented it from dropping and recreating some tables. If this ever happens to you, you can delete the database file (music.script in the data directory) and start from scratch, which should work fine.
Figure 5-2 shows our enriched schema in HSQLDB's graphical management interface.
Figure 5-2. The schema with album-related tables
You might wonder why we use the separate Track class at all, rather than simply embedding all that information directly in our enhanced AlbumTracks collection. The simple answer is that not all tracks are part of an album—some might be singles, downloads, or otherwise independent. Given that we need a separate table to keep track of these anyway, it would be a poor design choice to duplicate its contents in the AlbumTracks table rather than associating with it. There is also a more subtle advantage to this approach, which is actually used in my own music database: this structure allows us to share a single track file between multiple albums. If the same song appears on an album, a 'best of' collection, and one or more period collections or sound tracks, linking all these albums to the same track file saves disk space.
Let's look at some sample code showing how to use these new data objects. Example 5-7 shows a class that creates an album record and its list of tracks, then prints it out to test the debugging support we've configured for the toString() method.
Example 5-7. Source of AlbumTest.java
1 package com.oreilly.hh;
2
3 import net.sf.hibernate.*;
4 import net.sf.hibernate.cfg.Configuration;
5
6 import java.sql.Time;
7 import java.util.*;
8
9 /**
10 * Create sample album data, letting Hibernate persist it for us.
11 */
12 public class AlbumTest {
13
14 /**
15 * Quick and dirty helper method to handle repetitive portion of creating
16 * album tracks. A real implementation would have much more flexibility.
17 */
18 private static void addAlbumTrack(Album album, String title, String file,
19 Time length, Artist artist, int disc,
20 int positionOnDisc, Session session)
21 throws HibernateException
22 {
23 Track track = new Track(title, file, length, new Date(), (short)0,
24 new HashSet(), new HashSet());
25 track.getArtists().add(artist);
26 session.save(track);
27 album.getTracks().add(new AlbumTrack(disc, positionOnDisc, track));
28 }
29
30 public static void main(String args[]) throws Exception {
31 // Create a configuration based on the properties file we've put
32 // in the standard place.
33 Configuration config = new Configuration();
34
35 // Tell it about the classes we want mapped.
36 config.addClass(Track.class).addClass(Artist.class);
37 config.addClass(Album.class);
38
39 // Get the session factory we can use for persistence
40 SessionFactory sessionFactory = config.buildSessionFactory();
41
42 // Ask for a session using the JDBC information we've configured
43 Session session = sessionFactory.openSession();
44 Transaction tx = null;
45 try {
46 // Create some data and persist it
47 tx = session.beginTransaction();
48
49 Artist artist = CreateTest.getArtist("Martin L. Gore", true,
50 session);
51 List albumTracks = new ArrayList(5);
52 Album album = new Album("Counterfeit e.p.", 1, new Date(),
53 albumTracks, new HashSet(), new HashSet());
54 album.getArtists().add(artist);
55 session.save(album);
56
57 addAlbumTrack(album, "Compulsion", "vol1/album83/track01.mp3",
58 Time.valueOf("00:05:29"), artist, 1, 1, session);
59 addAlbumTrack(album, "In a Manner of Speaking",
60 "vol1/album83/track02.mp3", Time.valueOf("00:04:21"),
61 artist, 1, 2, session);
62 addAlbumTrack(album, "Smile in the Crowd",
63 "vol1/album83/track03.mp3", Time.valueOf("00:05:06"),
64 artist, 1, 3, session);
65 addAlbumTrack(album, "Gone", "vol1/album83/track04.mp3",
66 Time.valueOf("00:03:32"), artist, 1, 4, session);
67 addAlbumTrack(album, "Never Turn Your Back on Mother Earth",
68 "vol1/album83/track05.mp3", Time.valueOf("00:03:07"),
69 artist, 1, 5, session);
70 addAlbumTrack(album, "Motherless Child", "vol1/album83/track06.mp3",
71 Time.valueOf("00:03:32"), artist, 1, 6, session);
72
73 System.out.println(album);
74
75 // We're done; make our changes permanent
76 tx.commit();
77
78 } catch (Exception e) {
79 if (tx != null) {
80 // Something went wrong; discard all partial changes
81 tx.rollback();
82 }
83 throw e;
84 } finally {
85 // No matter what, close the session
86 session.close();
87 }
88
89 // Clean up after ourselves
90 sessionFactory.close();
91 }
92 }
The addAlbumTrack() method starting on line 14 creates and persists a Track object given the specified parameters, associates it with a single Artist (line 25), then adds it to the supplied Album, recording the disc it's on and its position within that disc (line 27). In this simple example we're creating an album with just one disc. This quick-and-dirty method can't cope with many variations, but it does allow the example to be compressed nicely.
We also need a new target at the end of build.xml to invoke the class. Add the lines of Example 5-8 at the end of the file (but inside the project tag, of course).
Example 5-8. New target to run our album test class
<target name="atest" description="Creates and persists some album data"
depends="compile">
<java classname="com.oreilly.hh.AlbumTest" fork="true">
<classpath refid="project.class.path"/>
</java>
</target>
With this in place, assuming you've generated the schema, run ant ctest followed by ant atest. (Running ctest first is optional, but having some extra data in there to begin with makes the album data somewhat more interesting. Recall that you can run these targets in one command as ant ctest atest, and if you want to start by erasing the contents of the database first, you can invoke ant schema ctest atest.) The debugging output produced by this command is shown in Example 5-9. Although admittedly cryptic, you should be able to see that the album and tracks have been created, and the order of the tracks has been maintained.
Example 5-9. Output from running the album test
atest:
[java] com.oreilly.hh.Album@863cc1[id=0,title=Counterfeit e.p.,tracks=[com.
oreilly.hh.AlbumTrack@b3cc96[track=com.oreilly.hh.
Track@fea539[id=7,title=Compulsion]], com.oreilly.hh.AlbumTrack@3ca972[track=com.
oreilly.hh.Track@f2e328[id=8,title=In a Manner of Speaking]], com.oreilly.hh.
AlbumTrack@98a1f4[track=com.oreilly.hh.Track@1f6c18[id=9,title=Smile in the
Crowd]], com.oreilly.hh.AlbumTrack@b0d990[track=com.oreilly.hh.
Track@f1cdfb[id=10,title=Gone]], com.oreilly.hh.AlbumTrack@9baf0b[track=com.
oreilly.hh.Track@a59d2[id=11,title=Never Turn Your Back on Mother Earth]], com.
oreilly.hh.AlbumTrack@10c69[track=com.oreilly.hh.
Track@8f1ed7[id=12,title=Motherless Child]]]]
If we run our old query test, we can see both the old and new data, as in Example 5-10.
Example 5-10. All tracks are less than seven minutes long, whether from albums or otherwise
% ant qtest
Buildfile: build.xml
...
qtest:
[java] Track: "Russian Trance" (PPK) 00:03:30
[java] Track: "Video Killed the Radio Star" (The Buggles) 00:03:49
[java] Track: "Gravity's Angel" (Laurie Anderson) 00:06:06
[java] Track: "Adagio for Strings (Ferry Corsten Remix)" (Ferry Corsten,
William Orbit, Samuel Barber) 00:06:35
[java] Track: "Test Tone 1" 00:00:10
[java] Comment: Pink noise to test equalization
[java] Track: "Compulsion" (Martin L. Gore) 00:05:29
[java] Track: "In a Manner of Speaking" (Martin L. Gore) 00:04:21
[java] Track: "Smile in the Crowd" (Martin L. Gore) 00:05:06
[java] Track: "Gone" (Martin L. Gore) 00:03:32
[java] Track: "Never Turn Your Back on Mother Earth" (Martin L. Gore) 00:03:07
[java] Track: "Motherless Child" (Martin L. Gore) 00:03:32
BUILD SUCCESSFUL
Total time: 12 seconds
5.4 Lifecycle Associations
Hibernate is completely responsible for managing the ALBUM_TRACKS table, adding and deleting rows (and, if necessary, renumbering POSITION values) as entries are added to or removed from Album beans' tracks properties. You can test this by writing a test program to delete the second track from our test album and see the result. A very quick and dirty way to do this would be to add the following four lines (see Example 5-11) right after the existing tx.commit() line in Example 5-7 and then run ant schema ctest atest db .
Example 5-11. Deleting our album's second track
tx = session.beginTransaction();
album.getTracks().remove(1);
session.update(album);
tx.commit();
Doing so changes the contents of ALBUM_TRACKS as shown in Figure 5-4 (compare this with the original contents in Figure 5-3). The second record has been removed (remember that Java list elements are indexed starting with zero), and POSITION has been adjusted so that it retains its consecutive nature, corresponding to the indices of the list elements (the values you'd use when calling tracks.get()).
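The renumbering mirrors ordinary java.util.List semantics, which you can see without any database involved at all. This is a plain-Java sketch of the index shifting, not Hibernate code:

```java
import java.util.ArrayList;
import java.util.List;

public class RemoveDemo {
    public static void main(String[] args) {
        List<String> tracks = new ArrayList<>();
        tracks.add("Compulsion");               // index 0
        tracks.add("In a Manner of Speaking");  // index 1
        tracks.add("Smile in the Crowd");       // index 2

        // Removing index 1 drops the second element...
        tracks.remove(1);

        // ...and the later elements shift down to stay consecutive,
        // just as Hibernate renumbers the POSITION column.
        System.out.println(tracks.get(1));  // prints "Smile in the Crowd"
    }
}
```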
Figure 5-4. Album track associations after deleting our album's second track
This happens because Hibernate understands that this list is 'owned' by the Album record, and that the 'lifecycles' of the two objects are intimately connected. This notion of lifecycle becomes more clear if you consider what happens if the entire Album is deleted: all of the associated records in ALBUM_TRACKS will be deleted as well. (Go ahead and modify the test program to try this if you're not convinced.)
Contrast this with the relationship between the ALBUM table and the TRACK table. Tracks are sometimes associated with albums, but they are sometimes independent. Removing a track from the list got rid of a row in ALBUM_TRACKS, eliminating the link between the album and track, but didn't get rid of the row in TRACK, so it didn't delete the persistent Track object itself. Similarly, deleting the Album would eliminate all the associations in the collection, but none of the actual Tracks. It's the responsibility of our code to take care of that when appropriate (probably after consulting the user, in case any of the track records might be shared across multiple albums, as discussed above).
If we don't need the flexibility of sharing the same track between albums—disk space is pretty cheap lately given the size of compressed audio—we can let Hibernate manage the TRACK records for the album in the same way it does the ALBUM_TRACKS collection. It won't assume it should do this, because Track and Album objects can exist independently, but we can establish a lifecycle relationship between them in the album mapping document.
NOTE
By now you're probably not surprised there's a way to automate this.
5.4.1 How do I do that?
Example 5-12 shows (in bold) the changes we'd make to the tracks property mapping in Album.hbm.xml.
Example 5-12. Establishing a lifecycle relationship between an album and its tracks
<list name="tracks" table="ALBUM_TRACKS" cascade="all">
<meta attribute="use-in-tostring">true</meta>
<key column="ALBUM_ID"/>
<index column="POSITION"/>
<composite-element class="com.oreilly.hh.AlbumTrack">
<many-to-one name="track" class="com.oreilly.hh.Track" cascade="all">
<meta attribute="use-in-tostring">true</meta>
<column name="TRACK_ID"/>
</many-to-one>
<property name="disc" type="integer"/>
<property name="positionOnDisc" type="integer"/>
</composite-element>
</list>
The cascade attribute tells Hibernate that you want operations performed on a 'parent' object to be transitively applied to its 'child' or 'dependent' objects. It's applicable to all forms of collections and associations. There are several possible values to choose among. The most common are none (the default), save-update, delete, and all (which combines save-update and delete). You can also change the default from none to save-update throughout your entire mapping document by supplying a default-cascade attribute in the hibernate-mapping tag itself.
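For example (a hypothetical fragment, not a change we're making to our mappings here), switching the document-wide default to save-update is a single attribute on the root tag:

```xml
<!-- Every collection and association mapped in this document now
     defaults to cascade="save-update", unless a particular mapping
     overrides it with its own cascade attribute. -->
<hibernate-mapping default-cascade="save-update">
  <class name="com.example.Playlist" table="PLAYLIST">
    ...
  </class>
</hibernate-mapping>
```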
In our example, we want the tracks owned by an album to be automatically managed by the album, so that when we delete the album, its tracks are deleted. Note that we need to apply the cascade attribute both to the tracks collection and its constituent track element to achieve this. Also, by using a cascade value of all, we eliminate the need to explicitly save any Track objects we create for the album—the addAlbumTrack() method of Example 5-7 no longer needs the line:
session.save(track);
By telling Hibernate that it's fully responsible for the relationship between an album and its tracks, we enable it to persist tracks when they're added to the album as well as delete them when the album itself is deleted.
Delegating this sort of bookkeeping to the mapping layer can be very convenient, freeing you to focus on more abstract and important tasks, so it is worth using when appropriate. It's reminiscent of the liberation provided by Java's pervasive garbage collection, but it can't be as comprehensive because there is no definitive way to know when you're finished with persistent data by performing reachability analysis; you need to indicate it by calling delete() and establishing lifecycle connections. The trade-off between flexibility and simple automation is yours to make, based on the nature of your data and the needs of your project.
Hibernate's management of lifecycle relationships is not foolproof—or perhaps it's more accurate to say it's not all-encompassing. For example, if you use Collections methods to remove a Track from an Album's tracks property, this breaks the link between the Album and Track but does not actually delete the Track record. Even if you later delete the entire Album, this Track will remain, because it wasn't linked to the Album at the time that it was deleted. Try some of these experiments by modifying AlbumTest.java appropriately and look at the resulting data in the tables!
5.5 Reflexive Associations
It's also possible for objects and tables to have associations back to themselves. This supports persistent recursive data structures like trees, in which nodes link to other nodes. Tracing through a database table storing such relationships using a SQL query interface is a major chore. Luckily, once it's mapped to Java objects, the process is much more readable and natural.
One way we might use a reflexive link in our music database is to allow alternate names for artists. This is useful more often than you might expect, because it makes it very easy to let the user find either 'The Smiths' or 'Smiths, The' depending on how they're thinking of the group, with little code, and in a language-independent way.
NOTE
I mean human language here, English versus Spanish or something else. Put the links in the data rather than trying to write tricky code to guess when an artist name should be permuted.
5.5.1 How do I do that?
All that's needed is to add another field to the Artist mapping in Artist.hbm.xml, establishing a link back to Artist. Example 5-13 shows one option.
Example 5-13. Supporting a reflexive association in the Artist class
<many-to-one name="actualArtist" class="com.oreilly.hh.Artist">
<meta attribute="use-in-tostring">true</meta>
</many-to-one>
This gives us an actualArtist property that we can set to the id of the 'definitive' Artist record when we're setting up an alternate name. For example, our 'The Smiths' record might have id 5, and its actualArtist field would be null since it is definitive. Then we can create an 'alias' Artist record with the name 'Smiths, The' at any time, and set the actualArtist field in that record to point to record 5.
This kind of reflexive link is one instance where a column containing a foreign key can't be named the same as the key column to which it is a link. We are associating a row in ARTIST with another row in ARTIST, and of course the table already has a column named ARTIST_ID.
Why is this association set up as many-to-one? There might be many alias records that point to one particular definitive Artist. So each nickname needs to store the id of the actual artist record for which it is an alternative name. This is, in the language of data modeling, a many-to-one relationship.
Code that looks up artists just needs to check the actualArtist property before returning. If it's null, all is well. Otherwise it should return the record indicated by actualArtist. Example 5-14 shows how we could extend the getArtist() method in CreateTest to support this new feature (additions are in bold). Notice that the Artist constructor gets a new argument for setting actualArtist.
Example 5-14. Artist lookup method supporting resolution of alternate names
public static Artist getArtist(String name, boolean create,
Session session)
throws HibernateException
{
Query query = session.getNamedQuery(
"com.oreilly.hh.artistByName");
query.setString("name", name);
Artist found = (Artist)query.uniqueResult();
if (found == null && create) {
found = new Artist(name, null, new HashSet());
session.save(found);
}
if (found != null && found.getActualArtist() != null) {
return found.getActualArtist();
}
return found;
}
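The resolution logic itself can be modeled apart from Hibernate in a few lines of plain Java. This is only an illustrative sketch; the class and field names below are invented stand-ins for the mapped Artist entity, not the generated code:

```java
/** Minimal stand-in for the mapped Artist entity (names here are invented). */
class AliasArtist {
    final String name;
    final AliasArtist actualArtist; // null means this record is the definitive one

    AliasArtist(String name, AliasArtist actualArtist) {
        this.name = name;
        this.actualArtist = actualArtist;
    }

    /** Resolve an alias to the definitive record, as getArtist() does. */
    AliasArtist resolve() {
        return actualArtist != null ? actualArtist : this;
    }
}

public class AliasDemo {
    public static void main(String[] args) {
        AliasArtist definitive = new AliasArtist("The Smiths", null);
        // The alias record points back at the definitive one.
        AliasArtist alias = new AliasArtist("Smiths, The", definitive);
        System.out.println(alias.resolve().name); // prints "The Smiths"
    }
}
```

Whichever name the user looks up, resolve() always hands back the single definitive record, which is exactly why the alias rows can stay free of any duplicated artist data.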
Hopefully this chapter has given you a feel for the rich and powerful ways you can use associations and collections in Hibernate. As should be obvious from the way you can nest and combine these capabilities, there are far more variations than we can hope to cover in a book like this.
The good news is that Hibernate seems well equipped to handle almost any kind of relationship your application might need, and it can even do the drudge work of building the data classes and database schema for you. This works much more effectively and deeply than I ever expected it would when I started creating these examples.
5.1 Lazy Associations
First rich, then lazy? I suppose that could be a plausible story about someone, as long as it happened in that order. But this really is an object/relational mapping topic of some importance. As your data model grows, adding associations between objects and tables, your program gains power, which is great. But you often end up with a large fraction of your objects somehow linked to each other. So what happens when you load one of the objects that is part of a huge interrelated cluster? Since, as you've seen, you can move from one object to its associated objects just by traversing properties, it seems you'd have to load all the associated objects whenever you load any one of them. For small databases this is fine, but in general your database can hold a lot more than the memory available to your program. Uh oh! And even if it did all fit, you'd rarely access most of those objects, so loading them all would be a waste.
Luckily, this problem was anticipated by the designers of object/relational mapping software, including Hibernate. The trick is to configure some associations to be 'lazy,' so that associated objects aren't loaded until they're actually referenced. Hibernate will instead make a note of the linked object's identity and put off loading it until you actually try to access it. This is often done for collections like those we've been using.
5.1.1 How do I do that?
With collections, all you need to do is set the lazy attribute in the mapping declaration. For example, our track artists mapping could look like Example 5-1.
Example 5-1. Lazily initializing the track artist associations
<set name="artists" table="TRACK_ARTISTS" lazy="true">
<key column="TRACK"/>
<many-to-many class="com.oreilly.hh.Artist" column="ARTIST"/>
</set>
This would tell Hibernate to use a special lazy implementation of Set that doesn't load its contents from the database until you actually try to use them. This is done completely transparently, so you don't even notice it's taking place in your code.
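The idea behind such a lazy Set can be sketched in plain Java. This is an illustrative model only, with invented names; Hibernate's real implementation is far more involved (and tied to its session machinery):

```java
import java.util.Set;
import java.util.function.Supplier;

/**
 * Illustrative sketch of the lazy-collection idea, not Hibernate's actual
 * implementation: the wrapped contents aren't fetched until first use.
 */
class LazySet<E> {
    private final Supplier<Set<E>> loader; // stands in for the deferred database query
    private Set<E> contents;               // remains null until first access

    LazySet(Supplier<Set<E>> loader) {
        this.loader = loader;
    }

    /** True once the "database" has actually been hit. */
    boolean isInitialized() {
        return contents != null;
    }

    Set<E> get() {
        if (contents == null) {
            contents = loader.get(); // first access triggers the load
        }
        return contents;
    }
}
```

Nothing touches the loader until get() is called, so an object graph full of such collections costs almost nothing to materialize; only the parts you actually traverse get queried.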
Well, if it's that simple, and it avoids problems with loading giant snarls of interrelated objects, why not do it all the time? The problem is that the transparency breaks down once you've closed your Hibernate session. At that point, if you try to access the contents of a lazy collection that hasn't yet been initialized (even if you've assigned the collection to a different variable, or returned it from a method call), the Hibernate-provided proxy collection can no longer reach the database to perform its deferred loading, and it is forced to throw a LazyInitializationException.
NOTE
Conservation of complexity seems almost like a law of thermodynamics.
Because this can lead to unexpected crashes far away from the Hibernate-specific code, lazy initialization is turned off by default. It's your responsibility to think carefully about situations in which you need to use it, and ensure that you are doing so safely. The Hibernate reference manual goes into a bit of detail about strategies to consider.
5.1.2 What about...
...Laziness outside of collections? Caching and clustering?
It's easy to see how lazy collections can be supported, since Hibernate can provide its own special implementations of the various Collection interfaces. But what about other kinds of associations? They might benefit from on-demand loading as well.
In fact, Hibernate does support this, and almost as easily (at least from our perspective as users of its services). The way you set this up is by marking an entire persistent class as lazy="true" (this attribute goes right in the class tag of the mapping document). When you do this, Hibernate will generate a proxy class that extends (and poses as) your data class. This lazy proxy puts off actually loading the data until it is needed. Any other objects with associations to the lazy class will sneakily be given these proxy objects, rather than references to your actual data object. The first time any of the methods of the proxy object are used, it will load the real data object and delegate the method call to it. Once the data object is loaded, the proxy simply continues delegating all method calls to it.
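The delegation mechanism can be sketched with the JDK's built-in dynamic proxies. Note the caveats: Hibernate actually generates subclasses (using the CGLIB bytecode library) rather than interface proxies, and every name in this sketch is invented for illustration:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

/** A tiny interface standing in for a persistent class (names are invented). */
interface TrackData {
    String getTitle();
}

/** Illustrative handler: defers "loading" until the first method call. */
class LazyHandler implements InvocationHandler {
    private final int id;   // all the proxy knows up front is the identifier
    private TrackData real; // the real object, loaded on demand

    LazyHandler(int id) {
        this.id = id;
    }

    boolean isLoaded() {
        return real != null;
    }

    public Object invoke(Object proxy, Method method, Object[] args) throws Exception {
        if (real == null) {
            // Stands in for the deferred database fetch of the real instance.
            final int loadedId = id;
            real = new TrackData() {
                public String getTitle() {
                    return "Track #" + loadedId;
                }
            };
        }
        return method.invoke(real, args); // delegate every call to the real object
    }
}

public class LazyProxyDemo {
    public static void main(String[] args) {
        TrackData proxy = (TrackData) Proxy.newProxyInstance(
                TrackData.class.getClassLoader(),
                new Class<?>[] { TrackData.class },
                new LazyHandler(42));
        // No load has happened yet; this first call triggers it.
        System.out.println(proxy.getTitle()); // prints "Track #42"
    }
}
```

The key point the sketch captures is that the proxy is a perfectly usable reference from the moment it's created; the expensive work happens invisibly, inside the first method invocation.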
If you want to get fancier, you can specify a specific class (or interface) to be extended (or implemented) by the proxy class, using the proxy attribute. The lazy attribute is shorthand for specifying the persistent class itself as the type to be proxied. (If this is all incomprehensible, don't worry, that just means you don't yet need this capability. By the time you do, you'll understand it!)
Naturally, the same caveats about taking care to load anything you'll need to use before closing the session apply to this kind of lazy initialization too. If you need it, you can use it, but do so with care and planning.
The Hibernate reference documentation discusses these considerations in more depth in its chapter 'Improving Performance.' Also introduced there is the fact that Hibernate can be integrated with JVM-level or even clustered object caches to boost the performance of large, distributed applications, by reducing the bottleneck of database access. When plugged in to such a cache, the mapping document lets you configure the cache behavior of classes and associations using (appropriately enough) cache tags. These configurations go beyond what we cover in this notebook, but you should be aware that they're possible in case your application would benefit from them.
NOTE
Oh, right, that's what we were going to try...
5.2 Ordered Collections
Our first goal is to store the tracks that make up an album, keeping them in the right order. Later we'll add information like the disc on which a track is found, and its position on that disc, so we can gracefully handle multi-disc albums.
5.2.1 How do I do that?
The task of keeping a collection in a particular order is actually straightforward. If that's all we cared about in organizing album tracks, we'd need only tell Hibernate to map a List or array. In our Album mapping we'd use something like Example 5-2.
Example 5-2. Simple ordered mapping of tracks for an album
<list name="tracks" table="ALBUM_TRACKS">
<key column="ALBUM_ID"/>
<index column="POSITION"/>
<many-to-many class="com.oreilly.hh.Track" column="TRACK_ID"/>
</list>
This is very much like the set mappings we've used so far (although it uses a different tag to indicate it's an ordered list and therefore maps to a java.util.List). But notice that we also need to add an index tag to establish the ordering of the list, and we need to add a column to hold the value controlling the ordering in the database. Hibernate will manage the contents of this column for us, and use it to ensure that when we get the list out of the database in the future, its contents will be in the same order in which we stored them. The column is created as an integer, and if possible, it is used as part of a composite key for the table. The mapping in Example 5-2, when used to generate a HSQLDB database schema, produces the table shown in Example 5-3.
Example 5-3. Our simple track list realized as an HSQLDB schema
[schemaexport] create table ALBUM_TRACKS (
[schemaexport] ALBUM_ID INTEGER not null,
[schemaexport] TRACK_ID INTEGER not null,
[schemaexport] POSITION INTEGER not null,
[schemaexport] primary key (ALBUM_ID, POSITION)
[schemaexport] )
It's important to understand why the POSITION column is necessary. We need to control the order in which tracks appear in an album, and there aren't any properties of the tracks themselves we can use to keep them sorted in the right order. (Imagine how annoyed you'd be if your jukebox system could only play the tracks of an album in, say, alphabetical order, regardless of the intent of the artists who created it!) The fundamental nature of relational database systems is that you get results in whatever order the system finds convenient, unless you tell it how to sort them. The POSITION column gives Hibernate a value under its control that can be used to ensure that our list is always sorted in the order in which we created it. Another way to think about this is that the order of the entries is one of the independent pieces of information we want to keep track of, so Hibernate needs a place to store it.
The corollary is also important. If there are values in your data that provide a natural order for traversal, there is no need for you to provide an index column; you don't even have to use a list. The set and map collection mappings can be configured to be sorted in Java by providing a sort attribute, or within the database itself by providing a SQL order-by attribute.[5.1] In either case, when you iterate over the contents of the collection, you'll get them in the specified order.
[5.1] The order-by attribute and SQL sorting of collections is only available if you're using Version 1.4 or later of the Java SDK, since it relies on the LinkedHashSet or LinkedHashMap classes introduced in that release.
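For concreteness, here is what those two sorting options might look like as variations on our album comments mapping. These are hypothetical fragments for illustration, not mappings used elsewhere in the examples:

```xml
<!-- Sorted in Java, using the elements' natural ordering: -->
<set name="comments" table="ALBUM_COMMENTS" sort="natural">
  <key column="ALBUM_ID"/>
  <element column="COMMENT" type="string"/>
</set>

<!-- Or sorted by the database itself, via an SQL ORDER BY clause: -->
<set name="comments" table="ALBUM_COMMENTS" order-by="COMMENT asc">
  <key column="ALBUM_ID"/>
  <element column="COMMENT" type="string"/>
</set>
```

With sort, the comparison happens in the JVM as elements are added; with order-by, the SQL that loads the collection simply asks the database to deliver the rows pre-sorted.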
The values in the POSITION column will always be the same values you'd use as an argument to the tracks.get() method in order to obtain the value at a particular position in the tracks list.
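That correspondence, and the renumbering Hibernate performs on the POSITION column when entries are removed, can be pictured with an ordinary List. This is just an illustration of the indexing behavior, using track titles borrowed from Example 5-7:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PositionDemo {
    public static void main(String[] args) {
        // List indices play the role of the POSITION column values.
        List<String> tracks = new ArrayList<String>(Arrays.asList(
                "Compulsion", "In a Manner of Speaking", "Smile in the Crowd"));

        // tracks.get(1) corresponds to the row whose POSITION is 1.
        System.out.println(tracks.get(1)); // prints "In a Manner of Speaking"

        // Removing an entry closes the gap, just as Hibernate renumbers POSITION.
        tracks.remove(1);
        System.out.println(tracks.get(1)); // prints "Smile in the Crowd"
    }
}
```

Because the mapping keeps the column in lockstep with the list indices, the database never ends up with a hole in the numbering after a deletion.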
5.3 Augmenting Associations in Collections
All right, we've got a handle on what we need to do if we want our albums' tracks to be kept in the right order. What about the additional information we'd like to keep, such as the disc on which the track is found? When we map a collection of associations, we've seen that Hibernate creates a join table in which to store the relationships between objects. And we've just seen how to add an index column to the ALBUM_TRACKS table to maintain an ordering for the collection. Ideally, we'd like the ability to augment that table with more information of our own choosing, in order to record the other details we'd like to know about album tracks.
As it turns out, we can do just that, and in a very straightforward way.
5.3.1 How do I do that?
Up until this point we've seen two ways of getting tables into our database schema. The first was by explicitly mapping properties of a Java object onto columns of a table. The second was defining a collection of associations, and specifying the table and columns used to manage that collection. As it turns out, there's nothing that prevents us from using a single table in both ways. Some of its columns can be used directly to map to our own objects' properties, while the others can manage the mapping of a collection. This lets us achieve our goals of recording the tracks that make up an album in an ordered way, augmented by additional details to support multi-disc albums.
NOTE
This flexibility took a little getting used to but it makes sense, especially if you think about mapping objects to an existing database schema.
We'll want a new data object, AlbumTrack, to contain information about how a track is used on an album. Since we've already seen several examples of how to map full-blown entities with independent existence, and there really isn't a need for our AlbumTrack object to exist outside the context of an Album entity, this is a good opportunity to look at mapping a component. Recall that in Hibernate jargon an entity is an object that stands on its own in the persistence mechanism: it can be created, queried, and deleted independently of any other objects, and therefore has its own persistent identity (as reflected by its mandatory id property). A component, in contrast, is an object that can be saved to and retrieved from the database, but only as a subordinate part of some other entity. In this case, we'll define a list of AlbumTrack objects as a component part of our Album entity. Example 5-4 shows a mapping for the Album class that achieves this.
Example 5-4. Album.hbm.xml, the mapping definition for an Album
1 <?xml version="1.0"?>
2 <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 2.0//EN"
3 "http://hibernate.sourceforge.net/hibernate-mapping-2.0.dtd">
4
5 <hibernate-mapping>
6 <class name="com.oreilly.hh.Album" table="ALBUM">
7 <meta attribute="class-description">
8 Represents an album in the music database, an organized list of tracks.
9 @author Jim Elliott (with help from Hibernate)
10 </meta>
11
12 <id name="id" type="int" column="ALBUM_ID">
13 <meta attribute="scope-set">protected</meta>
14 <generator class="native"/>
15 </id>
16
17 <property name="title" type="string">
18 <meta attribute="use-in-tostring">true</meta>
19 <column name="TITLE" not-null="true" index="ALBUM_TITLE"/>
20 </property>
21
22 <property name="numDiscs" type="integer"/>
23
24 <set name="artists" table="ALBUM_ARTISTS">
25 <key column="ALBUM_ID"/>
26 <many-to-many class="com.oreilly.hh.Artist" column="ARTIST_ID"/>
27 </set>
28
29 <set name="comments" table="ALBUM_COMMENTS">
30 <key column="ALBUM_ID"/>
31 <element column="COMMENT" type="string"/>
32 </set>
33
34 <list name="tracks" table="ALBUM_TRACKS">
35 <meta attribute="use-in-tostring">true</meta>
36 <key column="ALBUM_ID"/>
37 <index column="POSITION"/>
38 <composite-element class="com.oreilly.hh.AlbumTrack">
39 <many-to-one name="track" class="com.oreilly.hh.Track">
40 <meta attribute="use-in-tostring">true</meta>
41 <column name="TRACK_ID"/>
42 </many-to-one>
43 <property name="disc" type="integer"/>
44 <property name="positionOnDisc" type="integer"/>
45 </composite-element>
46 </list>
47
48 <property name="added" type="date">
49 <meta attribute="field-description">When the album was created</meta>
50 </property>
51
52 </class>
53 </hibernate-mapping>
A lot of this is similar to mappings we've seen before, but the tracks list (starting on line 34) is worth some careful examination. The discussion gets involved, so let's step back a minute and recall exactly what we're trying to accomplish.
We want our album to keep an ordered list of the tracks that make it up, along with additional information about each track that tells which disc it's on (in case the album has multiple discs) and the track's position within the disc. This conceptual relationship is shown in the middle of Figure 5-1. The association between albums and tracks is mediated by an 'Album Tracks' object that adds disc and position information, as well as keeping them in the right order. The model of the tracks themselves is familiar (we're leaving out artist and comment information in this diagram, in an effort to keep it simpler). This model is what we've captured in the album mapping document, Example 5-4. Let's examine the details of how it was done. Later we'll look at how Hibernate turns this specification into Java code (the bottom part of Figure 5-1) and a database schema (the top part).
Figure 5-1. Models of the tables, concepts, and objects involved in representing album tracks
If you compare lines 34-46 of Example 5-4 with one of the set mappings in the preceding chapter, you'll see a lot of similarity. It looks even more like Example 5-2, except that the association mapping has been moved inside a new composite-element mapping, lines 38-45. This element introduces the new AlbumTrack object we use to group the disc, position, and Track link needed to organize an album's tracks. Also, rather than being a many-to-many mapping (because an album generally has multiple tracks, and a given track file might be shared between several albums), the association between AlbumTrack and Track on line 39 is many-to-one: several AlbumTrack objects (from different albums) might refer to the same Track file if we're trying to save disk space, but each AlbumTrack object is concerned with only one Track. The list tag that contains AlbumTrack is implicitly one-to-many. (If you're still having trouble with these data modeling concepts, don't struggle too hard just now—the source code and schema coming up shortly will hopefully help you see what is happening here.)
Okay, back to this new composite-element definition. It specifies that we want to use a new AlbumTrack class as the values that appear in our Album data bean's tracks list. The body of the composite-element tag defines the properties of AlbumTrack, which group all the information we need about a track on an album. The syntax for these nested properties, lines 39-44, is no different than that of the outer mappings for Album's own properties. They can even include their own nested composite elements, collections, or (as seen here) meta attributes. This gives us tremendous flexibility to set up fine-grained mappings that retain a healthy degree of object-oriented encapsulation.
In our composite AlbumTrack mapping, we are recording an association with the actual Track (lines 39-42) to be played at each position within the Album, as well as the disc on which that track is found (line 43), and, on line 44, this entry's position on that disc (for example, track 3 of disc 2). This achieves the goals we started with and illustrates how arbitrary information can be attached to a collection of associations. The source for the class itself can be found in Example 5-5, and it might help clarify this discussion. Compare this source code with its graphical representation at the bottom of Figure 5-1.
You may have noticed that I chose an explicit column name of TRACK_ID to use for the many-to-one link to the TRACK table (line 41). I've actually been doing this in a number of places, but previously it didn't require an entire separate line. It's worth talking about the reasoning behind this choice. Without this instruction, Hibernate will just use the property name (track) for the column name. You can use any names you want for your columns, but Java Database Best Practices encourages naming foreign key columns the same as the primary keys in the original tables to which they refer. This helps data modeling tools recognize and display the 'natural joins' the foreign keys represent, which makes it easier for people to understand and work with the data. This consideration is also why I included the table names as part of the primary keys' column names.
5.3.2 What just happened?
I was all set to explain that by choosing to use a composite element to encapsulate our augmented track list, we'd have to write the Java source for AlbumTrack ourselves. I was sure this went far beyond the capabilities of the code generation tool. Much to my delight, when I tried ant codegen to see what sort of errors would result, the command reported success, and both Album.java and AlbumTrack.java appeared in the source directory!
NOTE
Sometimes it's nice to be proved wrong.
It was at this point that I went back and added the use-in-tostring meta attribute for the track many-to-one mapping inside the component. I wasn't sure this would work either, because the only examples of its use I've found in the reference manual are attached to actual property tags. But work it did, exactly as I hoped.
The Hibernate best practices encourage using fine-grained classes and mapping them as components. Given how easily the code generation tool allows you to create them from your mapping documents, there is absolutely no excuse for ignoring this advice. Example 5-5 shows the source generated for our nested composite mapping.
Example 5-5. Code generated for AlbumTrack.java
package com.oreilly.hh;
import java.io.Serializable;
import org.apache.commons.lang.builder.ToStringBuilder;
/**
* Represents an album in the music database, an organized list of tracks.
* @author Jim Elliott (with help from Hibernate)
*
*/
public class AlbumTrack implements Serializable {
/** nullable persistent field */
private int disc;
/** nullable persistent field */
private int positionOnDisc;
/** nullable persistent field */
private com.oreilly.hh.Track track;
/** full constructor */
public AlbumTrack(int disc, int positionOnDisc, com.oreilly.hh.Track track) {
this.disc = disc;
this.positionOnDisc = positionOnDisc;
this.track = track;
}
/** default constructor */
public AlbumTrack() {
}
public int getDisc() {
return this.disc;
}
public void setDisc(int disc) {
this.disc = disc;
}
public int getPositionOnDisc() {
return this.positionOnDisc;
}
public void setPositionOnDisc(int positionOnDisc) {
this.positionOnDisc = positionOnDisc;
}
public com.oreilly.hh.Track getTrack() {
return this.track;
}
public void setTrack(com.oreilly.hh.Track track) {
this.track = track;
}
public String toString() {
return new ToStringBuilder(this)
.append("track", getTrack())
.toString();
}
}
This looks similar to the generated code for entities we've seen in previous chapters, but it lacks an id property, which makes sense. Component classes don't need identifier fields, and they need not implement any special interfaces. The class JavaDoc is shared with the Album class, in which this component is used. The source of the Album class itself is a typical generated entity, so there's no need to reproduce it here.
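One small caveat: the generated class doesn't override equals() or hashCode(), and the Hibernate documentation recommends value-based equality for composite element classes (it matters especially when they're kept in sets). A hand-written sketch of what that might look like, using a trimmed-down, invented class with plain Java rather than the commons-lang builders:

```java
import java.util.Objects;

/** A trimmed-down component class with value-based equality (illustrative). */
class AlbumTrackSketch {
    private final int disc;
    private final int positionOnDisc;
    private final String trackFile; // stands in for the Track association

    AlbumTrackSketch(int disc, int positionOnDisc, String trackFile) {
        this.disc = disc;
        this.positionOnDisc = positionOnDisc;
        this.trackFile = trackFile;
    }

    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof AlbumTrackSketch)) return false;
        AlbumTrackSketch other = (AlbumTrackSketch) o;
        // Two component instances are "the same" when all their values match.
        return disc == other.disc
                && positionOnDisc == other.positionOnDisc
                && Objects.equals(trackFile, other.trackFile);
    }

    public int hashCode() {
        return Objects.hash(disc, positionOnDisc, trackFile);
    }
}
```

Since a component has no id of its own, value equality is the only sensible notion of identity for it, which is exactly why these methods are worth adding by hand.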
At this point we can build the schema for these new mappings, via ant schema. Example 5-6 shows highlights of the resulting schema creation process. This is the concrete HSQLDB representation of the schema modeled at the top of Figure 5-1.
Example 5-6. Additions to the schema caused by our new Album mapping
[schemaexport] create table ALBUM (
[schemaexport] ALBUM_ID INTEGER NOT NULL IDENTITY,
[schemaexport] TITLE VARCHAR(255) not null,
[schemaexport] numDiscs INTEGER,
[schemaexport] added DATE
[schemaexport] )
...
[schemaexport] create table ALBUM_COMMENTS (
[schemaexport] ALBUM_ID INTEGER not null,
[schemaexport] COMMENT VARCHAR(255)
[schemaexport] )
...
[schemaexport] create table ALBUM_ARTISTS (
[schemaexport] ALBUM_ID INTEGER not null,
[schemaexport] ARTIST_ID INTEGER not null,
[schemaexport] primary key (ALBUM_ID, ARTIST_ID)
[schemaexport] )
...
[schemaexport] create table ALBUM_TRACKS (
[schemaexport] ALBUM_ID INTEGER not null,
[schemaexport] TRACK_ID INTEGER,
[schemaexport] disc INTEGER,
[schemaexport] positionOnDisc INTEGER,
[schemaexport] POSITION INTEGER not null,
[schemaexport] primary key (ALBUM_ID, POSITION)
[schemaexport] )
...
[schemaexport] create index ALBUM_TITLE on ALBUM (TITLE)
...
[schemaexport] alter table ALBUM_COMMENTS add constraint FK1E2C21E43B7864F
foreign key (ALBUM_ID) references ALBUM
...
[schemaexport] alter table ALBUM_ARTISTS add constraint FK7BA403FC3B7864F foreign
key (ALBUM_ID) references ALBUM
...
[schemaexport] alter table ALBUM_TRACKS add constraint FKD1CBBC783B7864F foreign
key (ALBUM_ID) references ALBUM
...
[schemaexport] alter table ALBUM_TRACKS add constraint FKD1CBBC78697F14B foreign
key (TRACK_ID) references TRACK
You may find that making radical changes to the schema causes problems for Hibernate or the HSQLDB driver. When I switched between the above two approaches for mapping album tracks, I ran into trouble because the first set of mappings established database constraints that Hibernate didn't know to drop before trying to build the revised schema. This prevented it from dropping and recreating some tables. If this ever happens to you, you can delete the database file (music.script in the data directory) and start from scratch, which should work fine.
Figure 5-2 shows our enriched schema in HSQLDB's graphical management interface.
Figure 5-2. The schema with album-related tables
You might wonder why we use the separate Track class at all, rather than simply embedding all that information directly in our enhanced AlbumTracks collection. The simple answer is that not all tracks are part of an album—some might be singles, downloads, or otherwise independent. Given that we need a separate table to keep track of these anyway, it would be a poor design choice to duplicate its contents in the AlbumTracks table rather than associating with it. There is also a more subtle advantage to this approach, which is actually used in my own music database: this structure allows us to share a single track file between multiple albums. If the same song appears on an album, a 'best of' collection, and one or more period collections or sound tracks, linking all these albums to the same track file saves disk space.
Let's look at some sample code showing how to use these new data objects. Example 5-7 shows a class that creates an album record and its list of tracks, then prints it out to test the debugging support we've configured for the toString() method.
Example 5-7. Source of AlbumTest.java
1 package com.oreilly.hh;
2
3 import net.sf.hibernate.*;
4 import net.sf.hibernate.cfg.Configuration;
5
6 import java.sql.Time;
7 import java.util.*;
8
9 /**
10 * Create sample album data, letting Hibernate persist it for us.
11 */
12 public class AlbumTest {
13
14 /**
15 * Quick and dirty helper method to handle repetitive portion of creating
16 * album tracks. A real implementation would have much more flexibility.
17 */
18 private static void addAlbumTrack(Album album, String title, String file,
19 Time length, Artist artist, int disc,
20 int positionOnDisc, Session session)
21 throws HibernateException
22 {
23 Track track = new Track(title, file, length, new Date(), (short)0,
24 new HashSet(), new HashSet());
25 track.getArtists().add(artist);
26 session.save(track);
27 album.getTracks().add(new AlbumTrack(disc, positionOnDisc, track));
28 }
29
30 public static void main(String args[]) throws Exception {
31 // Create a configuration based on the properties file we've put
32 // in the standard place.
33 Configuration config = new Configuration();
34
35 // Tell it about the classes we want mapped.
36 config.addClass(Track.class).addClass(Artist.class);
37 config.addClass(Album.class);
38
39 // Get the session factory we can use for persistence
40 SessionFactory sessionFactory = config.buildSessionFactory();
41
42 // Ask for a session using the JDBC information we've configured
43 Session session = sessionFactory.openSession();
44 Transaction tx = null;
45 try {
46 // Create some data and persist it
47 tx = session.beginTransaction();
48
49 Artist artist = CreateTest.getArtist("Martin L. Gore", true,
50 session);
51 List albumTracks = new ArrayList(5);
52 Album album = new Album("Counterfeit e.p.", 1, new Date(),
53 albumTracks, new HashSet(), new HashSet());
54 album.getArtists().add(artist);
55 session.save(album);
56
57 addAlbumTrack(album, "Compulsion", "vol1/album83/track01.mp3",
58 Time.valueOf("00:05:29"), artist, 1, 1, session);
59 addAlbumTrack(album, "In a Manner of Speaking",
60 "vol1/album83/track02.mp3", Time.valueOf("00:04:21"),
61 artist, 1, 2, session);
62 addAlbumTrack(album, "Smile in the Crowd",
63 "vol1/album83/track03.mp3", Time.valueOf("00:05:06"),
64 artist, 1, 3, session);
65 addAlbumTrack(album, "Gone", "vol1/album83/track04.mp3",
66 Time.valueOf("00:03:32"), artist, 1, 4, session);
67 addAlbumTrack(album, "Never Turn Your Back on Mother Earth",
68 "vol1/album83/track05.mp3", Time.valueOf("00:03:07"),
69 artist, 1, 5, session);
70 addAlbumTrack(album, "Motherless Child", "vol1/album83/track06.mp3",
71 Time.valueOf("00:03:32"), artist, 1, 6, session);
72
73 System.out.println(album);
74
75 // We're done; make our changes permanent
76 tx.commit();
77
78 } catch (Exception e) {
79 if (tx != null) {
80 // Something went wrong; discard all partial changes
81 tx.rollback();
82 }
83 throw e;
84 } finally {
85 // No matter what, close the session
86 session.close();
87 }
88
89 // Clean up after ourselves
90 sessionFactory.close();
91 }
92 }
The addAlbumTrack() method starting on line 14 creates and persists a Track object given the specified parameters, associates it with a single Artist (line 25), then adds it to the supplied Album, recording the disc it's on and its position within that disc (line 27). In this simple example we're creating an album with just one disc. This quick-and-dirty method can't cope with many variations, but it does allow the example to be compressed nicely.
We also need a new target at the end of build.xml to invoke the class. Add the lines of Example 5-8 at the end of the file (but inside the project tag, of course).
Example 5-8. New target to run our album test class
<target name="atest" description="Creates and persists some album data"
depends="compile">
<java classname="com.oreilly.hh.AlbumTest" fork="true">
<classpath refid="project.class.path"/>
</java>
</target>
With this in place, assuming you've generated the schema, run ant ctest followed by ant atest. (Running ctest first is optional, but having some extra data in there to begin with makes the album data somewhat more interesting. Recall that you can run these targets in one command as ant ctest atest, and if you want to start by erasing the contents of the database first, you can invoke ant schema ctest atest.) The debugging output produced by this command is shown in Example 5-9. Although admittedly cryptic, you should be able to see that the album and tracks have been created, and the order of the tracks has been maintained.
Example 5-9. Output from running the album test
atest:
[java] com.oreilly.hh.Album@863cc1[id=0,title=Counterfeit e.p.,tracks=[com.
oreilly.hh.AlbumTrack@b3cc96[track=com.oreilly.hh.
Track@fea539[id=7,title=Compulsion]], com.oreilly.hh.AlbumTrack@3ca972[track=com.
oreilly.hh.Track@f2e328[id=8,title=In a Manner of Speaking]], com.oreilly.hh.
AlbumTrack@98a1f4[track=com.oreilly.hh.Track@1f6c18[id=9,title=Smile in the
Crowd]], com.oreilly.hh.AlbumTrack@b0d990[track=com.oreilly.hh.
Track@f1cdfb[id=10,title=Gone]], com.oreilly.hh.AlbumTrack@9baf0b[track=com.
oreilly.hh.Track@a59d2[id=11,title=Never Turn Your Back on Mother Earth]], com.
oreilly.hh.AlbumTrack@10c69[track=com.oreilly.hh.
Track@8f1ed7[id=12,title=Motherless Child]]]]
If we run our old query test, we can see both the old and new data, as in Example 5-10.
Example 5-10. All tracks are less than seven minutes long, whether from albums or otherwise
% ant qtest
Buildfile: build.xml
...
qtest:
[java] Track: "Russian Trance" (PPK) 00:03:30
[java] Track: "Video Killed the Radio Star" (The Buggles) 00:03:49
[java] Track: "Gravity's Angel" (Laurie Anderson) 00:06:06
[java] Track: "Adagio for Strings (Ferry Corsten Remix)" (Ferry Corsten,
William Orbit, Samuel Barber) 00:06:35
[java] Track: "Test Tone 1" 00:00:10
[java] Comment: Pink noise to test equalization
[java] Track: "Compulsion" (Martin L. Gore) 00:05:29
[java] Track: "In a Manner of Speaking" (Martin L. Gore) 00:04:21
[java] Track: "Smile in the Crowd" (Martin L. Gore) 00:05:06
[java] Track: "Gone" (Martin L. Gore) 00:03:32
[java] Track: "Never Turn Your Back on Mother Earth" (Martin L. Gore) 00:03:07
[java] Track: "Motherless Child" (Martin L. Gore) 00:03:32
BUILD SUCCESSFUL
Total time: 12 seconds
5.4 Lifecycle Associations
Hibernate is completely responsible for managing the ALBUM_TRACKS table, adding and deleting rows (and, if necessary, renumbering POSITION values) as entries are added to or removed from Album beans' tracks properties. You can test this by writing a test program to delete the second track from our test album and see the result. A very quick and dirty way to do this would be to add the following four lines (see Example 5-11) right after the existing tx.commit() line in Example 5-7 and then run ant schema ctest atest db .
Example 5-11. Deleting our album's second track
tx = session.beginTransaction();
album.getTracks().remove(1);
session.update(album);
tx.commit();
Doing so changes the contents of ALBUM_TRACKS as shown in Figure 5-4 (compare this with the original contents in Figure 5-3). The second record has been removed (remember that Java list elements are indexed starting with zero), and POSITION has been adjusted so that it retains its consecutive nature, corresponding to the indices of the list elements (the values you'd use when calling tracks.get()).
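The index-shifting behavior that Hibernate mirrors in the POSITION column is just the ordinary contract of java.util.List. A small self-contained sketch (plain Java, no Hibernate, using hypothetical track titles from the example) shows why removing element 1 leaves the remaining indices consecutive:

```java
import java.util.*;

public class PositionDemo {
    public static void main(String[] args) {
        // Simulate the album's tracks list; list indices play the
        // role of the POSITION column.
        List tracks = new ArrayList(Arrays.asList(new String[] {
            "Compulsion", "In a Manner of Speaking", "Smile in the Crowd" }));
        // Drop the second track (remember, indices start at zero).
        tracks.remove(1);
        // Later elements shift down so indices stay consecutive,
        // just as Hibernate renumbers POSITION for us.
        System.out.println(tracks);  // prints "[Compulsion, Smile in the Crowd]"
    }
}
```

Because Hibernate keeps POSITION synchronized with these list indices, `tracks.get(n)` and the row with `POSITION = n` always refer to the same track.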
Figure 5-4. Album track associations after deleting our album's second track
This happens because Hibernate understands that this list is 'owned' by the Album record, and that the 'lifecycles' of the two objects are intimately connected. This notion of lifecycle becomes more clear if you consider what happens if the entire Album is deleted: all of the associated records in ALBUM_TRACKS will be deleted as well. (Go ahead and modify the test program to try this if you're not convinced.)
Contrast this with the relationship between the ALBUM table and the TRACK table. Tracks are sometimes associated with albums, but they are sometimes independent. Removing a track from the list got rid of a row in ALBUM_TRACKS, eliminating the link between the album and track, but didn't get rid of the row in TRACK, so it didn't delete the persistent Track object itself. Similarly, deleting the Album would eliminate all the associations in the collection, but none of the actual Tracks. It's the responsibility of our code to take care of that when appropriate (probably after consulting the user, in case any of the track records might be shared across multiple albums, as discussed above).
If we don't need the flexibility of sharing the same track between albums—disk space is pretty cheap lately given the size of compressed audio—we can let Hibernate manage the TRACK records for the album in the same way it does the ALBUM_TRACKS collection. It won't assume it should do this, because Track and Album objects can exist independently, but we can establish a lifecycle relationship between them in the album mapping document.
NOTE
By now you're probably not surprised there's a way to automate this.
5.4.1 How do I do that?
Example 5-12 shows (in bold) the changes we'd make to the tracks property mapping in Album.hbm.xml.
Example 5-12. Establishing a lifecycle relationship between an album and its tracks
<list name="tracks" table="ALBUM_TRACKS" cascade="all">
<meta attribute="use-in-tostring">true</meta>
<key column="ALBUM_ID"/>
<index column="POSITION"/>
<composite-element class="com.oreilly.hh.AlbumTrack">
<many-to-one name="track" class="com.oreilly.hh.Track" cascade="all">
<meta attribute="use-in-tostring">true</meta>
<column name="TRACK_ID"/>
</many-to-one>
<property name="disc" type="integer"/>
<property name="positionOnDisc" type="integer"/>
</composite-element>
</list>
The cascade attribute tells Hibernate that you want operations performed on a 'parent' object to be transitively applied to its 'child' or 'dependent' objects. It's applicable to all forms of collections and associations. There are several possible values to choose among. The most common are none (the default), save-update, delete, and all (which combines save-update and delete). You can also change the default from none to save-update throughout your entire mapping document by supplying a default-cascade attribute in the hibernate-mapping tag itself.
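As a sketch of that last point, a mapping document that wants save-update cascading throughout could open like this (element content elided; only the added attribute is the point here):

```xml
<hibernate-mapping default-cascade="save-update">
    <!-- Every collection and association in this document now
         defaults to cascade="save-update" instead of "none";
         individual mappings can still override it. -->
    ...
</hibernate-mapping>
```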
In our example, we want the tracks owned by an album to be automatically managed by the album, so that when we delete the album, its tracks are deleted. Note that we need to apply the cascade attribute both to the tracks collection and its constituent track element to achieve this. Also, by using a cascade value of all, we eliminate the need to explicitly save any Track objects we create for the album—the addAlbumTrack() method of Example 5-7 no longer needs the line:
session.save(track);
By telling Hibernate that it's fully responsible for the relationship between an album and its track, we enable it to persist tracks when they're added to the album as well as delete them when the album itself is deleted.
Delegating this sort of bookkeeping to the mapping layer can be very convenient, freeing you to focus on more abstract and important tasks, so it is worth using when appropriate. It's reminiscent of the liberation provided by Java's pervasive garbage collection, but it can't be as comprehensive because there is no definitive way to know when you're finished with persistent data by performing reachability analysis; you need to indicate it by calling delete() and establishing lifecycle connections. The trade-off between flexibility and simple automation is yours to make, based on the nature of your data and the needs of your project.
Hibernate's management of lifecycle relationships is not foolproof—or perhaps it's more accurate to say it's not all-encompassing. For example, if you use Collections methods to remove a Track from an Album's tracks property, this breaks the link between the Album and Track but does not actually delete the Track record. Even if you later delete the entire Album, this Track will remain, because it wasn't linked to the Album at the time that it was deleted. Try some of these experiments by modifying AlbumTest.java appropriately and look at the resulting data in the tables!
5.5 Reflexive Associations
It's also possible for objects and tables to have associations back to themselves. This supports persistent recursive data structures like trees, in which nodes link to other nodes. Tracing through a database table storing such relationships using a SQL query interface is a major chore. Luckily, once it's mapped to Java objects, the process is much more readable and natural.
One way we might use a reflexive link in our music database is to allow alternate names for artists. This is useful more often than you might expect, because it makes it very easy to let the user find either 'The Smiths' or 'Smiths, The' depending on how they're thinking of the group, with little code, and in a language-independent way.
NOTE
I mean human language here, English versus Spanish or something else. Put the links in the data rather than trying to write tricky code to guess when an artist name should be permuted.
5.5.1 How do I do that?
All that's needed is to add another field to the Artist mapping in Artist.hbm.xml, establishing a link back to Artist. Example 5-13 shows one option.
Example 5-13. Supporting a reflexive association in the Artist class
<many-to-one name="actualArtist" class="com.oreilly.hh.Artist">
<meta attribute="use-in-tostring">true</meta>
</many-to-one>
This gives us an actualArtist property that we can set to the id of the 'definitive' Artist record when we're setting up an alternate name. For example, our 'The Smiths' record might have id 5, and its actualArtist field would be null since it is definitive. Then we can create an 'alias' Artist record with the name 'Smiths, The' at any time, and set the actualArtist field in that record to point to record 5.
This kind of reflexive link is one instance where a column containing a foreign key can't be named the same as the key column to which it is a link. We are associating a row in ARTIST with another row in ARTIST, and of course the table already has a column named ARTIST_ID.
Why is this association set up as many-to-one? There might be many alias records that point to one particular definitive Artist. So each nickname needs to store the id of the actual artist record for which it is an alternative name. This is, in the language of data modeling, a many-to-one relationship.
Code that looks up artists just needs to check the actualArtist property before returning. If it's null, all is well. Otherwise it should return the record indicated by actualArtist. Example 5-14 shows how we could extend the getArtist() method in CreateTest to support this new feature (additions are in bold). Notice that the Artist constructor gets a new argument for setting actualArtist.
Example 5-14. Artist lookup method supporting resolution of alternate names
public static Artist getArtist(String name, boolean create,
Session session)
throws HibernateException
{
Query query = session.getNamedQuery(
"com.oreilly.hh.artistByName");
query.setString("name", name);
Artist found = (Artist)query.uniqueResult();
if (found == null && create) {
found = new Artist(name, null, new HashSet());
session.save(found);
}
if (found != null && found.getActualArtist() != null) {
return found.getActualArtist();
}
return found;
}
Hopefully this chapter has given you a feel for the rich and powerful ways you can use associations and collections in Hibernate. As should be obvious from the way you can nest and combine these capabilities, there are far more variations than we can hope to cover in a book like this.
The good news is that Hibernate seems well equipped to handle almost any kind of relationship your application might need, and it can even do the drudge work of building the data classes and database schema for you. This works much more effectively and deeply than I ever expected it would when I started creating these examples.
Persistent Enumerated Types
An enumerated type is a common and useful programming abstraction allowing a value to be selected from a fixed set of named choices. These were originally well represented in Pascal, but C took such a minimal approach (essentially just letting you assign symbolic names to interchangeable integer values) that early Java releases reserved C's enum keyword but declined to implement it. A better, object-oriented approach known as the "typesafe enum pattern" evolved and was popularized in Joshua Bloch's Effective Java Programming Language Guide (Addison-Wesley). This approach requires a fair amount of boilerplate coding, but it lets you do all kinds of interesting and powerful things. The Java 1.5 specification resuscitates the enum keyword as an easy way to get the power of typesafe enumerations without all the tedious boilerplate coding, and it provides other nifty benefits.
Regardless of how you implement an enumerated type, you're sometimes going to want to be able to persist such values to a database.
6.1 Defining a Persistent Enumerated Type
NOTE
C-style enumerations still appear too often in Java. Older parts of the Sun API contain many of them.
Hibernate has been around for a while and (at least as of this writing) Java 1.5 isn't yet released, so the support for enumerations in Hibernate can't take advantage of its new enum keyword. Instead, Hibernate lets you define your own typesafe enumeration classes however you like, and it provides a mechanism to help you get them into and out of a database, by translating them to and from small integer values. This is something of a regression to the world of C, but it is useful nonetheless.
In our music database, for example, we might want to add a field to our Track class that tells us the medium from which it was imported.
6.1.1 How do I do that?
The key to adding persistence support for our enumeration is to have it implement Hibernate's PersistentEnum interface. This interface has two methods, toInt() and fromInt(), that Hibernate uses to translate between the enumeration constants and values that represent them in a database.
Let's suppose we want to be able to specify whether our tracks came from cassette tapes, vinyl, VHS tapes, CDs, a broadcast, an internet download site, or a digital audio stream. (We could go really nuts and distinguish between Internet streams and satellite radio services like Sirius or XM, or radio versus television broadcast, but this is plenty to demonstrate the important ideas.)
Without any consideration of persistence, our typesafe enumeration class might look something like Example 6-1. (The JavaDoc has been compressed to take less printed space, but the downloadable version is formatted normally.)
Example 6-1. SourceMedia.java, our initial typesafe enumeration
package com.oreilly.hh;
import java.util.*;
import java.io.Serializable;
/**
* This is a typesafe enumeration that identifies the media on which an
* item in our music database was obtained.
**/
public class SourceMedia implements Serializable {
/** Stores the external name of this instance, by which it can be retrieved. */
private final String name;
/**
* Stores the human-readable description of this instance, by which it is
* identified in the user interface.
*/
private final transient String description;
/**
* Return the external name associated with this instance.
* @return the name by which this instance is identified in code.
**/
public String getName() {
return name;
}
/**
* Return the description associated with this instance.
* @return the human-readable description by which this instance is
* identified in the user interface.
**/
public String getDescription() {
return description;
}
/** Keeps track of all instances by name, for efficient lookup. */
private static final Map instancesByName = new HashMap();
/**
* Constructor is private to prevent instantiation except during class
* loading.
*
* @param name the external name of the message type.
* @param description the human readable description of the message type,
* by which it is presented in the user interface.
*/
private SourceMedia(String name, String description) {
this.name = name;
this.description = description;
// Record this instance in the collection that tracks the enumeration
instancesByName.put(name, this);
}
/** The instance that represents music obtained from cassette tape. */
public static final SourceMedia CASSETTE =
new SourceMedia("cassette", "Audio Cassette Tape");
/** The instance that represents music obtained from vinyl. */
public static final SourceMedia VINYL =
new SourceMedia("vinyl", "Vinyl Record");
/** The instance that represents music obtained from VHS tapes. */
public static final SourceMedia VHS =
new SourceMedia("vhs", "VHS Videocassette Tape");
/** The instance that represents music obtained from a compact disc. */
public static final SourceMedia CD =
new SourceMedia("cd", "Compact Disc");
/** The instance that represents music obtained from a broadcast. */
public static final SourceMedia BROADCAST =
new SourceMedia("broadcast", "Analog Broadcast");
/** The instance that represents music obtained as an Internet download. */
public static final SourceMedia DOWNLOAD =
new SourceMedia("download", "Internet Download");
/** The instance that represents music from a digital audio stream. */
public static final SourceMedia STREAM =
new SourceMedia("stream", "Digital Audio Stream");
/**
* Obtain the collection of all legal enumeration values.
* @return all instances of this typesafe enumeration.
*/
public static Collection getAllValues() {
return Collections.unmodifiableCollection(instancesByName.values());
}
/**
* Look up an instance by name.
*
* @param name the external name of an instance.
* @return the corresponding instance.
* @throws NoSuchElementException if there is no such instance.
*/
public static SourceMedia getInstanceByName(String name) {
SourceMedia result = (SourceMedia)instancesByName.get(name);
if (result == null) {
throw new NoSuchElementException(name);
}
return result;
}
/** Return a string representation of this object. */
public String toString() {
return description;
}
/** Ensure that deserialization preserves the singleton property. */
private Object readResolve() {
return getInstanceByName(name);
}
}
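The readResolve() method at the end of Example 6-1 is easy to overlook but important: without it, deserialization would manufacture a second instance with the same name, breaking the == comparisons that make the pattern "typesafe." A compact, self-contained sketch (using a hypothetical Medium class, not the full SourceMedia) demonstrates the idiom:

```java
import java.io.*;
import java.util.*;

// Hypothetical mini typesafe enum illustrating the readResolve() idiom.
class Medium implements Serializable {
    private static final Map instancesByName = new HashMap();
    private final String name;
    private Medium(String name) {
        this.name = name;
        instancesByName.put(name, this);
    }
    public static final Medium CD = new Medium("cd");
    public static final Medium VINYL = new Medium("vinyl");
    // Without this, deserialization would create a duplicate instance.
    private Object readResolve() {
        return instancesByName.get(name);
    }
}

public class ReadResolveDemo {
    // Serialize the instance to a byte array, then read it back.
    static Medium roundTrip(Medium m) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(m);
        out.close();
        ObjectInputStream in = new ObjectInputStream(
            new ByteArrayInputStream(bytes.toByteArray()));
        return (Medium) in.readObject();
    }
    public static void main(String[] args) throws Exception {
        // readResolve() hands back the canonical instance, so identity
        // comparison still works after a serialization round trip.
        System.out.println(roundTrip(Medium.CD) == Medium.CD);  // prints "true"
    }
}
```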
To add persistence support for this class, all we need to do is implement the PersistentEnum interface. Unfortunately, this requires us to assign an integer value to each instance, and to provide a way of looking up instances by this integer value. This is the "regression to C" mentioned in the introduction. Most typesafe enumerations with which I've worked have not included such an integer representation, since (as in this example) it was not part of their object-oriented semantics. Still, adding this integer property is not that hard. Example 6-2 shows the revisions we need to make in bold. (To save space, unchanged members and methods and some JavaDoc are omitted from this version of the example; the downloadable version is complete.)
Example 6-2. Changes to SourceMedia.java in order to support persistence using Hibernate
package com.oreilly.hh;
import net.sf.hibernate.PersistentEnum;
import java.util.*;
import java.io.Serializable;
/**
* This is a typesafe enumeration that identifies the media on which an
* item in our music database was obtained.
**/
public class SourceMedia implements PersistentEnum, Serializable {
...
/** Stores the integer value used by Hibernate to persist this instance. */
private final int code;
...
/**
* Return the persistence code associated with this instance, as
* mandated by the {@link PersistentEnum} interface.
*/
public int toInt() {
return code;
}
...
/** Keeps track of all instances by code, for efficient lookup. */
private static final Map instancesByCode = new HashMap();
/**
* Constructor is private to prevent instantiation except during class
* loading.
*
* @param name the external name of the message type.
* @param description the human readable description of the message type,
* by which it is presented in the user interface.
* @param code the persistence code by which Hibernate stores the instance.
*/
private SourceMedia(String name, String description, int code) {
this.name = name;
this.description = description;
this.code = code;
// Record this instance in the collections that track the enumeration
instancesByName.put(name, this);
instancesByCode.put(new Integer(code), this);
}
...
public static final SourceMedia CASSETTE =
new SourceMedia("cassette", "Audio Cassette Tape", 0);
...
public static final SourceMedia VINYL =
new SourceMedia("vinyl", "Vinyl Record", 1);
...
public static final SourceMedia VHS =
new SourceMedia("vhs", "VHS Videocassette Tape", 2);
...
public static final SourceMedia CD =
new SourceMedia("cd", "Compact Disc", 3);
...
public static final SourceMedia BROADCAST =
new SourceMedia("broadcast", "Analog Broadcast", 4);
...
public static final SourceMedia DOWNLOAD =
new SourceMedia("download", "Internet Download", 5);
...
public static final SourceMedia STREAM =
new SourceMedia("stream", "Digital Audio Stream", 6);
...
/**
* Look up an instance by code, as specified by the {@link PersistentEnum}
* interface.
*
* @param code the persistence code of an instance.
* @return the corresponding instance.
* @throws NoSuchElementException if there is no such instance.
*/
public static SourceMedia fromInt(int code) {
SourceMedia result =
(SourceMedia)instancesByCode.get(new Integer(code));
if (result == null) {
throw new NoSuchElementException("code=" + code);
}
return result;
}
...
}
An alternative to adding the codes to the constructor arguments is to use a static counter that gets incremented each time a new instance is constructed. Although this is more convenient and concise, it makes it much harder to tell by inspection which code goes with which instance, and it also means you need to be careful to add any new instances to the end of the construction code if you don't want existing values to be rearranged (this is a problem if you've already got values persisted in the database). These are some of the reasons it'd be nicer to avoid the numeric codes completely, and use the symbolic names to represent instances in the database.
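To make the fragility concrete, here is a sketch of that static-counter variant (a hypothetical CountedMedia class, not the book's SourceMedia). Codes are assigned in declaration order, so inserting a new constant anywhere but the end silently renumbers everything after it:

```java
import java.util.*;

// Hypothetical variant that assigns persistence codes from a counter.
class CountedMedia {
    private static int nextCode = 0;
    private static final Map instancesByCode = new HashMap();
    private final String name;
    private final int code;
    private CountedMedia(String name) {
        this.name = name;
        this.code = nextCode++;  // code depends on construction order!
        instancesByCode.put(new Integer(code), this);
    }
    public static final CountedMedia CASSETTE = new CountedMedia("cassette");
    public static final CountedMedia VINYL = new CountedMedia("vinyl");
    public static final CountedMedia CD = new CountedMedia("cd");
    public int toInt() { return code; }
    public static CountedMedia fromInt(int code) {
        return (CountedMedia) instancesByCode.get(new Integer(code));
    }
}

public class CounterDemo {
    public static void main(String[] args) {
        // Declaration order yields cassette=0, vinyl=1, cd=2. Adding a
        // new constant above CD would shift its code to 3, corrupting
        // any rows already persisted with the old value.
        System.out.println(CountedMedia.CD.toInt());  // prints "2"
    }
}
```

The explicit-argument approach in Example 6-2 trades a little verbosity for codes you can verify by inspection, which is why the book uses it.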
NOTE
If you're in too much suspense, rest assured that the next chapter shows a nice way to avoid the need for such numeric codes.
The good news is that once we've got our persistent enum type defined, it's extremely easy to use it. Let's see how!
6.2 Working with Persistent Enumerations
If you were thinking about it, you may have noticed that we never defined a persistence mapping for the SourceMedia class in the first part of this chapter. That's because our persistent enumerated type is a value that gets persisted as part of one or more entities, rather than being an entity unto itself.
In that light, it's not surprising that we've not yet done any mapping. That happens when it's time to actually use the persistent enumeration.
6.2.1 How do I do that?
Recall that we wanted to keep track of the source media for the music tracks in our jukebox system. That means we want to use the SourceMedia enumeration in our Track mapping. We can simply add a new property tag to the class definition in Track.hbm.xml, as shown in Example 6-3.
Example 6-3. Adding the sourceMedia property to the Track mapping document
...
<property name="volume" type="short">
<meta attribute="field-description">How loud to play the track</meta>
</property>
<property name="sourceMedia" type="com.oreilly.hh.SourceMedia">
<meta attribute="field-description">Media on which track was obtained</meta>
<meta attribute="use-in-tostring">true</meta>
</property>
</class>
...
Because the type of our sourceMedia property names a class that implements the PersistentEnum interface, Hibernate knows to persist it using its built-in enumeration support.
With this addition in place, running ant codegen updates our Track class to include the new property. The signature of the full-blown Track constructor now looks like this:
public Track(String title, String filePath, Date playTime, Date added,
short volume, com.oreilly.hh.SourceMedia sourceMedia,
Set artists, Set comments) { ... }
We need to make corresponding changes in CreateTest.java:
Track track = new Track("Russian Trance",
"vol2/album610/track02.mp3",
Time.valueOf("00:03:30"), new Date(),
(short)0, SourceMedia.CD,
new HashSet(), new HashSet());
...
track = new Track("Video Killed the Radio Star",
"vol2/album611/track12.mp3",
Time.valueOf("00:03:49"), new Date(),
(short)0, SourceMedia.VHS,
new HashSet(), new HashSet());
And so on. To match the results shown later, mark the rest as coming from CDs, except for "The World '99", which comes from a stream; give "Test Tone 1" a null sourceMedia value. At this point, run ant schema to rebuild the database schema with support for the new property, and run ant ctest to create the sample data.
6.2.2 What just happened?
Our TRACK table now contains an integer column to store the sourceMedia property. We can see its values by looking at the contents of the table after creating the sample data (the easiest way is to run a query within ant db , as shown in Figure 6-1).
We can verify that the values persisted to the database are correct by cross-checking the codes assigned to our persistent enumeration. Alternately, we can see a more meaningful version of the information by slightly enhancing the query test to print this property for the tracks it retrieves. The necessary changes are in bold in Example 6-4.
Figure 6-1. Source media information in the TRACK table
Example 6-4. Displaying source media in QueryTest.java
...
// Print the tracks that will fit in seven minutes
List tracks = tracksNoLongerThan(Time.valueOf("00:07:00"),
session);
for (ListIterator iter = tracks.listIterator() ;
iter.hasNext() ; ) {
Track aTrack = (Track)iter.next();
String mediaInfo = "";
if (aTrack.getSourceMedia() != null) {
mediaInfo = ", from " +
aTrack.getSourceMedia().getDescription();
}
System.out.println("Track: \"" + aTrack.getTitle() + "\" " +
listArtistNames(aTrack.getArtists()) +
aTrack.getPlayTime() + mediaInfo);
With these enhancements, running ant qtest yields the output shown in Example 6-5. Tracks with non-null source media values now have "from" and the appropriate media description displayed at the end.
Example 6-5. Human-oriented display of source media information
...
qtest:
[java] Track: "Russian Trance" (PPK) 00:03:30, from Compact Disc
[java] Track: "Video Killed the Radio Star" (The Buggles) 00:03:49, from VHS
Videocassette Tape
[java] Track: "Gravity's Angel" (Laurie Anderson) 00:06:06, from Compact Disc
[java] Track: "Adagio for Strings (Ferry Corsten Remix)" (Ferry Corsten,
William Orbit, Samuel Barber) 00:06:35, from Compact Disc
[java] Track: "Test Tone 1" 00:00:10
[java] Comment: Pink noise to test equalization
Note that if we hadn't decided to do our own fancy formatting of a subset of the tracks' properties in QueryTest, and had instead relied on the toString() method in Track, we'd not have needed to make any changes to QueryTest to see this new information. Our mapping document specified that the sourceMedia property should be included in the toString() result, which would have taken care of it. You can inspect the generated toString() source to check this, or write a simple test program to see what the toString() output looks like. An excellent candidate would be to fix AlbumTest.java so it will compile and run after our changes to Track. The easiest fix is to simply hardcode the addAlbumTrack() method to assume everything comes from CDs, as in Example 6-6 (the JavaDoc already excuses such shameful rigidity).
Example 6-6. Fixing AlbumTest.java to support source media
/**
* Quick and dirty helper method to handle repetitive portion of creating
* album tracks. A real implementation would have much more flexibility.
*/
private static void addAlbumTrack(Album album, String title, String file,
Time length, Artist artist, int disc,
int positionOnDisc, Session session)
throws HibernateException
{
Track track = new Track(title, file, length, new Date(), (short)0,
SourceMedia.CD, new HashSet(), new HashSet());
track.getArtists().add(artist);
// session.save(track);
album.getTracks().add(new AlbumTrack(disc, positionOnDisc, track));
}
With this fix in place, running ant atest shows that the source media information propagates all the way up to Album's own toString() method:
[java] com.oreilly.hh.Album@e0f945[id=0,title=Counterfeit e.p.,
tracks=[com.oreilly.hh.AlbumTrack@1370ab[track=com.oreilly.hh.
Track@49f9fa[id=<null>,title=Compulsion,sourceMedia=Compact Disc]], com.
oreilly.hh.AlbumTrack@ba936a[track=com.oreilly.hh.Track@2421db[id=<null>,
title=In a Manner of Speaking,sourceMedia=Compact Disc]], com.oreilly.hh.
AlbumTrack@2ad974[track=com.oreilly.hh.Track@2a7640[id=<null>,title=Smile in
the Crowd,sourceMedia=Compact Disc]], com.oreilly.hh.
AlbumTrack@b9808e[track=com.oreilly.hh.Track@a721e2[id=<null>,
title=Gone,sourceMedia=Compact Disc]], com.oreilly.hh.
AlbumTrack@a1ad7d[track=com.oreilly.hh.Track@851576[id=<null>,title=Never
Turn Your Back on Mother Earth,sourceMedia=Compact Disc]], com.oreilly.hh.
AlbumTrack@442c19[track=com.oreilly.hh.Track@ab2ddb[id=<null>,
title=Motherless Child,sourceMedia=Compact Disc]]]]
With a little work, Hibernate lets you extend your typesafe enumerations to support persistence. And once you've invested that effort, you can persist them as easily as any other value type for which native support exists.
It will be interesting to see how Hibernate evolves to take advantage of the exciting enum keyword support in Java 1.5 once that's been out for a while. The need to implement PersistentEnum will probably disappear, since all real enums will already extend java.lang.Enum and will have interesting ways to obtain specific members. I hope that as Hibernate evolves to support these new first-class enumerations, it will also allow their symbolic enumeration constants to be stored in the database, rather than requiring the use of a cryptic integer column as it does today. In an ideal world, it will even be able to take advantage of the native support for enumerations provided by some databases.
If you're interested in an alternate approach to persisting typesafe enumerations that can achieve some of these goals today, read on to Chapter 7 in which the mysteries of custom type mapping are explored!
public static final SourceMedia DOWNLOAD =
new SourceMedia("download", "Internet Download");
/** The instance that represents music from a digital audio stream. */
public static final SourceMedia STREAM =
new SourceMedia("stream", "Digital Audio Stream");
/**
* Obtain the collection of all legal enumeration values.
* @return all instances of this typesafe enumeration.
*/
public static Collection getAllValues() {
return Collections.unmodifiableCollection(instancesByName.values());
}
/**
* Look up an instance by name.
*
* @param name the external name of an instance.
* @return the corresponding instance.
* @throws NoSuchElementException if there is no such instance.
*/
public static SourceMedia getInstanceByName(String name) {
SourceMedia result = (SourceMedia)instancesByName.get(name);
if (result == null) {
throw new NoSuchElementException(name);
}
return result;
}
/** Return a string representation of this object. */
public String toString() {
return description;
}
/** Ensure that deserialization preserves the singleton property. */
private Object readResolve() {
return getInstanceByName(name);
}
}
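The readResolve() hook at the end deserves a word: without it, Java serialization would construct a brand-new SourceMedia object on deserialization, breaking the singleton property that makes == comparisons safe. Here is a minimal, self-contained sketch of the same idiom (Color is a hypothetical stand-in class, not part of the music database):

```java
import java.io.*;
import java.util.*;

/** Hypothetical two-value typesafe enumeration, used only to demonstrate readResolve. */
public class Color implements Serializable {
    /** Tracks all instances by name, so readResolve can find the canonical one. */
    private static final Map instancesByName = new HashMap();

    private final String name;

    private Color(String name) {
        this.name = name;
        instancesByName.put(name, this);
    }

    public static final Color RED = new Color("red");
    public static final Color BLUE = new Color("blue");

    /** Without this hook, deserialization would manufacture a duplicate instance. */
    private Object readResolve() {
        return instancesByName.get(name);
    }

    /** Serializes a value to a byte array and reads it back. */
    public static Color roundTrip(Color in) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bytes);
            out.writeObject(in);
            out.flush();
            ObjectInputStream back = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()));
            return (Color) back.readObject();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Deserializing RED hands back the canonical instance, so identity comparisons keep working across serialization boundaries.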
To add persistence support for this class, all we need to do is implement the PersistentEnum interface. Unfortunately, this requires us to assign an integer value to each instance, and to provide a way of looking up instances by this integer value. This is the "regression to C" mentioned in the introduction. Most typesafe enumerations with which I've worked have not included such an integer representation, since (as in this example) it was not part of their object-oriented semantics. Still, adding this integer property is not that hard. Example 6-2 shows the revisions we need to make in bold. (To save space, unchanged members and methods and some JavaDoc are omitted from this version of the example; the downloadable version is complete.)
Example 6-2. Changes to SourceMedia.java in order to support persistence using Hibernate
package com.oreilly.hh;
import net.sf.hibernate.PersistentEnum;
import java.util.*;
import java.io.Serializable;
/**
* This is a typesafe enumeration that identifies the media on which an
* item in our music database was obtained.
**/
public class SourceMedia implements PersistentEnum, Serializable {
...
/** Stores the integer value used by Hibernate to persist this instance. */
private final int code;
...
/**
* Return the persistence code associated with this instance, as
* mandated by the {@link PersistentEnum} interface.
*/
public int toInt() {
return code;
}
...
/** Keeps track of all instances by code, for efficient lookup. */
private static final Map instancesByCode = new HashMap();
/**
* Constructor is private to prevent instantiation except during class
* loading.
*
* @param name the external name of the message type.
* @param description the human readable description of the message type,
* by which it is presented in the user interface.
* @param code the persistence code by which Hibernate stores the instance.
*/
private SourceMedia(String name, String description, int code) {
this.name = name;
this.description = description;
this.code = code;
// Record this instance in the collections that track the enumeration
instancesByName.put(name, this);
instancesByCode.put(new Integer(code), this);
}
...
public static final SourceMedia CASSETTE =
new SourceMedia("cassette", "Audio Cassette Tape", 0);
...
public static final SourceMedia VINYL =
new SourceMedia("vinyl", "Vinyl Record", 1);
...
public static final SourceMedia VHS =
new SourceMedia("vhs", "VHS Videocassette Tape", 2);
...
public static final SourceMedia CD =
new SourceMedia("cd", "Compact Disc", 3);
...
public static final SourceMedia BROADCAST =
new SourceMedia("broadcast", "Analog Broadcast", 4);
...
public static final SourceMedia DOWNLOAD =
new SourceMedia("download", "Internet Download", 5);
...
public static final SourceMedia STREAM =
new SourceMedia("stream", "Digital Audio Stream", 6);
...
/**
* Look up an instance by code, as specified by the {@link PersistentEnum}
* interface.
*
* @param code the persistence code of an instance.
* @return the corresponding instance.
* @throws NoSuchElementException if there is no such instance.
*/
public static SourceMedia fromInt(int code) {
SourceMedia result =
(SourceMedia)instancesByCode.get(new Integer(code));
if (result == null) {
throw new NoSuchElementException("code=" + code);
}
return result;
}
...
}
An alternative to adding the codes to the constructor arguments is to use a static counter that gets incremented each time a new instance is constructed. Although this is more convenient and concise, it makes it much harder to tell by inspection which code goes with which instance, and it also means you need to be careful to add any new instances to the end of the construction code if you don't want existing values to be rearranged (this is a problem if you've already got values persisted in the database). These are some of the reasons it'd be nicer to avoid the numeric codes completely, and use the symbolic names to represent instances in the database.
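For concreteness, the counter approach would look roughly like this self-contained sketch (Medium is a hypothetical stand-in; the book's example deliberately passes explicit codes instead):

```java
import java.util.*;

/** Hypothetical enumeration that assigns persistence codes from a static counter. */
public class Medium {
    private static int nextCode = 0;           // incremented once per instance
    private static final Map instancesByCode = new HashMap();

    private final int code;
    private final String name;

    private Medium(String name) {
        this.name = name;
        this.code = nextCode++;                // code depends on construction order!
        instancesByCode.put(new Integer(code), this);
    }

    // Reordering or inserting constants here silently renumbers everything below.
    public static final Medium CASSETTE = new Medium("cassette");
    public static final Medium VINYL = new Medium("vinyl");
    public static final Medium CD = new Medium("cd");

    public int toInt() { return code; }

    public static Medium fromInt(int code) {
        Medium result = (Medium) instancesByCode.get(new Integer(code));
        if (result == null) throw new NoSuchElementException("code=" + code);
        return result;
    }
}
```

Note how CD ends up with code 2 purely because it was constructed third; inserting a new constant above it would silently renumber it, which is exactly the fragility described above.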
NOTE
If you're in too much suspense, rest assured that the next chapter shows a nice way to avoid the need for such numeric codes.
The good news is that once we've got our persistent enum type defined, it's extremely easy to use it. Let's see how!
6.2 Working with Persistent Enumerations
You may have noticed that we never defined a persistence mapping for the SourceMedia class in the first part of this chapter. That's because our persistent enumerated type is a value that gets persisted as part of one or more entities, rather than being an entity unto itself.
In that light, it's not surprising that we've not yet done any mapping. That happens when it's time to actually use the persistent enumeration.
6.2.1 How do I do that?
Recall that we wanted to keep track of the source media for the music tracks in our jukebox system. That means we want to use the SourceMedia enumeration in our Track mapping. We can simply add a new property tag to the class definition in Track.hbm.xml, as shown in Example 6-3.
Example 6-3. Adding the sourceMedia property to the Track mapping document
...
<property name="volume" type="short">
<meta attribute="field-description">How loud to play the track</meta>
</property>
<property name="sourceMedia" type="com.oreilly.hh.SourceMedia">
<meta attribute="field-description">Media on which track was obtained</meta>
<meta attribute="use-in-tostring">true</meta>
</property>
</class>
...
Because the type of our sourceMedia property names a class that implements the PersistentEnum interface, Hibernate knows to persist it using its built-in enumeration support.
With this addition in place, running ant codegen updates our Track class to include the new property. The signature of the full-blown Track constructor now looks like this:
public Track(String title, String filePath, Date playTime, Date added,
short volume, com.oreilly.hh.SourceMedia sourceMedia,
Set artists, Set comments) { ... }
We need to make corresponding changes in CreateTest.java:
Track track = new Track("Russian Trance",
"vol2/album610/track02.mp3",
Time.valueOf("00:03:30"), new Date(),
(short)0, SourceMedia.CD,
new HashSet(), new HashSet());
...
track = new Track("Video Killed the Radio Star",
"vol2/album611/track12.mp3",
Time.valueOf("00:03:49"), new Date(),
(short)0, SourceMedia.VHS,
new HashSet(), new HashSet());
And so on. To match the results shown later, mark the rest as coming from CDs, except for "The World '99," which comes from a stream, and "Test Tone 1," which gets a null sourceMedia value. At this point, run ant schema to rebuild the database schema with support for the new property, and run ant ctest to create the sample data.
6.2.2 What just happened?
Our TRACK table now contains an integer column to store the sourceMedia property. We can see its values by looking at the contents of the table after creating the sample data (the easiest way is to run a query within ant db, as shown in Figure 6-1).
We can verify that the values persisted to the database are correct by cross-checking the codes assigned to our persistent enumeration. Alternatively, we can see a more meaningful version of the information by slightly enhancing the query test to print this property for the tracks it retrieves. The necessary changes are in bold in Example 6-4.
Figure 6-1. Source media information in the TRACK table
Example 6-4. Displaying source media in QueryTest.java
...
// Print the tracks that will fit in seven minutes
List tracks = tracksNoLongerThan(Time.valueOf("00:07:00"),
session);
for (ListIterator iter = tracks.listIterator() ;
iter.hasNext() ; ) {
Track aTrack = (Track)iter.next();
String mediaInfo = "";
if (aTrack.getSourceMedia() != null) {
mediaInfo = ", from " +
aTrack.getSourceMedia().getDescription();
}
System.out.println("Track: \"" + aTrack.getTitle() + "\" " +
listArtistNames(aTrack.getArtists()) +
aTrack.getPlayTime() + mediaInfo);
With these enhancements, running ant qtest yields the output shown in Example 6-5. Tracks with non-null source media values now have "from" and the appropriate media description displayed at the end.
Example 6-5. Human-oriented display of source media information
...
qtest:
[java] Track: "Russian Trance" (PPK) 00:03:30, from Compact Disc
[java] Track: "Video Killed the Radio Star" (The Buggles) 00:03:49, from VHS
Videocassette Tape
[java] Track: "Gravity's Angel" (Laurie Anderson) 00:06:06, from Compact Disc
[java] Track: "Adagio for Strings (Ferry Corsten Remix)" (Ferry Corsten,
William Orbit, Samuel Barber) 00:06:35, from Compact Disc
[java] Track: "Test Tone 1" 00:00:10
[java] Comment: Pink noise to test equalization
Note that if we hadn't decided to do our own fancy formatting of a subset of the tracks' properties in QueryTest and instead relied on the toString() method in Track, we'd not have needed to make any changes to QueryTest to see this new information. Our mapping document specified that the sourceMedia property should be included in the toString() result, which would have taken care of it. You can inspect the generated toString() source to check this, or write a simple test program to see what the toString() output looks like. An excellent candidate would be to fix AlbumTest.java so it will compile and run after our changes to Track. The easiest fix is to simply hardcode the addAlbumTrack() method to assume everything comes from CDs, as in Example 6-6 (the JavaDoc already excuses such shameful rigidity).
Example 6-6. Fixing AlbumTest.java to support source media
/**
* Quick and dirty helper method to handle repetitive portion of creating
* album tracks. A real implementation would have much more flexibility.
*/
private static void addAlbumTrack(Album album, String title, String file,
Time length, Artist artist, int disc,
int positionOnDisc, Session session)
throws HibernateException
{
Track track = new Track(title, file, length, new Date(), (short)0,
SourceMedia.CD, new HashSet(), new HashSet());
track.getArtists().add(artist);
session.save(track);
album.getTracks().add(new AlbumTrack(disc, positionOnDisc, track));
}
With this fix in place, running ant atest shows that the source media information propagates all the way up to Album's own toString() method:
[java] com.oreilly.hh.Album@e0f945[id=0,title=Counterfeit e.p.,
tracks=[com.oreilly.hh.AlbumTrack@1370ab[track=com.oreilly.hh.
Track@49f9fa[id=<null>,title=Compulsion,sourceMedia=Compact Disc]], com.
oreilly.hh.AlbumTrack@ba936a[track=com.oreilly.hh.Track@2421db[id=<null>,
title=In a Manner of Speaking,sourceMedia=Compact Disc]], com.oreilly.hh.
AlbumTrack@2ad974[track=com.oreilly.hh.Track@2a7640[id=<null>,title=Smile in
the Crowd,sourceMedia=Compact Disc]], com.oreilly.hh.
AlbumTrack@b9808e[track=com.oreilly.hh.Track@a721e2[id=<null>,
title=Gone,sourceMedia=Compact Disc]], com.oreilly.hh.
AlbumTrack@a1ad7d[track=com.oreilly.hh.Track@851576[id=<null>,title=Never
Turn Your Back on Mother Earth,sourceMedia=Compact Disc]], com.oreilly.hh.
AlbumTrack@442c19[track=com.oreilly.hh.Track@ab2ddb[id=<null>,
title=Motherless Child,sourceMedia=Compact Disc]]]]
With a little work, Hibernate lets you extend your typesafe enumerations to support persistence. And once you've invested that effort, you can persist them as easily as any other value type for which native support exists.
It will be interesting to see how Hibernate evolves to take advantage of the exciting enum keyword support in Java 1.5 once that's been out for a while. The need to implement PersistentEnum will probably disappear, since all real enums will already extend java.lang.Enum and will have interesting ways to obtain specific members. I hope that as Hibernate evolves to support these new first-class enumerations, it will also allow their symbolic enumeration constants to be stored in the database, rather than requiring the use of a cryptic integer column as it does today. In an ideal world, it will even be able to take advantage of the native support for enumerations provided by some databases.
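For comparison, under Java 1.5 the bulk of the SourceMedia class would collapse to something like this sketch (using the then-upcoming enum syntax; this is not code Hibernate could persist at the time of writing):

```java
/** Sketch of SourceMedia as a Java 1.5 enum; abbreviated to three constants. */
public enum SourceMedia {
    CASSETTE("Audio Cassette Tape"),
    VINYL("Vinyl Record"),
    CD("Compact Disc");

    /** Human-readable description shown in the user interface. */
    private final String description;

    SourceMedia(String description) {
        this.description = description;
    }

    public String getDescription() {
        return description;
    }
}
```

The compiler supplies name(), valueOf(), and values() for free, which is why all the hand-rolled lookup and readResolve() machinery disappears.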
If you're interested in an alternate approach to persisting typesafe enumerations that can achieve some of these goals today, read on to Chapter 7 in which the mysteries of custom type mapping are explored!
Introduction to Mapping
For information we need to keep around between runs, or share between different programs and systems, relational databases have proven to be hard to beat. They're scalable, reliable, efficient, and extremely flexible. So what we need is a means of taking information from a SQL database and turning it into Java objects, and vice versa.
There are many different ways of doing this, ranging from completely manual database design and coding, to highly automated tools. The general problem is known as Object/Relational Mapping, and Hibernate is a lightweight O/R mapping service for Java.
The 'lightweight' designation means it is designed to be fairly simple to learn and use, and to place reasonable demands on system resources, compared to some of the other available tools. Despite this, it manages to be broadly useful and deep. The designers have done a good job of figuring out the kinds of things that real projects need to accomplish, and supporting them well.
You can use Hibernate in many different ways, depending on what you're starting with. If you've got a database that you need to interact with, there are tools that can analyze the existing schema as a starting point for your mapping, and help you write the Java classes to represent the data. If you've got classes that you want to store in a new database, you can start with the classes, get help building a mapping document, and generate an initial database schema. We'll look at some of these approaches later.
For now, we're going to see how you can start a brand new project, with no existing classes or data, and have Hibernate help you build both. When starting from scratch like this, the most convenient place to begin is in the middle, with an abstract definition of the mapping we're going to make between program objects and the database tables that will store them.
2.1 Writing a Mapping Document
Hibernate uses an XML document to track the mapping between Java classes and relational database tables. This mapping document is designed to be readable and hand-editable. You can also start by using graphical CASE tools (like Together, Rose, or Poseidon) to build UML diagrams representing your data model, and feed these into AndroMDA ( www.andromda.org/ ), turning them into Hibernate mappings.
NOTE
Don't forget that Hibernate and its extensions let you work in other ways, starting with classes or data if you've got them.
We'll write one by hand, showing it's quite practical.
We're going to start by writing a mapping document for tracks, pieces of music that can be listened to individually or as part of an album or play list. To begin with, we'll keep track of the track's title, the path to the file containing the actual music, its playing time, the date on which it was added to the database, and the volume at which it should be played (in case the default volume isn't appropriate because it was recorded at a very different level than other music in the database).
2.1.1 Why do I care?
You might not have any need for a new system to keep track of your music, but the concepts and process involved in setting up this mapping will translate to the projects you actually want to tackle.
2.1.2 How do I do that?
Fire up your favorite text editor, and create the file Track.hbm.xml in the src/com/oreilly/hh directory you set up in the previous chapter. (If you skipped that chapter, you'll need to go back and follow it, because this example relies on the project structure and tools we set up there.) Type in the mapping document as shown in Example 2-1. Or, if you'd rather avoid all that typing, download the code examples from this book's web site, and find the mapping file in the directory for Chapter 2.
Example 2-1. The mapping document for tracks, Track.hbm.xml
1 <?xml version="1.0"?>
2 <!DOCTYPE hibernate-mapping
3 PUBLIC "-//Hibernate/Hibernate Mapping DTD 2.0//EN"
4 "http://hibernate.sourceforge.net/hibernate-mapping-2.0.dtd">
5 <hibernate-mapping>
6
7 <class name="com.oreilly.hh.Track" table="TRACK">
8 <meta attribute="class-description">
9 Represents a single playable track in the music database.
10 @author Jim Elliott (with help from Hibernate)
11 </meta>
12
13 <id name="id" type="int" column="TRACK_ID">
14 <meta attribute="scope-set">protected</meta>
15 <generator class="native"/>
16 </id>
17
18 <property name="title" type="string" not-null="true"/>
19
20 <property name="filePath" type="string" not-null="true"/>
21
22 <property name="playTime" type="time">
23 <meta attribute="field-description">Playing time</meta>
24 </property>
25
26 <property name="added" type="date">
27 <meta attribute="field-description">When the track was created</meta>
28 </property>
29
30 <property name="volume" type="short">
31 <meta attribute="field-description">How loud to play the track</meta>
32 </property>
33
34 </class>
35 </hibernate-mapping>
The first four lines are a required preamble to make this a valid XML document and announce that it conforms to the document type definition used by Hibernate for mappings. The actual mappings are inside the hibernate-mapping tag. Starting at line 7 we're defining a mapping for a single class, com.oreilly.hh.Track, and the name and package of this class are related to the name and location of the file we've created. This relationship isn't necessary; you can define mappings for any number of classes in a single mapping document, and name it and locate it anywhere you want, as long as you tell Hibernate how to find it. The advantage of following the convention of naming the mapping file after the class it maps, and placing it in the same place on the class path as that class, is that this allows Hibernate to automatically locate the mapping when you want to work with the class. This simplifies the configuration and use of Hibernate.
In the opening of the class tag on line 7, we have also specified that this class is stored in a database table named TRACK. The next tag, a meta tag (lines 8-11), doesn't directly affect the mapping. Instead, it provides additional information that can be used by different tools. In this case, by specifying an attribute value of 'class-description,' we are telling the Java code generation tool the JavaDoc text we want associated with the Track class. This is entirely optional, and you'll see the result of including it in the upcoming section, 'Generating Some Class.'
Although databases vary in terms of whether they keep track of the capitalization of table and column names, this book will use the convention of referring to these database entities in all-caps, to help clarify when something being discussed is a database column or table, as opposed to a persistent Java class or property.
The remainder of the mapping sets up the pieces of information we want to keep track of, as properties in the class and their associated columns in the database table. Even though we didn't mention it in the introduction to this example, each track is going to need an id. Following database best practices, we'll use a meaningless surrogate key (a value with no semantic meaning, serving only to identify a specific database row). In Hibernate, the key/id mapping is set up using an id tag (starting at line 13). We're choosing to use an int to store our id in the database column TRACK_ID, which will correspond to the property id in our Track object. This mapping contains another meta tag to communicate with the Java code generator, telling it that the set method for the id property should be protected—there's no need for application code to go changing track IDs.
The generator tag on line 15 configures how Hibernate creates id values for new instances. (Note that it relates to normal O/R mapping operation, not to the Java code generator, which is often not even used; generator is more fundamental than the optional meta tags.) There are a number of different ID generation strategies to choose from, and you can even write your own. In this case, we're telling Hibernate to use whatever is most natural for the underlying database (we'll see later on how it learns what database we're using). In the case of HSQLDB, an identity column is used.
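For example, to use an explicit database sequence instead (a sketch, assuming the underlying database supports sequences; TRACK_SEQ is a hypothetical sequence name), the id mapping would change to:

```xml
<id name="id" type="int" column="TRACK_ID">
  <meta attribute="scope-set">protected</meta>
  <generator class="sequence">
    <param name="sequence">TRACK_SEQ</param>
  </generator>
</id>
```

Sticking with class="native" keeps the mapping portable, since Hibernate picks the appropriate strategy for whatever database it's connected to.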
After the id, we just enumerate the various track properties we care about. The title (line 18) is a string, and it cannot be null. The filePath (line 20) has the same characteristics, while the remainder are allowed to be null: playTime (line 22) is a time, added (line 26) is a date, and volume (line 30) is a short. These last three properties use a new kind of meta attribute, 'field-description,' which specifies JavaDoc text for the individual properties, with some limitations in the current code generator.
NOTE
You may be thinking there's a lot of dense information in this file. That's true, and as you'll see, it can be used to create a bunch of useful project resources.
2.1.3 What just happened?
We took the abstract description of the information about music tracks that we wanted to represent in our Java code and database, and turned it into a rigorous specification in the format that Hibernate can read. Hopefully you'll agree that it's a pretty compact and readable representation of the information. Next we'll look at what Hibernate can actually do with it.
2.2 Generating Some Class
Our mapping contains information about both the database and the Java class between which it maps. We can use it to help us create both. Let's look at the class first.
2.2.1 How do I do that?
To use Hibernate's code generation tool, we'll expand our Ant build file as shown in Example 2-2.
Example 2-2. The Ant build file updated for code generation
1 <project name="Harnessing Hibernate: The Developer's Notebook"
2 default="db" basedir=".">
3 <!-- Set up properties containing important project directories -->
4 <property name="source.root" value="src"/>
5 <property name="class.root" value="classes"/>
6 <property name="lib.dir" value="lib"/>
7 <property name="data.dir" value="data"/>
8
9 <!-- Set up the class path for compilation and execution -->
10 <path id="project.class.path">
11 <!-- Include our own classes, of course -->
12 <pathelement location="${class.root}" />
13 <!-- Include jars in the project library directory -->
14 <fileset dir="${lib.dir}">
15 <include name="*.jar"/>
16 </fileset>
17 </path>
18
19 <target name="db" description="Runs HSQLDB database management UI
20 against the database file--use when application is not running">
21 <java classname="org.hsqldb.util.DatabaseManager"
22 fork="yes">
23 <classpath refid="project.class.path"/>
24 <arg value="-driver"/>
25 <arg value="org.hsqldb.jdbcDriver"/>
26 <arg value="-url"/>
27 <arg value="jdbc:hsqldb:${data.dir}/music"/>
28 <arg value="-user"/>
29 <arg value="sa"/>
30 </java>
31 </target>
32
33 <!-- Teach Ant how to use Hibernate's code generation tool -->
34 <taskdef name="hbm2java"
35 classname="net.sf.hibernate.tool.hbm2java.Hbm2JavaTask"
36 classpathref="project.class.path"/>
37
38 <!-- Generate the java code for all mapping files in our source tree -->
39 <target name="codegen"
40 description="Generate Java source from the O/R mapping files">
41 <hbm2java output="${source.root}">
42 <fileset dir="${source.root}">
43 <include name="**/*.hbm.xml"/>
44 </fileset>
45 </hbm2java>
46 </target>
47
48 </project>
We added a taskdef (task definition) and a new target to the build file. The task definition at line 33 teaches Ant a new trick: it tells Ant how to use the hbm2java tool that is part of the Hibernate Extensions, with the help of a class provided for this purpose. Note that it also specifies the class path to be used when invoking this tool, using the project.class.path definition found earlier in the file.
The codegen target at line 38 uses the new hbm2java task to run Hibernate's code generator on any mapping documents found in the src tree, writing the corresponding Java source. The pattern '**/*.hbm.xml' means 'any file ending in .hbm.xml, within the specified directory, or any subdirectory, however deeply nested.'
Let's try it! From within your top-level project directory (the folder containing build.xml), type the following command:
ant codegen
You should see output like this:
Buildfile: build.xml
codegen:
[hbm2java] Processing 1 files.
[hbm2java] Building hibernate objects
[hbm2java] log4j:WARN No appenders could be found for logger (net.sf.
hibernate.util.DTDEntityResolver).
[hbm2java] log4j:WARN Please initialize the log4j system properly.
The warnings are griping about the fact that we haven't taken the trouble to set up the logging environment that Hibernate expects. We'll see how to do that in the next example. For now, if you look in the directory src/com/oreilly/hh, you'll see that a new file named Track.java has appeared, with the content shown in Example 2-3.
Example 2-3. Code generated from the Track mapping document
1 package com.oreilly.hh;
2
3 import java.io.Serializable;
4 import java.util.Date;
5 import org.apache.commons.lang.builder.EqualsBuilder;
6 import org.apache.commons.lang.builder.HashCodeBuilder;
7 import org.apache.commons.lang.builder.ToStringBuilder;
8
9 /**
10 * Represents a single playable track in the music database.
11 * @author Jim Elliott (with help from Hibernate)
12 *
13 */
14 public class Track implements Serializable {
15
16 /** identifier field */
17 private Integer id;
18
19 /** persistent field */
20 private String title;
21
22 /** persistent field */
23 private String filePath;
24
25 /** nullable persistent field */
26 private Date playTime;
27
28 /** nullable persistent field */
29 private Date added;
30
31 /** nullable persistent field */
32 private short volume;
33
34 /** full constructor */
35 public Track(String title, String filePath, Date playTime,
Date added, short volume) {
36 this.title = title;
37 this.filePath = filePath;
38 this.playTime = playTime;
39 this.added = added;
40 this.volume = volume;
41 }
42
43 /** default constructor */
44 public Track() {
45 }
46
47 /** minimal constructor */
48 public Track(String title, String filePath) {
49 this.title = title;
50 this.filePath = filePath;
51 }
52
53 public Integer getId() {
54 return this.id;
55 }
56
57 protected void setId(Integer id) {
58 this.id = id;
59 }
60
61 public String getTitle() {
62 return this.title;
63 }
64
65 public void setTitle(String title) {
66 this.title = title;
67 }
68
69 public String getFilePath() {
70 return this.filePath;
71 }
72
73 public void setFilePath(String filePath) {
74 this.filePath = filePath;
75 }
76
77 /**
78 * Playing time
79 */
80 public Date getPlayTime() {
81 return this.playTime;
82 }
83
84 public void setPlayTime(Date playTime) {
85 this.playTime = playTime;
86 }
87
88 /**
89 * When the track was created
90 */
91 public Date getAdded() {
92 return this.added;
93 }
94
95 public void setAdded(Date added) {
96 this.added = added;
97 }
98
99 /**
100 * How loud to play the track
101 */
102 public short getVolume() {
103 return this.volume;
104 }
105
106 public void setVolume(short volume) {
107 this.volume = volume;
108 }
109
110 public String toString() {
111 return new ToStringBuilder(this)
112 .append("id", getId())
113 .toString();
114 }
115
116 public boolean equals(Object other) {
117 if ( !(other instanceof Track) ) return false;
118 Track castOther = (Track) other;
119 return new EqualsBuilder()
120 .append(this.getId(), castOther.getId())
121 .isEquals();
122 }
123
124 public int hashCode() {
125 return new HashCodeBuilder()
126 .append(getId())
127 .toHashCode();
128 }
129
130 }
2.2.2 What just happened?
Ant found all files in our source tree ending in .hbm.xml (just one, so far) and fed it to the Hibernate code generator, which analyzed it, and wrote a Java class meeting the specifications we provided for the Track mapping.
NOTE
That can save a lot of time and fairly repetitive activity. I could get used to it.
You may find it worthwhile to compare the generated Java source with the mapping specification from which it arose (Example 2-1). The source starts out with the proper package declaration, which is easy for hbm2java to figure out from the fully qualified class name required in the mapping file. There are a couple of imports to make the source more readable. The three potentially unfamiliar entries (lines 5-7) are utilities from the Jakarta Commons project that help in the creation of correctly implemented and useful toString(), equals(), and hashCode() methods.
The class-level JavaDoc at line 10 should look familiar, since it comes right from the 'class-description' meta tag in our mapping document. The field declarations are derived from the id (line 17) and property (lines 20-32) tags defined in the mapping. The Java types used are derived from the property types in the mapping document. We'll delve into the full set of value types supported by Hibernate later on. For now, the relationship between the types in the mapping document and the Java types used in the generated code should be fairly clear.
One curious detail is that an Integer wrapper has been used for id, while volume is declared as a simple, unwrapped short. Why the difference? It relates to the fact that the ID/key property has many important roles to play in the O/R mapping process (which is why it gets a special XML tag in the mapping document, rather than being just another property). Although we left it out in our specification, one of the choices you need to make when setting up an ID is to pick a special value to indicate that a particular instance has not yet been saved into the database. Leaving out this unsaved-value attribute, as we did, tells Hibernate to use its default interpretation, which is that unsaved values are indicated by an ID of null. Since native int values can't be null, they must be wrapped in a java.lang.Integer, and Hibernate took care of this for us.
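For instance, a hypothetical variant of our id mapping (not part of this chapter's actual example) could declare an explicit sentinel, so that null is no longer needed to mean "not yet saved":

```xml
<!-- Hypothetical alternative to the id mapping in Example 2-1: with an
     explicit unsaved-value sentinel of -1, the generated field could stay
     a primitive int rather than an Integer wrapper. -->
<id name="id" type="int" column="TRACK_ID" unsaved-value="-1">
    <meta attribute="scope-set">protected</meta>
    <generator class="native"/>
</id>
```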
When it comes to the volume property, Hibernate has no special need or use for it, so it trusts us to know what we're doing. If we want to be able to store null values for volume, perhaps to indicate 'no change,' we need to explicitly use java.lang.Short rather than short in our mapping document. (Had we not been sneakily pointing out this difference, our example would be better off explicitly using java.lang.Integer in our ID mapping too, just for clarity.)
NOTE
I know, I'm a perfectionist. I only bother to pick nits because I think Hibernate is so useful!
Another thing you might notice about these field declarations is that their JavaDoc is quite generic—you may be wondering what happened to the 'field-description' meta tags we put in the mapping document for playTime, added, and volume. It turns out they appear only later, in the JavaDoc for the getter methods. They are not used in the setters, in the actual field declarations, or as @param entries for the constructor. As an avid user of a code-completing Java editor, I count on pop-up JavaDoc as I fill in arguments to method calls, so I'm a little disappointed by this limitation. Of course, since this is an open source project, any of us can get involved and propose or undertake this simple fix. Indeed, you may find this already remedied by the time you read this book. Once robust field and parameter documentation is in place, I'd definitely advocate always providing a brief but accurate field-description entry for your properties.
After the field declarations come a trio of constructors. The first (line 35) establishes values for all properties, the second (line 44) allows instantiation without any arguments (this is required if you want the class to be usable as a bean, such as on a Java Server Page, a very common use for data classes like this), and the last (line 48) fills in just the values we've indicated must not be null. Notice that none of the constructors set the value of id; this is the responsibility of Hibernate when we get the object out of the database, or insert it for the first time.
Consistent with that, the setId() method on line 57 is protected, as requested in our id mapping. The rest of the getters and setters are not surprising; this is all pretty much boilerplate code (which we've all written too many times), which is why it's so nice to be able to have the Hibernate extensions generate it for us.
If you want to use Hibernate's generated code as a starting point and then add some business logic or other features to the generated class, be aware that all your changes will be silently discarded the next time you run the code generator. In such a project you will want to be sure the hand-tweaked classes are not regenerated by any Ant build target.
Even though we're having Hibernate generate our data classes in this example, it's important to point out that the getters and setters it creates are more than a nice touch. You need to put these in your persistent classes for any properties you want to persist, since Hibernate's fundamental persistence architecture is based on reflective access to JavaBeans™-style properties. They don't need to be public if you don't want them to be; Hibernate has ways of getting at even properties declared protected or private, but they do need accessor methods. Think of it as enforcing good object design; the Hibernate team wants to keep the implementation details of actual instance variables cleanly separated from the persistence mechanism.
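A stripped-down sketch of that accessor pattern (a hypothetical class of our own, much simpler than the generated Example 2-3 code) looks like this:

```java
// Minimal, hypothetical sketch of the JavaBeans-style accessors Hibernate
// relies on for each persistent property.
class TrackSketch {
    private Integer id;      // wrapper type, so null can mean "not yet saved"
    private String title;

    public Integer getId() { return id; }

    // Protected, as requested in the id mapping: Hibernate can still reach
    // it reflectively, but ordinary client code cannot reassign the key.
    protected void setId(Integer id) { this.id = id; }

    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
}
```

The fields stay private; persistence goes through the accessors, which is exactly the separation the Hibernate team is encouraging.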
2.3 Cooking Up a Schema
That was pretty easy, wasn't it? You'll be happy to learn that creating database tables is a very similar process. As with code generation, you've already done most of the work in coming up with the mapping document. All that's left is to set up and run the schema generation tool. Begin by creating a file named hibernate.properties in the src directory, with the content shown in Example 2-4.
Example 2-4. Setting up hibernate.properties
hibernate.dialect=net.sf.hibernate.dialect.HSQLDialect
hibernate.connection.driver_class=org.hsqldb.jdbcDriver
hibernate.connection.url=jdbc:hsqldb:data/music
hibernate.connection.username=sa
hibernate.connection.password=
In addition to establishing the SQL dialect we are using, this tells Hibernate how to establish a connection to the database using the JDBC driver that ships as part of the HSQLDB database JAR archive, and that the data should live in the data directory we've created—in the database named music. The username and empty password (indeed, all these values) should be familiar from the experiment we ran at the end of Chapter 1.
Notice that we're using a relative path to specify the database filename. This works fine in our examples—we're using Ant to control the working directory. If you copy this for use in a web application or other environment, though, you'll likely need to be more explicit about the location of the file.
You can put the properties file in other places, and give it other names, or use entirely different ways of getting the properties into Hibernate, but this is the default place it will look, so it's the path of least resistance (or, I guess, least runtime configuration).
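Under the hood, hibernate.properties is an ordinary Java properties file. The following sketch (our own illustration, not Hibernate code) shows the same key=value format being parsed with java.util.Properties, which is how such files behave:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// Illustration only: parse the key=value format used by hibernate.properties.
// Hibernate itself loads the real file from the root of the class path.
class HibernatePropsSketch {
    static String lookup(String fileText, String key) {
        Properties props = new Properties();
        try {
            props.load(new StringReader(fileText));
        } catch (IOException e) {
            throw new RuntimeException(e); // cannot happen with an in-memory reader
        }
        return props.getProperty(key);
    }
}
```

Note that a line like `hibernate.connection.password=` yields an empty string, not a missing key, which is why the blank password in Example 2-4 is legitimate.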
We also need to add some new pieces to our build file, shown in Example 2-5. This is a somewhat substantial addition, because we need to compile our Java source in order to use the schema generation tool, which relies on reflection to get its details right. Add these targets right before the closing </project> tag at the end of build.xml.
Example 2-5. Ant build file additions for compilation and schema generation
1 <!-- Create our runtime subdirectories and copy resources into them -->
2 <target name="prepare" description="Sets up build structures">
3 <mkdir dir="${class.root}"/>
4
5 <!-- Copy our property files and O/R mappings for use at runtime -->
6 <copy todir="${class.root}" >
7 <fileset dir="${source.root}" >
8 <include name="**/*.properties"/>
9 <include name="**/*.hbm.xml"/>
10 </fileset>
11 </copy>
12 </target>
13
14 <!-- Compile the java source of the project -->
15 <target name="compile" depends="prepare"
16 description="Compiles all Java classes">
17 <javac srcdir="${source.root}"
18 destdir="${class.root}"
19 debug="on"
20 optimize="off"
21 deprecation="on">
22 <classpath refid="project.class.path"/>
23 </javac>
24 </target>
25
26 <!-- Generate the schemas for all mapping files in our class tree -->
27 <target name="schema" depends="compile"
28 description="Generate DB schema from the O/R mapping files">
29
30 <!-- Teach Ant how to use Hibernate's schema generation tool -->
31 <taskdef name="schemaexport"
32 classname="net.sf.hibernate.tool.hbm2ddl.SchemaExportTask"
33 classpathref="project.class.path"/>
34
35 <schemaexport properties="${class.root}/hibernate.properties"
36 quiet="no" text="no" drop="no" delimiter=";">
37 <fileset dir="${class.root}">
38 <include name="**/*.hbm.xml"/>
39 </fileset>
40 </schemaexport>
41 </target>
First we add a prepare target that is intended to be used by other targets more than from the command line. Its purpose is to create, if necessary, the classes directory into which we're going to compile, and then copy any properties and mapping files found in the src directory hierarchy to corresponding directories in the classes hierarchy. This hierarchical copy operation (using the special '**/*' pattern) is a nice feature of Ant, enabling us to define and edit resources alongside the source files that use them, while making those resources available at runtime via the class loader.
The aptly named compile target at line 14 uses the built-in javac task to compile all the Java source files found in the src tree to the classes tree. Happily, this task also supports the project class path we've set up, so the compiler can find all the libraries we're using. The depends="prepare" attribute in the target definition tells Ant that before running the compile target, prepare must be run. Ant manages dependencies so that when you're building multiple targets with related dependencies, they are executed in the right order, and each dependency gets executed only once, even if it is mentioned by multiple targets.
If you're accustomed to using shell scripts to compile a lot of Java source, you'll be surprised by how quickly the compilation happens. Ant invokes the Java compiler within the same virtual machine that it is using, so there is no process startup delay for each compilation.
Finally, after all this groundwork, we can write the target we really wanted to! The schema target (line 26) depends on compile, so all our Java classes will be compiled and available for inspection when the schema generator runs. It uses taskdef internally at line 31 to define the schemaexport task that runs the Hibernate schema export tool, in the same way we provided access to the code generation tool at the top of the file. It then invokes this tool and tells it to generate the database schema associated with any mapping documents found in the classes tree.
There are a number of parameters you can give the schema export tool to configure the way it works. In this example (at line 35) we're telling it to display the SQL it runs so we can watch what it's doing (quiet="no"), to interact with the database and actually build the schema rather than just writing out a DDL file we could import later (text="no"), and to go beyond merely dropping the existing tables (drop="no"). For more details about these and other configuration options, consult the Hibernate reference manual.
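As a sketch of one alternative (the output attribute here is our assumption about the task's attribute set; verify it against the reference manual for your Hibernate version), you could capture the DDL in a file for hand-editing instead of touching the database:

```xml
<!-- Hypothetical variant: text="yes" writes the generated SQL rather than
     executing it, and output (assumed attribute) names the file to receive it. -->
<schemaexport properties="${class.root}/hibernate.properties"
              quiet="yes" text="yes" drop="no" delimiter=";"
              output="schema.sql">
  <fileset dir="${class.root}">
    <include name="**/*.hbm.xml"/>
  </fileset>
</schemaexport>
```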
You may be wondering why the taskdef for the schema update tool is inside our schema target, rather than at the top of the build file, next to the one for hbm2java. Well, I wanted it up there too, but I ran into a snag that's worth explaining. I got strange error messages the first time I tried to build the schema target, complaining there was no hibernate.properties on the class path and our compiled Track class couldn't be found. When I ran it again, it worked. Some detective work using ant -verbose revealed that if the classes directory didn't exist when the taskdef was encountered, Ant helpfully removed it from the class path. Since a taskdef can't have its own dependencies, the solution is to move it into the schema target, giving it the benefit of that target's dependencies, ensuring the classes directory exists by the time the taskdef is processed.
With these additions, we're ready to generate the schema for our TRACK table.
You might think the drop="no" setting in our schema task means you can use it to update the schema—it won't drop the tables, right? Alas, this is a misleading parameter name: it means it won't just drop the tables, rather it will go ahead and generate the schema after dropping them. Much as you want to avoid the codegen task after making any changes to the generated Java source, you mustn't export the schema if you've put any data into the database. Luckily, there is another tool you can use for incremental schema updates that works much the same way, as long as your JDBC driver is powerful enough. This SchemaUpdate tool can be used with an Ant taskdef too.
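Sketched under the assumption that your Hibernate 2 distribution ships net.sf.hibernate.tool.hbm2ddl.SchemaUpdateTask (check the tool jar and reference manual), an incremental-update target might look like this:

```xml
<!-- Hypothetical target: update an existing schema in place instead of
     dropping and recreating it. Attribute names follow the schemaexport
     pattern; consult the reference manual for the exact supported set. -->
<target name="schemaupdate" depends="compile"
        description="Update DB schema without dropping tables">
  <taskdef name="schemaupdate"
           classname="net.sf.hibernate.tool.hbm2ddl.SchemaUpdateTask"
           classpathref="project.class.path"/>
  <schemaupdate properties="${class.root}/hibernate.properties" quiet="no">
    <fileset dir="${class.root}">
      <include name="**/*.hbm.xml"/>
    </fileset>
  </schemaupdate>
</target>
```

The taskdef lives inside the target for the same class path reason discussed above.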
Because we've asked the schema export task not to be 'quiet,' it's going to generate some log entries for us. In order for that to work, we need to configure log4j, the logging environment used by Hibernate. The easiest way to do this is to make a log4j.properties file available at the root of the class path. We can take advantage of our existing prepare target to copy this from the src to the classes directory at the same time it copies Hibernate's properties. Create a file named log4j.properties in the src directory with the content shown in Example 2-6. An easy way to do this is to copy the file out of the src directory in the Hibernate distribution you downloaded, since it's provided for use by their own examples. If you're typing it in yourself, you can skip the blocks that are commented out; they are provided to suggest useful logging alternatives.
Example 2-6. The logging configuration file, log4j.properties
### direct log messages to stdout ###
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
### direct messages to file hibernate.log ###
#log4j.appender.file=org.apache.log4j.FileAppender
#log4j.appender.file.File=hibernate.log
#log4j.appender.file.layout=org.apache.log4j.PatternLayout
#log4j.appender.file.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
### set log levels - for more verbose logging change 'info' to 'debug' ###
log4j.rootLogger=warn, stdout
log4j.logger.net.sf.hibernate=info
### log just the SQL
#log4j.logger.net.sf.hibernate.SQL=debug
### log JDBC bind parameters ###
log4j.logger.net.sf.hibernate.type=info
### log schema export/update ###
log4j.logger.net.sf.hibernate.tool.hbm2ddl=debug
### log cache activity ###
#log4j.logger.net.sf.hibernate.cache=debug
### enable the following line if you want to track down connection ###
### leakages when using DriverManagerConnectionProvider ###
#log4j.logger.net.sf.hibernate.connection.DriverManagerConnectionProvider=trace
With the log configuration in place, you might want to edit the codegen target in build.xml so that it, too, depends on our new prepare target. This will ensure logging is configured whenever we use it, preventing the warnings we saw when first running it. As noted in the tip about class paths and task definitions in the previous section, though, to make it work the very first time you'll have to move the taskdef for hbm2java inside the codegen target, in the same way we put schemaexport inside the schema target.
Time to make a schema! From the project directory, execute the command ant schema. You'll see output similar to Example 2-7 as the classes directory is created and populated with resources, the Java source is compiled,[2.1] and the schema generator is run.
[2.1] We're assuming you've already generated the code shown in Example 2-3, or there won't be any Java source to compile, and the schema generation will fail. The schema target doesn't invoke codegen to automatically generate code, in case you've manually extended any of your generated classes.
Example 2-7. Output from building the schema using HSQLDB's embedded database server
% ant schema
Buildfile: build.xml
prepare:
[mkdir] Created dir: /Users/jim/Documents/Work/OReilly/Hibernate/Examples/
ch02/classes
[copy] Copying 3 files to /Users/jim/Documents/Work/OReilly/Hibernate/
Examples/ch02/classes
compile:
[javac] Compiling 1 source file to /Users/jim/Documents/Work/OReilly/
Hibernate/Examples/ch02/classes
schema:
[schemaexport] 23:50:36,165 INFO Environment:432 - Hibernate 2.1.1
[schemaexport] 23:50:36,202 INFO Environment:466 - loaded properties from
resource hibernate.properties: {hibernate.connection.username=sa, hibernate.
connection.password=, hibernate.cglib.use_reflection_optimizer=true, hibernate.
dialect=net.sf.hibernate.dialect.HSQLDialect, hibernate.connection.url=jdbc:
hsqldb:data/music, hibernate.connection.driver_class=org.hsqldb.jdbcDriver}
[schemaexport] 23:50:36,310 INFO Environment:481 - using CGLIB reflection
optimizer
[schemaexport] 23:50:36,384 INFO Configuration:166 - Mapping file: /Users/jim/
Documents/Work/OReilly/Hibernate/Examples/ch02/classes/com/oreilly/hh/Track.hbm.
xml
[schemaexport] 23:50:37,409 INFO Binder:225 - Mapping class: com.oreilly.hh.
Track -> TRACK
[schemaexport] 23:50:37,928 INFO Dialect:82 - Using dialect: net.sf.hibernate.
dialect.HSQLDialect
[schemaexport] 23:50:37,942 INFO Configuration:584 - processing one-to-many
association mappings
[schemaexport] 23:50:37,947 INFO Configuration:593 - processing one-to-one
association property references
[schemaexport] 23:50:37,956 INFO Configuration:618 - processing foreign key
constraints
[schemaexport] 23:50:38,113 INFO Configuration:584 - processing one-to-many
association mappings
[schemaexport] 23:50:38,124 INFO Configuration:593 - processing one-to-one
association property references
[schemaexport] 23:50:38,132 INFO Configuration:618 - processing foreign key
constraints
[schemaexport] 23:50:38,149 INFO SchemaExport:98 - Running hbm2ddl schema export
[schemaexport] 23:50:38,154 INFO SchemaExport:117 - exporting generated schema
to database
[schemaexport] 23:50:38,232 INFO DriverManagerConnectionProvider:41 - Using
Hibernate built-in connection pool (not for production use!)
[schemaexport] 23:50:38,238 INFO DriverManagerConnectionProvider:42 - Hibernate
connection pool size: 20
[schemaexport] 23:50:38,278 INFO DriverManagerConnectionProvider:71 - using
driver: org.hsqldb.jdbcDriver at URL: jdbc:hsqldb:data/music
[schemaexport] 23:50:38,283 INFO DriverManagerConnectionProvider:72 -connection
properties: {user=sa, password=}
[schemaexport] drop table TRACK if exists
[schemaexport] 23:50:39,083 DEBUG SchemaExport:132 - drop table TRACK if exists
[schemaexport] create table TRACK (
[schemaexport] TRACK_ID INTEGER NOT NULL IDENTITY,
[schemaexport] title VARCHAR(255) not null,
[schemaexport] filePath VARCHAR(255) not null,
[schemaexport] playTime TIME,
[schemaexport] added DATE,
[schemaexport] volume SMALLINT
[schemaexport] )
[schemaexport] 23:50:39,113 DEBUG SchemaExport:149 - create table TRACK (
[schemaexport] TRACK_ID INTEGER NOT NULL IDENTITY,
[schemaexport] title VARCHAR(255) not null,
[schemaexport] filePath VARCHAR(255) not null,
[schemaexport] playTime TIME,
[schemaexport] added DATE,
[schemaexport] volume SMALLINT
[schemaexport] )
[schemaexport] 23:50:39,142 INFO SchemaExport:160 - schema export complete
[schemaexport] 23:50:39,178 INFO DriverManagerConnectionProvider:137 - cleaning
up connection pool: jdbc:hsqldb:data/music
BUILD SUCCESSFUL
Total time: 10 seconds
Toward the end of the schemaexport section you can see the actual SQL used by Hibernate to create the TRACK table. If you look at the start of the music.script file in the data directory, you'll see it's been incorporated into the database. For a slightly more friendly (and perhaps convincing) way to see it, execute ant db to fire up the HSQLDB graphical interface, as shown in Figure 2-1.
Figure 2-1. The database interface with our new TRACK table expanded, and a query
2.3.2 What just happened?
We were able to use Hibernate to create a data table in which we can persist instances of the Java class it created for us. We didn't have to type a single line of SQL or Java! Of course, our table is still empty at this point. Let's change that! The next chapter will look at the stuff you probably most want to see: using Hibernate from within a Java program to turn objects into database entries and vice versa.
NOTE
It's about time? Yeah, I suppose. But at least you didn't have to figure out all these steps from scratch!
Before diving into that cool task, it's worth taking a moment to reflect on how much we've been able to accomplish with a couple of XML and properties files. Hopefully you're starting to see the power and convenience that make Hibernate so exciting.
2.4 Connecting Hibernate to MySQL
Our examples so far have used HSQLDB's embedded database engine. If you'd like to try Hibernate against a full database server such as MySQL, the first step is to create a database for it to use, as shown in Example 2-8.
Example 2-8. Setting up the MySQL database notebook_db as a Hibernate playground
% mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 764 to server version: 3.23.44-Max-log
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> CREATE DATABASE notebook_db;
Query OK, 1 row affected (0.00 sec)
mysql> GRANT ALL ON notebook_db.* TO jim IDENTIFIED BY "s3cret";
Query OK, 0 rows affected (0.20 sec)
mysql> quit;
Bye
NOTE
Hopefully you'll use a less guessable password than this in your real databases!
Make a note of the database name you create, as well as the username and password that can access it. These will need to be entered into hibernate.properties, as shown in Example 2-9.
Next, you'll need a JDBC driver capable of connecting to MySQL. If you're already using MySQL for your Java projects, you'll have one. Otherwise, you can download Connector/J from www.mysql.com/downloads/api-jdbc-stable.html. However you obtain it, copy the driver library jar (which will be named something like mysql-connector-java-3.0.10-stable-bin.jar) to your project's lib directory alongside the HSQLDB, Hibernate, and other libraries that are already there. It's fine to have drivers for several different databases available to your code; they won't conflict with each other, since the configuration file specifies which driver class to use.
Speaking of which, it's time to edit hibernate.properties to use the new driver and database we've just made available. Example 2-9 shows how it is set up to connect to my MySQL instance using the database created in Example 2-8. You'll need to tweak these values to correspond to your own server, database, and the login credentials you chose. (If you're using MM.MySQL, the older incarnation of the MySQL JDBC driver, the driver_class will need to be org.gjt.mm.mysql.Driver.)
Example 2-9. Changes to hibernate.properties to connect to the new MySQL database
hibernate.dialect=net.sf.hibernate.dialect.MySQLDialect
hibernate.connection.driver_class=com.mysql.jdbc.Driver
hibernate.connection.url=jdbc:mysql://slant.reseune.pvt/notebook_db
hibernate.connection.username=jim
hibernate.connection.password=s3cret
The URL on the third line will need to reflect your server; you won't be able to resolve my private internal domain name, let alone route to it.
Once this is all set, you can rerun the schema creation example that was set up in the previous section. This time it will build the schema on your MySQL server rather than in the embedded HSQLDB world. You'll see output like that in Example 2-10.
Example 2-10. Schema creation when connecting to MySQL
% ant schema
Buildfile: build.xml
prepare:
compile:
schema:
[schemaexport] 23:02:13,614 INFO Environment:462 - Hibernate 2.1.2
[schemaexport] 23:02:13,659 INFO Environment:496 - loaded properties from
resource hibernate.properties: {hibernate.connection.username=jim, hibernate.
connection.password=s3cret, hibernate.cglib.use_reflection_optimizer=true,
hibernate.dialect=net.sf.hibernate.dialect.MySQLDialect, hibernate.connection.
url=jdbc:mysql://slant.reseune.pvt/notebook_db, hibernate.connection.driver_
class=com.mysql.jdbc.Driver}
[schemaexport] 23:02:13,711 INFO Environment:519 - using CGLIB reflection
optimizer
[schemaexport] 23:02:13,819 INFO Configuration:166 - Mapping file: /Users/jim/
Documents/Work/OReilly/Hibernate/Examples/ch02/classes/com/oreilly/hh/Track.hbm.xml
[schemaexport] 23:02:15,568 INFO Binder:229 - Mapping class: com.oreilly.hh.
Track -> TRACK
[schemaexport] 23:02:16,164 INFO Dialect:82 - Using dialect: net.sf.hibernate.
dialect.MySQLDialect
[schemaexport] 23:02:16,175 INFO Configuration:595 - processing one-to-many
association mappings
[schemaexport] 23:02:16,188 INFO Configuration:604 - processing one-to-one
association property references
[schemaexport] 23:02:16,209 INFO Configuration:629 - processing foreign key
constraints
[schemaexport] 23:02:16,429 INFO Configuration:595 - processing one-to-many
association mappings
[schemaexport] 23:02:16,436 INFO Configuration:604 - processing one-to-one
association property references
[schemaexport] 23:02:16,440 INFO Configuration:629 - processing foreign key
constraints
[schemaexport] 23:02:16,470 INFO SchemaExport:98 - Running hbm2ddl schema export
[schemaexport] 23:02:16,488 INFO SchemaExport:117 - exporting generated schema
to database
[schemaexport] 23:02:16,543 INFO DriverManagerConnectionProvider:41 - Using
Hibernate built-in connection pool (not for production use!)
[schemaexport] 23:02:16,549 INFO DriverManagerConnectionProvider:42 - Hibernate
connection pool size: 20
[schemaexport] 23:02:16,583 INFO DriverManagerConnectionProvider:71 - using
driver: com.mysql.jdbc.Driver at URL: jdbc:mysql://slant.reseune.pvt/notebook_db
[schemaexport] 23:02:16,597 INFO DriverManagerConnectionProvider:72 -connection
properties: {user=jim, password=s3cret}
[schemaexport] drop table if exists TRACK
[schemaexport] 23:02:18,129 DEBUG SchemaExport:132 - drop table if exists TRACK
[schemaexport] create table TRACK (
[schemaexport] TRACK_ID INTEGER NOT NULL AUTO_INCREMENT,
[schemaexport] title VARCHAR(255) not null,
[schemaexport] filePath VARCHAR(255) not null,
[schemaexport] playTime TIME,
[schemaexport] added DATE,
[schemaexport] volume SMALLINT,
[schemaexport] primary key (TRACK_ID)
[schemaexport] )
[schemaexport] 23:02:18,181 DEBUG SchemaExport:149 - create table TRACK (
[schemaexport] TRACK_ID INTEGER NOT NULL AUTO_INCREMENT,
[schemaexport] title VARCHAR(255) not null,
[schemaexport] filePath VARCHAR(255) not null,
[schemaexport] playTime TIME,
[schemaexport] added DATE,
[schemaexport] volume SMALLINT,
[schemaexport] primary key (TRACK_ID)
[schemaexport] )
[schemaexport] 23:02:18,311 INFO SchemaExport:160 - schema export complete
[schemaexport] 23:02:18,374 INFO DriverManagerConnectionProvider:137 - cleaning
up connection pool: jdbc:mysql://slant.reseune.pvt/notebook_db
BUILD SUCCESSFUL
Total time: 9 seconds
Example 2-11. Checking the newly created MySQL schema
% mysql -u jim -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 772 to server version: 3.23.44-Max-log
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> USE notebook_db
Database changed
mysql> SHOW TABLES;
+-----------------------+
| Tables_in_notebook_db |
+-----------------------+
| TRACK |
+-----------------------+
1 row in set (0.03 sec)
mysql> DESCRIBE TRACK;
+--------------+---------------+-------+-------+----------+----------------+
| Field | Type | Null | Key | Default | Extra |
+--------------+---------------+-------+-------+----------+----------------+
| TRACK_ID | int(11) | | PRI | NULL | auto_increment |
| title | varchar(255) | | | | |
| filePath | varchar(255) | | | | |
| playTime | time | YES | | NULL | |
| added | date | YES | | NULL | |
| volume | smallint(6) | YES | | NULL | |
+--------------+---------------+-------+-------+----------+----------------+
6 rows in set (0.02 sec)
mysql> SELECT * FROM TRACK;
Empty set (0.00 sec)
mysql> quit;
Bye
It's not surprising to find the table empty. We'll investigate how to populate it with data in the first part of Chapter 3.
If you've followed this example and set up a MySQL database, and you'd prefer to continue working with it throughout the rest of the book, feel free to do so, but bear in mind you'll need to know how to look at the results of the examples yourself. The text will assume you're still working with HSQLDB, and it will show you how to check your progress in that context. You will also see slight differences in the schema, as databases all have slightly different column types and features. Apart from these minor details, it really makes no difference what database you're using—that's part of the appeal of an O/R mapping layer like Hibernate.
There are many different ways of doing this, ranging from completely manual database design and coding, to highly automated tools. The general problem is known as Object/Relational Mapping, and Hibernate is a lightweight O/R mapping service for Java.
The 'lightweight' designation means it is designed to be fairly simple to learn and use, and to place reasonable demands on system resources, compared to some of the other available tools. Despite this, it manages to be broadly useful and deep. The designers have done a good job of figuring out the kinds of things that real projects need to accomplish, and supporting them well.
You can use Hibernate in many different ways, depending on what you're starting with. If you've got a database that you need to interact with, there are tools that can analyze the existing schema as a starting point for your mapping, and help you write the Java classes to represent the data. If you've got classes that you want to store in a new database, you can start with the classes, get help building a mapping document, and generate an initial database schema. We'll look at some of these approaches later.
For now, we're going to see how you can start a brand new project, with no existing classes or data, and have Hibernate help you build both. When starting from scratch like this, the most convenient place to begin is in the middle, with an abstract definition of the mapping we're going to make between program objects and the database tables that will store them.
2.1 Writing a Mapping Document
Hibernate uses an XML document to track the mapping between Java classes and relational database tables. This mapping document is designed to be readable and hand-editable. You can also start by using graphical CASE tools (like Together, Rose, or Poseidon) to build UML diagrams representing your data model, and feed these into AndroMDA (www.andromda.org/), turning them into Hibernate mappings.
NOTE
Don't forget that Hibernate and its extensions let you work in other ways, starting with classes or data if you've got them.
We'll write one by hand, showing it's quite practical.
We're going to start by writing a mapping document for tracks, pieces of music that can be listened to individually or as part of an album or play list. To begin with, we'll keep track of the track's title, the path to the file containing the actual music, its playing time, the date on which it was added to the database, and the volume at which it should be played (in case the default volume isn't appropriate because it was recorded at a very different level than other music in the database).
2.1.1 Why do I care?
You might not have any need for a new system to keep track of your music, but the concepts and process involved in setting up this mapping will translate to the projects you actually want to tackle.
2.1.2 How do I do that?
Fire up your favorite text editor, and create the file Track.hbm.xml in the src/com/oreilly/hh directory you set up in the previous chapter. (If you skipped that chapter, you'll need to go back and follow it, because this example relies on the project structure and tools we set up there.) Type in the mapping document as shown in Example 2-1. Or, if you'd rather avoid all that typing, download the code examples from this book's web site, and find the mapping file in the directory for Chapter 2.
Example 2-1. The mapping document for tracks, Track.hbm.xml
1 <?xml version="1.0"?>
2 <!DOCTYPE hibernate-mapping
3 PUBLIC "-//Hibernate/Hibernate Mapping DTD 2.0//EN"
4 "http://hibernate.sourceforge.net/hibernate-mapping-2.0.dtd">
5 <hibernate-mapping>
6
7 <class name="com.oreilly.hh.Track" table="TRACK">
8 <meta attribute="class-description">
9 Represents a single playable track in the music database.
10 @author Jim Elliott (with help from Hibernate)
11 </meta>
12
13 <id name="id" type="int" column="TRACK_ID">
14 <meta attribute="scope-set">protected</meta>
15 <generator class="native"/>
16 </id>
17
18 <property name="title" type="string" not-null="true"/>
19
20 <property name="filePath" type="string" not-null="true"/>
21
22 <property name="playTime" type="time">
23 <meta attribute="field-description">Playing time</meta>
24 </property>
25
26 <property name="added" type="date">
27 <meta attribute="field-description">When the track was created</meta>
28 </property>
29
30 <property name="volume" type="short">
31 <meta attribute="field-description">How loud to play the track</meta>
32 </property>
33
34 </class>
35 </hibernate-mapping>
The first four lines are a required preamble to make this a valid XML document and announce that it conforms to the document type definition used by Hibernate for mappings. The actual mappings are inside the hibernate-mapping tag. Starting at line 7 we're defining a mapping for a single class, com.oreilly.hh.Track, and the name and package of this class are related to the name and location of the file we've created. This relationship isn't necessary; you can define mappings for any number of classes in a single mapping document, and name it and locate it anywhere you want, as long as you tell Hibernate how to find it. The advantage of following the convention of naming the mapping file after the class it maps, and placing it in the same place on the class path as that class, is that this allows Hibernate to automatically locate the mapping when you want to work with the class. This simplifies the configuration and use of Hibernate.
In the opening of the class tag on line 7, we have also specified that this class is stored in a database table named TRACK. The next tag, a meta tag (lines 8-11), doesn't directly affect the mapping. Instead, it provides additional information that can be used by different tools. In this case, by specifying an attribute value of 'class-description,' we are telling the Java code generation tool the JavaDoc text we want associated with the Track class. This is entirely optional, and you'll see the result of including it in the upcoming section, 'Generating Some Class.'
Although databases vary in terms of whether they keep track of the capitalization of table and column names, this book will use the convention of referring to these database entities in all-caps, to help clarify when something being discussed is a database column or table, as opposed to a persistent Java class or property.
The remainder of the mapping sets up the pieces of information we want to keep track of, as properties in the class and their associated columns in the database table. Even though we didn't mention it in the introduction to this example, each track is going to need an id. Following database best practices, we'll use a meaningless surrogate key (a value with no semantic meaning, serving only to identify a specific database row). In Hibernate, the key/id mapping is set up using an id tag (starting at line 13). We're choosing to use an int to store our id in the database column TRACK_ID, which will correspond to the property id in our Track object. This mapping contains another meta tag to communicate with the Java code generator, telling it that the set method for the id property should be protected—there's no need for application code to go changing track IDs.
The generator tag on line 15 configures how Hibernate creates id values for new instances. (Note that it relates to normal O/R mapping operation, not to the Java code generator, which is often not even used; generator is more fundamental than the optional meta tags.) There are a number of different ID generation strategies to choose from, and you can even write your own. In this case, we're telling Hibernate to use whatever is most natural for the underlying database (we'll see later on how it learns what database we're using). In the case of HSQLDB, an identity column is used.
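If native generation doesn't suit your environment, you can name a strategy explicitly instead. Here's a hedged sketch using the hilo strategy, one of the documented alternatives (consult the Hibernate reference manual for the full list and each strategy's configuration parameters):

```xml
<id name="id" type="int" column="TRACK_ID">
  <meta attribute="scope-set">protected</meta>
  <!-- "hilo" generates ids in memory from a "high" value kept in a
       helper table, instead of relying on a database identity column -->
  <generator class="hilo"/>
</id>
```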
After the id, we just enumerate the various track properties we care about. The title (line 18) is a string, and it cannot be null. The filePath (line 20) has the same characteristics, while the remainder are allowed to be null: playTime (line 22) is a time, added (line 26) is a date, and volume (line 30) is a short. These last three properties use a new kind of meta attribute, 'field-description,' which specifies JavaDoc text for the individual properties, with some limitations in the current code generator.
NOTE
You may be thinking there's a lot of dense information in this file. That's true, and as you'll see, it can be used to create a bunch of useful project resources.
2.1.3 What just happened?
We took the abstract description of the information about music tracks that we wanted to represent in our Java code and database, and turned it into a rigorous specification in the format that Hibernate can read. Hopefully you'll agree that it's a pretty compact and readable representation of the information. Next we'll look at what Hibernate can actually do with it.
2.2 Generating Some Class
Our mapping contains information about both the database and the Java class between which it maps. We can use it to help us create both. Let's look at the class first.
Example 2-2. The Ant build file updated for code generation
1 <project name="Harnessing Hibernate: The Developer's Notebook"
2 default="db" basedir=".">
3 <!-- Set up properties containing important project directories -->
4 <property name="source.root" value="src"/>
5 <property name="class.root" value="classes"/>
6 <property name="lib.dir" value="lib"/>
7 <property name="data.dir" value="data"/>
8
9 <!-- Set up the class path for compilation and execution -->
10 <path id="project.class.path">
11 <!-- Include our own classes, of course -->
12 <pathelement location="${class.root}" />
13 <!-- Include jars in the project library directory -->
14 <fileset dir="${lib.dir}">
15 <include name="*.jar"/>
16 </fileset>
17 </path>
18
19 <target name="db" description="Runs HSQLDB database management UI
20 against the database file--use when application is not running">
21 <java classname="org.hsqldb.util.DatabaseManager"
22 fork="yes">
23 <classpath refid="project.class.path"/>
24 <arg value="-driver"/>
25 <arg value="org.hsqldb.jdbcDriver"/>
26 <arg value="-url"/>
27 <arg value="jdbc:hsqldb:${data.dir}/music"/>
28 <arg value="-user"/>
29 <arg value="sa"/>
30 </java>
31 </target>
32
33 <!-- Teach Ant how to use Hibernate's code generation tool -->
34 <taskdef name="hbm2java"
35 classname="net.sf.hibernate.tool.hbm2java.Hbm2JavaTask"
36 classpathref="project.class.path"/>
37
38 <!-- Generate the java code for all mapping files in our source tree -->
39 <target name="codegen"
40 description="Generate Java source from the O/R mapping files">
41 <hbm2java output="${source.root}">
42 <fileset dir="${source.root}">
43 <include name="**/*.hbm.xml"/>
44 </fileset>
45 </hbm2java>
46 </target>
47
48 </project>
We added a taskdef (task definition) and a new target to the build file. The task definition at line 33 teaches Ant a new trick: it tells Ant how to use the hbm2java tool that is part of the Hibernate Extensions, with the help of a class provided for this purpose. Note that it also specifies the class path to be used when invoking this tool, using the project.class.path definition found earlier in the file.
The codegen target at line 38 uses the new hbm2java task to run Hibernate's code generator on any mapping documents found in the src tree, writing the corresponding Java source. The pattern '**/*.hbm.xml' means 'any file ending in .hbm.xml, within the specified directory, or any subdirectory, however deeply nested.'
Let's try it! From within your top-level project directory (the folder containing build.xml), type the following command:
ant codegen
You should see output like this:
Buildfile: build.xml
codegen:
[hbm2java] Processing 1 files.
[hbm2java] Building hibernate objects
[hbm2java] log4j:WARN No appenders could be found for logger (net.sf.
hibernate.util.DTDEntityResolver).
[hbm2java] log4j:WARN Please initialize the log4j system properly.
The warnings are griping about the fact that we haven't taken the trouble to set up the logging environment that Hibernate expects. We'll see how to do that in the next example. For now, if you look in the directory src/com/oreilly/hh, you'll see that a new file named Track.java has appeared, with the content shown in Example 2-3.
Example 2-3. Code generated from the Track mapping document
1 package com.oreilly.hh;
2
3 import java.io.Serializable;
4 import java.util.Date;
5 import org.apache.commons.lang.builder.EqualsBuilder;
6 import org.apache.commons.lang.builder.HashCodeBuilder;
7 import org.apache.commons.lang.builder.ToStringBuilder;
8
9 /**
10 * Represents a single playable track in the music database.
11 * @author Jim Elliott (with help from Hibernate)
12 *
13 */
14 public class Track implements Serializable {
15
16 /** identifier field */
17 private Integer id;
18
19 /** persistent field */
20 private String title;
21
22 /** persistent field */
23 private String filePath;
24
25 /** nullable persistent field */
26 private Date playTime;
27
28 /** nullable persistent field */
29 private Date added;
30
31 /** nullable persistent field */
32 private short volume;
33
34 /** full constructor */
35 public Track(String title, String filePath, Date playTime,
Date added, short volume) {
36 this.title = title;
37 this.filePath = filePath;
38 this.playTime = playTime;
39 this.added = added;
40 this.volume = volume;
41 }
42
43 /** default constructor */
44 public Track() {
45 }
46
47 /** minimal constructor */
48 public Track(String title, String filePath) {
49 this.title = title;
50 this.filePath = filePath;
51 }
52
53 public Integer getId() {
54 return this.id;
55 }
56
57 protected void setId(Integer id) {
58 this.id = id;
59 }
60
61 public String getTitle() {
62 return this.title;
63 }
64
65 public void setTitle(String title) {
66 this.title = title;
67 }
68
69 public String getFilePath() {
70 return this.filePath;
71 }
72
73 public void setFilePath(String filePath) {
74 this.filePath = filePath;
75 }
76
77 /**
78 * Playing time
79 */
80 public Date getPlayTime() {
81 return this.playTime;
82 }
83
84 public void setPlayTime(Date playTime) {
85 this.playTime = playTime;
86 }
87
88 /**
89 * When the track was created
90 */
91 public Date getAdded() {
92 return this.added;
93 }
94
95 public void setAdded(Date added) {
96 this.added = added;
97 }
98
99 /**
100 * How loud to play the track
101 */
102 public short getVolume() {
103 return this.volume;
104 }
105
106 public void setVolume(short volume) {
107 this.volume = volume;
108 }
109
110 public String toString() {
111 return new ToStringBuilder(this)
112 .append("id", getId())
113 .toString();
114 }
115
116 public boolean equals(Object other) {
117 if ( !(other instanceof Track) ) return false;
118 Track castOther = (Track) other;
119 return new EqualsBuilder()
120 .append(this.getId(), castOther.getId())
121 .isEquals();
122 }
123
124 public int hashCode() {
125 return new HashCodeBuilder()
126 .append(getId())
127 .toHashCode();
128 }
129
130 }
2.2.2 What just happened?
Ant found each file in our source tree ending in .hbm.xml (just one, so far) and fed it to the Hibernate code generator, which analyzed it, and wrote a Java class meeting the specifications we provided for the Track mapping.
NOTE
That can save a lot of time and fairly repetitive activity. I could get used to it.
You may find it worthwhile to compare the generated Java source with the mapping specification from which it arose (Example 2-1). The source starts out with the proper package declaration, which is easy for hbm2java to figure out from the fully qualified class name required in the mapping file. There are a couple of imports to make the source more readable. The three potentially unfamiliar entries (lines 5-7) are utilities from the Jakarta Commons project that help in the creation of correctly implemented and useful toString(), equals(), and hashCode() methods.
The class-level JavaDoc at line 10 should look familiar, since it comes right from the 'class-description' meta tag in our mapping document. The field declarations are derived from the id (line 17) and property (lines 20-32) tags defined in the mapping. The Java types used are derived from the property types in the mapping document. We'll delve into the full set of value types supported by Hibernate later on. For now, the relationship between the types in the mapping document and the Java types used in the generated code should be fairly clear.
One curious detail is that an Integer wrapper has been used for id, while volume is declared as a simple, unwrapped short. Why the difference? It relates to the fact that the ID/key property has many important roles to play in the O/R mapping process (which is why it gets a special XML tag in the mapping document, rather than being just another property). Although we left it out in our specification, one of the choices you need to make when setting up an ID is to pick a special value to indicate that a particular instance has not yet been saved into the database. Leaving out this unsaved-value attribute, as we did, tells Hibernate to use its default interpretation, which is that unsaved values are indicated by an ID of null. Since native int values can't be null, they must be wrapped in a java.lang.Integer, and Hibernate took care of this for us.
When it comes to the volume property, Hibernate has no special need or use for it, so it trusts us to know what we're doing. If we want to be able to store null values for volume, perhaps to indicate 'no change,' we need to explicitly use java.lang.Short rather than short in our mapping document. (Had we not been sneakily pointing out this difference, our example would be better off explicitly using java.lang.Integer in our ID mapping too, just for clarity.)
NOTE
I know, I'm a perfectionist. I only bother to pick nits because I think Hibernate is so useful!
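To make both points concrete, here is a hypothetical variant of the two mappings. The values and type names are illustrative only; we have not used these in the example project:

```xml
<!-- A primitive int id becomes workable if you pick a sentinel
     value to stand for "not yet saved" (0 here is illustrative): -->
<id name="id" type="int" column="TRACK_ID" unsaved-value="0">
  <generator class="native"/>
</id>

<!-- Asking for java.lang.Short explicitly makes volume nullable: -->
<property name="volume" type="java.lang.Short">
  <meta attribute="field-description">How loud to play the track</meta>
</property>
```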
Another thing you might notice about these field declarations is that their JavaDoc is quite generic—you may be wondering what happened to the 'field-description' meta tags we put in the mapping document for playTime, added and volume. It turns out they appear only later, in the JavaDoc for the getter methods. They are not used in the setters, the actual field declarations, nor as @param entries for the constructor. As an avid user of a code-completing Java editor, I count on pop-up JavaDoc as I fill in arguments to method calls, so I'm a little disappointed by this limitation. Of course, since this is an open source project, any of us can get involved and propose or undertake this simple fix. Indeed, you may find this already remedied by the time you read this book. Once robust field and parameter documentation is in place, I'd definitely advocate always providing a brief but accurate field-description entry for your properties.
After the field declarations come a trio of constructors. The first (line 35) establishes values for all properties, the second (line 44) allows instantiation without any arguments (this is required if you want the class to be usable as a bean, such as on a Java Server Page, a very common use for data classes like this), and the last (line 48) fills in just the values we've indicated must not be null. Notice that none of the constructors set the value of id; this is the responsibility of Hibernate when we get the object out of the database, or insert it for the first time.
Consistent with that, the setId() method on line 57 is protected, as requested in our id mapping. The rest of the getters and setters are not surprising; this is all pretty much boilerplate code (which we've all written too many times), which is why it's so nice to be able to have the Hibernate extensions generate it for us.
If you want to use Hibernate's generated code as a starting point and then add some business logic or other features to the generated class, be aware that all your changes will be silently discarded the next time you run the code generator. In such a project you will want to be sure the hand-tweaked classes are not regenerated by any Ant build target.
Even though we're having Hibernate generate our data classes in this example, it's important to point out that the getters and setters it creates are more than a nice touch. You need to put these in your persistent classes for any properties you want to persist, since Hibernate's fundamental persistence architecture is based on reflective access to JavaBeans™-style properties. They don't need to be public if you don't want them to be; Hibernate has ways of getting at even properties declared protected or private, but they do need accessor methods. Think of it as enforcing good object design; the Hibernate team wants to keep the implementation details of actual instance variables cleanly separated from the persistence mechanism.
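To see why the accessor methods matter, here is a tiny self-contained sketch of the kind of reflective property access an O/R mapper relies on. The TinyTrack class and all names here are hypothetical stand-ins, not Hibernate's API; they just mimic the JavaBeans shape of our generated Track class:

```java
import java.lang.reflect.Method;

public class ReflectiveAccessDemo {

    // A stand-in for a generated persistent class: private field,
    // JavaBeans-style accessors derived from the property name.
    public static class TinyTrack {
        private String title;
        public String getTitle() { return title; }
        public void setTitle(String title) { this.title = title; }
    }

    public static void main(String[] args) throws Exception {
        TinyTrack track = new TinyTrack();

        // A mapper that knows only the property name "title" can derive
        // the accessor names ("getTitle"/"setTitle") from JavaBeans
        // conventions and invoke them without compile-time knowledge:
        Method setter = TinyTrack.class.getMethod("setTitle", String.class);
        setter.invoke(track, "Motherless Child");

        Method getter = TinyTrack.class.getMethod("getTitle");
        System.out.println(getter.invoke(track));  // prints Motherless Child
    }
}
```

For non-public accessors (like our protected setId()), the same idea applies, but the mapper must use getDeclaredMethod() and call setAccessible(true) before invoking, which is why the methods need to exist even when they aren't public.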
2.3 Cooking Up a Schema
That was pretty easy, wasn't it? You'll be happy to learn that creating database tables is a very similar process. As with code generation, you've already done most of the work in coming up with the mapping document. All that's left is to set up and run the schema generation tool.
Example 2-4. Setting up hibernate.properties
hibernate.dialect=net.sf.hibernate.dialect.HSQLDialect
hibernate.connection.driver_class=org.hsqldb.jdbcDriver
hibernate.connection.url=jdbc:hsqldb:data/music
hibernate.connection.username=sa
hibernate.connection.password=
In addition to establishing the SQL dialect we are using, this tells Hibernate how to establish a connection to the database using the JDBC driver that ships as part of the HSQLDB database JAR archive, and that the data should live in the data directory we've created—in the database named music. The username and empty password (indeed, all these values) should be familiar from the experiment we ran at the end of Chapter 1.
Notice that we're using a relative path to specify the database filename. This works fine in our examples—we're using ant to control the working directory. If you copy this for use in a web application or other environment, though, you'll likely need to be more explicit about the location of the file.
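In such an environment, the URL can name an absolute path instead. A hypothetical variant (the directory shown is purely illustrative):

```properties
# Absolute-path variant for a deployment where the working
# directory can't be relied on; the directory is illustrative:
hibernate.connection.url=jdbc:hsqldb:/var/data/music
```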
You can put the properties file in other places, and give it other names, or use entirely different ways of getting the properties into Hibernate, but this is the default place it will look, so it's the path of least resistance (or, I guess, least runtime configuration).
We also need to add some new pieces to our build file, shown in Example 2-5. This is a somewhat substantial addition, because we need to compile our Java source in order to use the schema generation tool, which relies on reflection to get its details right. Add these targets right before the closing </project> tag at the end of build.xml.
Example 2-5. Ant build file additions for compilation and schema generation
1 <!-- Create our runtime subdirectories and copy resources into them -->
2 <target name="prepare" description="Sets up build structures">
3 <mkdir dir="${class.root}"/>
4
5 <!-- Copy our property files and O/R mappings for use at runtime -->
6 <copy todir="${class.root}" >
7 <fileset dir="${source.root}" >
8 <include name="**/*.properties"/>
9 <include name="**/*.hbm.xml"/>
10 </fileset>
11 </copy>
12 </target>
13
14 <!-- Compile the java source of the project -->
15 <target name="compile" depends="prepare"
16 description="Compiles all Java classes">
17 <javac srcdir="${source.root}"
18 destdir="${class.root}"
19 debug="on"
20 optimize="off"
21 deprecation="on">
22 <classpath refid="project.class.path"/>
23 </javac>
24 </target>
25
26 <!-- Generate the schemas for all mapping files in our class tree -->
27 <target name="schema" depends="compile"
28 description="Generate DB schema from the O/R mapping files">
29
30 <!-- Teach Ant how to use Hibernate's schema generation tool -->
31 <taskdef name="schemaexport"
32 classname="net.sf.hibernate.tool.hbm2ddl.SchemaExportTask"
33 classpathref="project.class.path"/>
34
35 <schemaexport properties="${class.root}/hibernate.properties"
36 quiet="no" text="no" drop="no" delimiter=";">
37 <fileset dir="${class.root}">
38 <include name="**/*.hbm.xml"/>
39 </fileset>
40 </schemaexport>
41 </target>
First we add a prepare target that is intended to be used by other targets more than from the command line. Its purpose is to create, if necessary, the classes directory into which we're going to compile, and then copy any properties and mapping files found in the src directory hierarchy to corresponding directories in the classes hierarchy. This hierarchical copy operation (using the special '**/*' pattern) is a nice feature of Ant, enabling us to define and edit resources alongside the source files that use them, while making those resources available at runtime via the class loader.
The aptly named compile target at line 14 uses the built-in javac task to compile all the Java source files found in the src tree to the classes tree. Happily, this task also supports the project class path we've set up, so the compiler can find all the libraries we're using. The depends="prepare" attribute in the target definition tells Ant that before running the compile target, prepare must be run. Ant manages dependencies so that when you're building multiple targets with related dependencies, they are executed in the right order, and each dependency gets executed only once, even if it is mentioned by multiple targets.
If you're accustomed to using shell scripts to compile a lot of Java source, you'll be surprised by how quickly the compilation happens. Ant invokes the Java compiler within the same virtual machine that it is using, so there is no process startup delay for each compilation.
Finally, after all this groundwork, we can write the target we really wanted to! The schema target (line 26) depends on compile, so all our Java classes will be compiled and available for inspection when the schema generator runs. It uses taskdef internally at line 31 to define the schemaexport task that runs the Hibernate schema export tool, in the same way we provided access to the code generation tool at the top of the file. It then invokes this tool and tells it to generate the database schema associated with any mapping documents found in the classes tree.
There are a number of parameters you can give the schema export tool to configure the way it works. In this example (at line 35) we're telling it to display the SQL it runs so we can watch what it's doing (quiet="no"), and to actually connect to the database and build the schema, rather than merely writing the DDL to a text file for later import (text="no") or only dropping the existing tables (drop="no"). For more details about these and other configuration options, consult the Hibernate reference manual.
You may be wondering why the taskdef for the schema update tool is inside our schema target, rather than at the top of the build file, next to the one for hbm2java. Well, I wanted it up there too, but I ran into a snag that's worth explaining. I got strange error messages the first time I tried to build the schema target, complaining there was no hibernate.properties on the class path and our compiled Track class couldn't be found. When I ran it again, it worked. Some detective work using ant -verbose revealed that if the classes directory didn't exist when the taskdef was encountered, Ant helpfully removed it from the class path. Since a taskdef can't have its own dependencies, the solution is to move it into the schema target, giving it the benefit of that target's dependencies, ensuring the classes directory exists by the time the taskdef is processed.
With these additions, we're ready to generate the schema for our TRACK table.
You might think the drop="no" setting in our schema task means you can use it to update the schema—it won't drop the tables, right? Alas, this is a misleading parameter name: it means it won't just drop the tables, rather it will go ahead and generate the schema after dropping them. Much as you want to avoid the codegen task after making any changes to the generated Java source, you mustn't export the schema if you've put any data into the database. Luckily, there is another tool you can use for incremental schema updates that works much the same way, as long as your JDBC driver is powerful enough. This SchemaUpdate tool can be used with an Ant taskdef too.
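Wiring up that incremental tool looks much like what we did for schema export. A hedged sketch follows; the class and attribute names are based on the Hibernate 2 Extensions, so check your distribution's documentation before relying on them:

```xml
<!-- Sketch: incremental schema updates that preserve existing data -->
<target name="schemaupdate" depends="compile"
        description="Update DB schema from the O/R mapping files">
  <taskdef name="schemaupdate"
           classname="net.sf.hibernate.tool.hbm2ddl.SchemaUpdateTask"
           classpathref="project.class.path"/>
  <schemaupdate properties="${class.root}/hibernate.properties"
                quiet="no">
    <fileset dir="${class.root}">
      <include name="**/*.hbm.xml"/>
    </fileset>
  </schemaupdate>
</target>
```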
Because we've asked the schema export task not to be 'quiet,' we want it to generate some log entries for us. In order for that to work, we need to configure log4j, the logging environment used by Hibernate. The easiest way to do this is to make a log4j.properties file available at the root of the class path. We can take advantage of our existing prepare target to copy this from the src to the classes directory at the same time it copies Hibernate's properties. Create a file named log4j.properties in the src directory with the content shown in Example 2-6. An easy way to do this is to copy the file out of the src directory in the Hibernate distribution you downloaded, since it's provided for use by their own examples. If you're typing it in yourself, you can skip the blocks that are commented out; they are provided to suggest useful logging alternatives.
Example 2-6. The logging configuration file, log4j.properties
### direct log messages to stdout ###
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
### direct messages to file hibernate.log ###
#log4j.appender.file=org.apache.log4j.FileAppender
#log4j.appender.file.File=hibernate.log
#log4j.appender.file.layout=org.apache.log4j.PatternLayout
#log4j.appender.file.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
### set log levels - for more verbose logging change 'info' to 'debug' ###
log4j.rootLogger=warn, stdout
log4j.logger.net.sf.hibernate=info
### log just the SQL
#log4j.logger.net.sf.hibernate.SQL=debug
### log JDBC bind parameters ###
log4j.logger.net.sf.hibernate.type=info
### log schema export/update ###
log4j.logger.net.sf.hibernate.tool.hbm2ddl=debug
### log cache activity ###
#log4j.logger.net.sf.hibernate.cache=debug
### enable the following line if you want to track down connection ###
### leakages when using DriverManagerConnectionProvider ###
#log4j.logger.net.sf.hibernate.connection.DriverManagerConnectionProvider=trace
With the log configuration in place, you might want to edit the codegen target in build.xml so that it, too, depends on our new prepare target. This will ensure logging is configured whenever we use it, preventing the warnings we saw when first running it. As noted in the tip about class paths and task definitions in the previous section, though, to make it work the very first time you'll have to move the taskdef for hbm2java inside the codegen target, in the same way we put schemaexport inside the schema target.
Time to make a schema! From the project directory, execute the command ant schema. You'll see output similar to Example 2-7 as the classes directory is created and populated with resources, the Java source is compiled,[2.1] and the schema generator is run.
[2.1] We're assuming you've already generated the code shown in Example 2-3, or there won't be any Java source to compile, and the schema generation will fail. The schema target doesn't invoke codegen to automatically generate code, in case you've manually extended any of your generated classes.
Example 2-7. Output from building the schema using HSQLDB's embedded database server
% ant schema
Buildfile: build.xml
prepare:
[mkdir] Created dir: /Users/jim/Documents/Work/OReilly/Hibernate/Examples/
ch02/classes
[copy] Copying 3 files to /Users/jim/Documents/Work/OReilly/Hibernate/
Examples/ch02/classes
compile:
[javac] Compiling 1 source file to /Users/jim/Documents/Work/OReilly/
Hibernate/Examples/ch02/classes
schema:
[schemaexport] 23:50:36,165 INFO Environment:432 - Hibernate 2.1.1
[schemaexport] 23:50:36,202 INFO Environment:466 - loaded properties from
resource hibernate.properties: {hibernate.connection.username=sa, hibernate.
connection.password=, hibernate.cglib.use_reflection_optimizer=true, hibernate.
dialect=net.sf.hibernate.dialect.HSQLDialect, hibernate.connection.url=jdbc:
hsqldb:data/music, hibernate.connection.driver_class=org.hsqldb.jdbcDriver}
[schemaexport] 23:50:36,310 INFO Environment:481 - using CGLIB reflection
optimizer
[schemaexport] 23:50:36,384 INFO Configuration:166 - Mapping file: /Users/jim/
Documents/Work/OReilly/Hibernate/Examples/ch02/classes/com/oreilly/hh/Track.hbm.
xml
[schemaexport] 23:50:37,409 INFO Binder:225 - Mapping class: com.oreilly.hh.
Track -> TRACK
[schemaexport] 23:50:37,928 INFO Dialect:82 - Using dialect: net.sf.hibernate.
dialect.HSQLDialect
[schemaexport] 23:50:37,942 INFO Configuration:584 - processing one-to-many
association mappings
[schemaexport] 23:50:37,947 INFO Configuration:593 - processing one-to-one
association property references
[schemaexport] 23:50:37,956 INFO Configuration:618 - processing foreign key
constraints
[schemaexport] 23:50:38,113 INFO Configuration:584 - processing one-to-many
association mappings
[schemaexport] 23:50:38,124 INFO Configuration:593 - processing one-to-one
association property references
[schemaexport] 23:50:38,132 INFO Configuration:618 - processing foreign key
constraints
[schemaexport] 23:50:38,149 INFO SchemaExport:98 - Running hbm2ddl schema export
[schemaexport] 23:50:38,154 INFO SchemaExport:117 - exporting generated schema
to database
[schemaexport] 23:50:38,232 INFO DriverManagerConnectionProvider:41 - Using
Hibernate built-in connection pool (not for production use!)
[schemaexport] 23:50:38,238 INFO DriverManagerConnectionProvider:42 - Hibernate
connection pool size: 20
[schemaexport] 23:50:38,278 INFO DriverManagerConnectionProvider:71 - using
driver: org.hsqldb.jdbcDriver at URL: jdbc:hsqldb:data/music
[schemaexport] 23:50:38,283 INFO DriverManagerConnectionProvider:72 -connection
properties: {user=sa, password=}
[schemaexport] drop table TRACK if exists
[schemaexport] 23:50:39,083 DEBUG SchemaExport:132 - drop table TRACK if exists
[schemaexport] create table TRACK (
[schemaexport] TRACK_ID INTEGER NOT NULL IDENTITY,
[schemaexport] title VARCHAR(255) not null,
[schemaexport] filePath VARCHAR(255) not null,
[schemaexport] playTime TIME,
[schemaexport] added DATE,
[schemaexport] volume SMALLINT
[schemaexport] )
[schemaexport] 23:50:39,113 DEBUG SchemaExport:149 - create table TRACK (
[schemaexport] TRACK_ID INTEGER NOT NULL IDENTITY,
[schemaexport] title VARCHAR(255) not null,
[schemaexport] filePath VARCHAR(255) not null,
[schemaexport] playTime TIME,
[schemaexport] added DATE,
[schemaexport] volume SMALLINT
[schemaexport] )
[schemaexport] 23:50:39,142 INFO SchemaExport:160 - schema export complete
[schemaexport] 23:50:39,178 INFO DriverManagerConnectionProvider:137 - cleaning
up connection pool: jdbc:hsqldb:data/music
BUILD SUCCESSFUL
Total time: 10 seconds
Toward the end of the schemaexport section you can see the actual SQL used by Hibernate to create the TRACK table. If you look at the start of the music.script file in the data directory, you'll see it's been incorporated into the database. For a slightly more friendly (and perhaps convincing) way to see it, execute ant db to fire up the HSQLDB graphical interface, as shown in Figure 2-1.
Figure 2-1. The database interface with our new TRACK table expanded, and a query
2.3.2 What just happened?
We were able to use Hibernate to create a data table in which we can persist instances of the Java class it created for us. We didn't have to type a single line of SQL or Java! Of course, our table is still empty at this point. Let's change that! The next chapter will look at the stuff you probably most want to see: using Hibernate from within a Java program to turn objects into database entries and vice versa.
NOTE
It's about time? Yeah, I suppose. But at least you didn't have to figure out all these steps from scratch!
Before diving into that cool task, it's worth taking a moment to reflect on how much we've been able to accomplish with a couple of XML and properties files. Hopefully you're starting to see the power and convenience that make Hibernate so exciting.
2.4 Connecting Hibernate to MySQL
Example 2-8. Setting up the MySQL database notebook_db as a Hibernate playground
% mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 764 to server version: 3.23.44-Max-log
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> CREATE DATABASE notebook_db;
Query OK, 1 row affected (0.00 sec)
mysql> GRANT ALL ON notebook_db.* TO jim IDENTIFIED BY "s3cret";
Query OK, 0 rows affected (0.20 sec)
mysql> quit;
Bye
NOTE
Hopefully you'll use a less guessable password than this in your real databases!
Make a note of the database name you create, as well as the username and password that can access it. These will need to be entered into hibernate.properties, as shown in Example 2-9.
Next, you'll need a JDBC driver capable of connecting to MySQL. If you're already using MySQL for your Java projects, you'll have one. Otherwise, you can download Connector/J from www.mysql.com/downloads/api-jdbc-stable.html. However you obtain it, copy the driver library jar (which will be named something like mysql-connector-java-3.0.10-stable-bin.jar) to your project's lib directory alongside the HSQLDB, Hibernate, and other libraries that are already there. It's fine to have drivers for several different databases available to your code; they won't conflict with each other, since the configuration file specifies which driver class to use.
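If you'd like to confirm that a particular driver jar really is visible to your code before wiring it into Hibernate, a one-off `Class.forName` check will tell you. Here's a minimal sketch (the `DriverCheck` class name is mine, not part of the book's examples, and the default driver class is HSQLDB's; pass any other driver class name as an argument):

```java
public class DriverCheck {
    public static void main(String[] args) {
        // Default to the HSQLDB driver; pass e.g. com.mysql.jdbc.Driver
        // on the command line to check the MySQL driver instead.
        String driver = args.length > 0 ? args[0] : "org.hsqldb.jdbcDriver";
        try {
            // Class.forName loads and initializes the class if the jar
            // is on the classpath; JDBC drivers register themselves
            // with DriverManager when initialized.
            Class.forName(driver);
            System.out.println("Driver " + driver + " is on the classpath");
        } catch (ClassNotFoundException e) {
            System.out.println("Driver " + driver + " NOT found on the classpath");
        }
    }
}
```

Run it with the lib directory's jars on the classpath; whichever branch prints, you'll know immediately whether the jar made it there.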
Speaking of which, it's time to edit hibernate.properties to use the new driver and database we've just made available. Example 2-9 shows how it is set up to connect to my MySQL instance using the database created in Example 2-8. You'll need to tweak these values to correspond to your own server, database, and the login credentials you chose. (If you're using MM.MySQL, the older incarnation of the MySQL JDBC driver, the driver_class will need to be com.mysql.jdbc.Driver.)
Example 2-9. Changes to hibernate.properties to connect to the new MySQL database
hibernate.dialect=net.sf.hibernate.dialect.MySQLDialect
hibernate.connection.driver_class=com.mysql.jdbc.Driver
hibernate.connection.url=jdbc:mysql://slant.reseune.pvt/notebook_db
hibernate.connection.username=jim
hibernate.connection.password=s3cret
The URL on the third line will need to reflect your server; you won't be able to resolve my private internal domain name, let alone route to it.
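Before rerunning the schema export, you can sanity-check the values Hibernate will see by loading the properties file yourself with `java.util.Properties`, which uses the same key=value parsing. A minimal sketch (in a real project you'd read the hibernate.properties file from the classpath; here the same content from Example 2-9 is parsed inline so the snippet is self-contained):

```java
import java.io.StringReader;
import java.util.Properties;

public class CheckHibernateProps {
    public static void main(String[] args) throws Exception {
        // The content of hibernate.properties from Example 2-9,
        // inlined for illustration.
        String props =
            "hibernate.dialect=net.sf.hibernate.dialect.MySQLDialect\n" +
            "hibernate.connection.driver_class=com.mysql.jdbc.Driver\n" +
            "hibernate.connection.url=jdbc:mysql://slant.reseune.pvt/notebook_db\n" +
            "hibernate.connection.username=jim\n" +
            "hibernate.connection.password=s3cret\n";

        Properties p = new Properties();
        p.load(new StringReader(props));

        // Echo the values Hibernate's connection provider will use.
        System.out.println("url = " + p.getProperty("hibernate.connection.url"));
        System.out.println("driver = " + p.getProperty("hibernate.connection.driver_class"));
    }
}
```

If the printed URL and driver class match what you expect, a typo in the properties file isn't the problem when a connection fails.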
Once this is all set, you can rerun the schema creation example that was set up in the previous section. This time it will build the schema on your MySQL server rather than in the embedded HSQLDB world. You'll see output like that in Example 2-10.
Example 2-10. Schema creation when connecting to MySQL
% ant schema
Buildfile: build.xml
prepare:
compile:
schema:
[schemaexport] 23:02:13,614 INFO Environment:462 - Hibernate 2.1.2
[schemaexport] 23:02:13,659 INFO Environment:496 - loaded properties from
resource hibernate.properties: {hibernate.connection.username=jim, hibernate.
connection.password=s3cret, hibernate.cglib.use_reflection_optimizer=true,
hibernate.dialect=net.sf.hibernate.dialect.MySQLDialect, hibernate.connection.
url=jdbc:mysql://slant.reseune.pvt/notebook_db, hibernate.connection.driver_
class=com.mysql.jdbc.Driver}
[schemaexport] 23:02:13,711 INFO Environment:519 - using CGLIB reflection
optimizer
[schemaexport] 23:02:13,819 INFO Configuration:166 - Mapping file: /Users/jim/
Documents/Work/OReilly/Hibernate/Examples/ch02/classes/com/oreilly/hh/Track.hbm.xml
[schemaexport] 23:02:15,568 INFO Binder:229 - Mapping class: com.oreilly.hh.
Track -> TRACK
[schemaexport] 23:02:16,164 INFO Dialect:82 - Using dialect: net.sf.hibernate.
dialect.MySQLDialect
[schemaexport] 23:02:16,175 INFO Configuration:595 - processing one-to-many
association mappings
[schemaexport] 23:02:16,188 INFO Configuration:604 - processing one-to-one
association property references
[schemaexport] 23:02:16,209 INFO Configuration:629 - processing foreign key
constraints
[schemaexport] 23:02:16,429 INFO Configuration:595 - processing one-to-many
association mappings
[schemaexport] 23:02:16,436 INFO Configuration:604 - processing one-to-one
association property references
[schemaexport] 23:02:16,440 INFO Configuration:629 - processing foreign key
constraints
[schemaexport] 23:02:16,470 INFO SchemaExport:98 - Running hbm2ddl schema export
[schemaexport] 23:02:16,488 INFO SchemaExport:117 - exporting generated schema
to database
[schemaexport] 23:02:16,543 INFO DriverManagerConnectionProvider:41 - Using
Hibernate built-in connection pool (not for production use!)
[schemaexport] 23:02:16,549 INFO DriverManagerConnectionProvider:42 - Hibernate
connection pool size: 20
[schemaexport] 23:02:16,583 INFO DriverManagerConnectionProvider:71 - using
driver: com.mysql.jdbc.Driver at URL: jdbc:mysql://slant.reseune.pvt/notebook_db
[schemaexport] 23:02:16,597 INFO DriverManagerConnectionProvider:72 - connection
properties: {user=jim, password=s3cret}
[schemaexport] drop table if exists TRACK
[schemaexport] 23:02:18,129 DEBUG SchemaExport:132 - drop table if exists TRACK
[schemaexport] create table TRACK (
[schemaexport] TRACK_ID INTEGER NOT NULL AUTO_INCREMENT,
[schemaexport] title VARCHAR(255) not null,
[schemaexport] filePath VARCHAR(255) not null,
[schemaexport] playTime TIME,
[schemaexport] added DATE,
[schemaexport] volume SMALLINT,
[schemaexport] primary key (Track_id)
[schemaexport] )
[schemaexport] 23:02:18,181 DEBUG SchemaExport:149 - create table TRACK (
[schemaexport] TRACK_ID INTEGER NOT NULL AUTO_INCREMENT,
[schemaexport] title VARCHAR(255) not null,
[schemaexport] filePath VARCHAR(255) not null,
[schemaexport] playTime TIME,
[schemaexport] added DATE,
[schemaexport] volume SMALLINT,
[schemaexport] primary key (Track_id)
[schemaexport] )
[schemaexport] 23:02:18,311 INFO SchemaExport:160 - schema export complete
[schemaexport] 23:02:18,374 INFO DriverManagerConnectionProvider:137 - cleaning
up connection pool: jdbc:mysql://slant.reseune.pvt/notebook_db
BUILD SUCCESSFUL
Total time: 9 seconds
Example 2-11. Checking the newly created MySQL schema
% mysql -u jim -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 772 to server version: 3.23.44-Max-log
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> USE notebook_db
Database changed
mysql> SHOW TABLES;
+-----------------------+
| Tables_in_notebook_db |
+-----------------------+
| TRACK |
+-----------------------+
1 row in set (0.03 sec)
mysql> DESCRIBE TRACK;
+----------+--------------+------+-----+---------+----------------+
| Field    | Type         | Null | Key | Default | Extra          |
+----------+--------------+------+-----+---------+----------------+
| TRACK_ID | int(11)      |      | PRI | NULL    | auto_increment |
| title    | varchar(255) |      |     |         |                |
| filePath | varchar(255) |      |     |         |                |
| playTime | time         | YES  |     | NULL    |                |
| added    | date         | YES  |     | NULL    |                |
| volume   | smallint(6)  | YES  |     | NULL    |                |
+----------+--------------+------+-----+---------+----------------+
6 rows in set (0.02 sec)
mysql> SELECT * FROM TRACK;
Empty set (0.00 sec)
mysql> quit;
Bye
It's not surprising to find the table empty. We'll investigate how to populate it with data in the first part of Chapter 3.
If you've followed this example and set up a MySQL database, and you'd prefer to continue working with it throughout the rest of the book, feel free to do so, but bear in mind you'll need to know how to look at the results of the examples yourself. The text will assume you're still working with HSQLDB, and it will show you how to check your progress in that context. You will also see slight differences in the schema, as databases all have slightly different column types and features. Apart from these minor details, it really makes no difference what database you're using—that's part of the appeal of an O/R mapping layer like Hibernate.