Wednesday, 26 September 2012

Introduction to Mapping

For information we need to keep around between runs, or share between different programs and systems, relational databases have proven to be hard to beat. They're scalable, reliable, efficient, and extremely flexible. So what we need is a means of taking information from a SQL database and turning it into Java objects, and vice versa.

There are many different ways of doing this, ranging from completely manual database design and coding, to highly automated tools. The general problem is known as Object/Relational Mapping, and Hibernate is a lightweight O/R mapping service for Java.

The 'lightweight' designation means it is designed to be fairly simple to learn and use, and to place reasonable demands on system resources, compared to some of the other available tools. Despite this, it manages to be broadly useful and deep. The designers have done a good job of figuring out the kinds of things that real projects need to accomplish, and supporting them well.

You can use Hibernate in many different ways, depending on what you're starting with. If you've got a database that you need to interact with, there are tools that can analyze the existing schema as a starting point for your mapping, and help you write the Java classes to represent the data. If you've got classes that you want to store in a new database, you can start with the classes, get help building a mapping document, and generate an initial database schema. We'll look at some of these approaches later.

For now, we're going to see how you can start a brand new project, with no existing classes or data, and have Hibernate help you build both. When starting from scratch like this, the most convenient place to begin is in the middle, with an abstract definition of the mapping we're going to make between program objects and the database tables that will store them.


2.1 Writing a Mapping Document
Hibernate uses an XML document to track the mapping between Java classes and relational database tables. This mapping document is designed to be readable and hand-editable. You can also start by using graphical CASE tools (like Together, Rose, or Poseidon) to build UML diagrams representing your data model, and feed these into AndroMDA (www.andromda.org), turning them into Hibernate mappings.

NOTE


Don't forget that Hibernate and its extensions let you work in other ways, starting with classes or data if you've got them.


We'll write one by hand, showing it's quite practical.

We're going to start by writing a mapping document for tracks: pieces of music that can be listened to individually, or as part of an album or playlist. To begin with, we'll record each track's title, the path to the file containing the actual music, its playing time, the date on which it was added to the database, and the volume at which it should be played (in case the default volume isn't appropriate, because it was recorded at a very different level than other music in the database).

2.1.1 Why do I care?
You might not have any need for a new system to keep track of your music, but the concepts and process involved in setting up this mapping will translate to the projects you actually want to tackle.

2.1.2 How do I do that?
Fire up your favorite text editor, and create the file Track.hbm.xml in the src/com/oreilly/hh directory you set up in the previous chapter. (If you skipped that chapter, you'll need to go back and follow it, because this example relies on the project structure and tools we set up there.) Type in the mapping document as shown in Example 2-1. Or, if you'd rather avoid all that typing, download the code examples from this book's web site, and find the mapping file in the directory for Chapter 2.

Example 2-1. The mapping document for tracks, Track.hbm.xml

1  <?xml version="1.0"?>
2  <!DOCTYPE hibernate-mapping
3      PUBLIC "-//Hibernate/Hibernate Mapping DTD 2.0//EN"
4      "http://hibernate.sourceforge.net/hibernate-mapping-2.0.dtd">
5  <hibernate-mapping>
6
7    <class name="com.oreilly.hh.Track" table="TRACK">
8      <meta attribute="class-description">
9        Represents a single playable track in the music database.
10       @author Jim Elliott (with help from Hibernate)
11     </meta>
12
13     <id name="id" type="int" column="TRACK_ID">
14       <meta attribute="scope-set">protected</meta>
15       <generator class="native"/>
16     </id>
17
18     <property name="title" type="string" not-null="true"/>
19
20     <property name="filePath" type="string" not-null="true"/>
21
22     <property name="playTime" type="time">
23       <meta attribute="field-description">Playing time</meta>
24     </property>
25
26     <property name="added" type="date">
27       <meta attribute="field-description">When the track was created</meta>
28     </property>
29
30     <property name="volume" type="short">
31       <meta attribute="field-description">How loud to play the track</meta>
32     </property>
33
34   </class>
35 </hibernate-mapping>




The first four lines are a required preamble to make this a valid XML document and announce that it conforms to the document type definition used by Hibernate for mappings. The actual mappings are inside the hibernate-mapping tag. Starting at line 7 we're defining a mapping for a single class, com.oreilly.hh.Track, and the name and package of this class are related to the name and location of the file we've created. This relationship isn't necessary; you can define mappings for any number of classes in a single mapping document, and name it and locate it anywhere you want, as long as you tell Hibernate how to find it. The advantage of following the convention of naming the mapping file after the class it maps, and placing it in the same place on the class path as that class, is that this allows Hibernate to automatically locate the mapping when you want to work with the class. This simplifies the configuration and use of Hibernate.

In the opening of the class tag on line 7, we have also specified that this class is stored in a database table named TRACK. The next tag, a meta tag (lines 8-11), doesn't directly affect the mapping. Instead, it provides additional information that can be used by different tools. In this case, by specifying an attribute value of 'class-description,' we are telling the Java code generation tool the JavaDoc text we want associated with the Track class. This is entirely optional, and you'll see the result of including it in the upcoming section, 'Generating Some Class.'

Although databases vary in terms of whether they keep track of the capitalization of table and column names, this book will use the convention of referring to these database entities in all-caps, to help clarify when something being discussed is a database column or table, as opposed to a persistent Java class or property.






The remainder of the mapping sets up the pieces of information we want to keep track of, as properties in the class and their associated columns in the database table. Even though we didn't mention it in the introduction to this example, each track is going to need an id. Following database best practices, we'll use a meaningless surrogate key (a value with no semantic meaning, serving only to identify a specific database row). In Hibernate, the key/id mapping is set up using an id tag (starting at line 13). We're choosing to use an int to store our id in the database column TRACK_ID, which will correspond to the property id in our Track object. This mapping contains another meta tag to communicate with the Java code generator, telling it that the set method for the id property should be protected—there's no need for application code to go changing track IDs.

The generator tag on line 15 configures how Hibernate creates id values for new instances. (Note that it relates to normal O/R mapping operation, not to the Java code generator, which is often not even used; generator is more fundamental than the optional meta tags.) There are a number of different ID generation strategies to choose from, and you can even write your own. In this case, we're telling Hibernate to use whatever is most natural for the underlying database (we'll see later on how it learns what database we're using). In the case of HSQLDB, an identity column is used.
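The mapping above uses native, but you can name a specific strategy instead. As a non-exhaustive sketch (a real mapping uses exactly one generator tag inside its id tag; consult the Hibernate reference manual for the full list and the parameters some of these accept), a few of the generator classes Hibernate provides look like this:

```xml
<!-- A few alternative id generation strategies by name: -->
<generator class="increment"/>  <!-- max(id)+1 computed in memory; single-VM use only -->
<generator class="identity"/>   <!-- identity columns (HSQLDB, MySQL, and others) -->
<generator class="sequence"/>   <!-- database sequences (Oracle, PostgreSQL, ...) -->
<generator class="hilo"/>       <!-- efficient hi/lo algorithm backed by a helper table -->
<generator class="assigned"/>   <!-- the application assigns ids itself before saving -->
```

Choosing native, as we did, simply delegates the decision to whichever of these is most appropriate for the configured dialect.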

After the id, we just enumerate the various track properties we care about. The title (line 18) is a string, and it cannot be null. The filePath (line 20) has the same characteristics, while the remainder are allowed to be null: playTime (line 22) is a time, added (line 26) is a date, and volume (line 30) is a short. These last three properties use a new kind of meta attribute, 'field-description,' which specifies JavaDoc text for the individual properties, with some limitations in the current code generator.

NOTE


You may be thinking there's a lot of dense information in this file. That's true, and as you'll see, it can be used to create a bunch of useful project resources.


2.1.3 What just happened?
We took the abstract description of the information about music tracks that we wanted to represent in our Java code and database, and turned it into a rigorous specification in the format that Hibernate can read. Hopefully you'll agree that it's a pretty compact and readable representation of the information. Next we'll look at what Hibernate can actually do with it.



2.2 Generating Some Class
Our mapping contains information about both the database and the Java class between which it maps. We can use it to help us create both. Let's look at the class first.

2.2.1 How do I do that?
We need to update the Ant build file from the previous chapter so it can run Hibernate's code generation tool, as shown in Example 2-2.

Example 2-2. The Ant build file updated for code generation

1  <project name="Harnessing Hibernate: The Developer's Notebook"
2           default="db" basedir=".">
3    <!-- Set up properties containing important project directories -->
4    <property name="source.root" value="src"/>
5    <property name="class.root" value="classes"/>
6    <property name="lib.dir" value="lib"/>
7    <property name="data.dir" value="data"/>
8
9    <!-- Set up the class path for compilation and execution -->
10   <path id="project.class.path">
11     <!-- Include our own classes, of course -->
12     <pathelement location="${class.root}" />
13     <!-- Include jars in the project library directory -->
14     <fileset dir="${lib.dir}">
15       <include name="*.jar"/>
16     </fileset>
17   </path>
18
19   <target name="db" description="Runs HSQLDB database management UI
20           against the database file--use when application is not running">
21     <java classname="org.hsqldb.util.DatabaseManager"
22           fork="yes">
23       <classpath refid="project.class.path"/>
24       <arg value="-driver"/>
25       <arg value="org.hsqldb.jdbcDriver"/>
26       <arg value="-url"/>
27       <arg value="jdbc:hsqldb:${data.dir}/music"/>
28       <arg value="-user"/>
29       <arg value="sa"/>
30     </java>
31   </target>
32
33   <!-- Teach Ant how to use Hibernate's code generation tool -->
34   <taskdef name="hbm2java"
35            classname="net.sf.hibernate.tool.hbm2java.Hbm2JavaTask"
36            classpathref="project.class.path"/>
37
38   <!-- Generate the java code for all mapping files in our source tree -->
39   <target name="codegen"
40           description="Generate Java source from the O/R mapping files">
41     <hbm2java output="${source.root}">
42       <fileset dir="${source.root}">
43         <include name="**/*.hbm.xml"/>
44       </fileset>
45     </hbm2java>
46   </target>
47
48 </project>




We added a taskdef (task definition) and a new target to the build file. The task definition at line 33 teaches Ant a new trick: it tells Ant how to use the hbm2java tool that is part of the Hibernate Extensions, with the help of a class provided for this purpose. Note that it also specifies the class path to be used when invoking this tool, using the project.class.path definition found earlier in the file.

The codegen target at line 38 uses the new hbm2java task to run Hibernate's code generator on any mapping documents found in the src tree, writing the corresponding Java source. The pattern '**/*.hbm.xml' means 'any file ending in .hbm.xml, within the specified directory, or any subdirectory, however deeply nested.'
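Java's own glob support in java.nio.file uses nearly the same syntax as Ant's patterns, so we can sketch how '**/*.hbm.xml' behaves with a short, self-contained example (the class name GlobDemo and the sample paths are ours, for illustration only; note that java.nio's glob, unlike Ant's, requires at least one directory level before the leading **/):

```java
import java.nio.file.FileSystems;
import java.nio.file.PathMatcher;
import java.nio.file.Paths;

// Demonstrates the recursive '**/*.hbm.xml' pattern using java.nio's
// glob matcher, which closely mirrors Ant's pattern semantics.
public class GlobDemo {
    public static boolean matchesMapping(String path) {
        PathMatcher matcher = FileSystems.getDefault()
                .getPathMatcher("glob:**/*.hbm.xml");
        return matcher.matches(Paths.get(path));
    }

    public static void main(String[] args) {
        // Matches at any nesting depth below the base directory
        System.out.println(matchesMapping("com/oreilly/hh/Track.hbm.xml")); // true
        // Other XML files are not picked up
        System.out.println(matchesMapping("com/oreilly/hh/build.xml"));     // false
    }
}
```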

Let's try it! From within your top-level project directory (the folder containing build.xml), type the following command:


ant codegen




You should see output like this:

Buildfile: build.xml

codegen:
[hbm2java] Processing 1 files.
[hbm2java] Building hibernate objects
[hbm2java] log4j:WARN No appenders could be found for logger (net.sf.hibernate.util.DTDEntityResolver).
[hbm2java] log4j:WARN Please initialize the log4j system properly.


The warnings are griping about the fact that we haven't taken the trouble to set up the logging environment that Hibernate expects. We'll see how to do that in the next example. For now, if you look in the directory src/com/oreilly/hh, you'll see that a new file named Track.java has appeared, with the content shown in Example 2-3.

Example 2-3. Code generated from the Track mapping document

1   package com.oreilly.hh;
2
3   import java.io.Serializable;
4   import java.util.Date;
5   import org.apache.commons.lang.builder.EqualsBuilder;
6   import org.apache.commons.lang.builder.HashCodeBuilder;
7   import org.apache.commons.lang.builder.ToStringBuilder;
8
9   /**
10   * Represents a single playable track in the music database.
11   * @author Jim Elliott (with help from Hibernate)
12   *
13   */
14  public class Track implements Serializable {
15
16      /** identifier field */
17      private Integer id;
18
19      /** persistent field */
20      private String title;
21
22      /** persistent field */
23      private String filePath;
24
25      /** nullable persistent field */
26      private Date playTime;
27
28      /** nullable persistent field */
29      private Date added;
30
31      /** nullable persistent field */
32      private short volume;
33
34      /** full constructor */
35      public Track(String title, String filePath, Date playTime,
                     Date added, short volume) {
36          this.title = title;
37          this.filePath = filePath;
38          this.playTime = playTime;
39          this.added = added;
40          this.volume = volume;
41      }
42
43      /** default constructor */
44      public Track() {
45      }
46
47      /** minimal constructor */
48      public Track(String title, String filePath) {
49          this.title = title;
50          this.filePath = filePath;
51      }
52
53      public Integer getId() {
54          return this.id;
55      }
56
57      protected void setId(Integer id) {
58          this.id = id;
59      }
60
61      public String getTitle() {
62          return this.title;
63      }
64
65      public void setTitle(String title) {
66          this.title = title;
67      }
68
69      public String getFilePath() {
70          return this.filePath;
71      }
72
73      public void setFilePath(String filePath) {
74          this.filePath = filePath;
75      }
76
77      /**
78       * Playing time
79       */
80      public Date getPlayTime() {
81          return this.playTime;
82      }
83
84      public void setPlayTime(Date playTime) {
85          this.playTime = playTime;
86      }
87
88      /**
89       * When the track was created
90       */
91      public Date getAdded() {
92          return this.added;
93      }
94
95      public void setAdded(Date added) {
96          this.added = added;
97      }
98
99      /**
100      * How loud to play the track
101      */
102     public short getVolume() {
103         return this.volume;
104     }
105
106     public void setVolume(short volume) {
107         this.volume = volume;
108     }
109
110     public String toString() {
111         return new ToStringBuilder(this)
112             .append("id", getId())
113             .toString();
114     }
115
116     public boolean equals(Object other) {
117         if ( !(other instanceof Track) ) return false;
118         Track castOther = (Track) other;
119         return new EqualsBuilder()
120             .append(this.getId(), castOther.getId())
121             .isEquals();
122     }
123
124     public int hashCode() {
125         return new HashCodeBuilder()
126             .append(getId())
127             .toHashCode();
128     }
129
130 }




2.2.2 What just happened?
Ant found all the files in our source tree ending in .hbm.xml (just one, so far) and fed them to the Hibernate code generator, which analyzed each one and wrote a Java class meeting the specifications we provided for the Track mapping.

NOTE


That can save a lot of time and fairly repetitive activity. I could get used to it.


You may find it worthwhile to compare the generated Java source with the mapping specification from which it arose (Example 2-1). The source starts out with the proper package declaration, which is easy for hbm2java to figure out from the fully qualified class name required in the mapping file. There are a couple of imports to make the source more readable. The three potentially unfamiliar entries (lines 5-7) are utilities from the Jakarta Commons project that help in the creation of correctly implemented and useful toString(), equals(), and hashCode() methods.
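The commons-lang builders in the generated code implement an identifier-based equality contract. As a dependency-free sketch of the same idea (the class name TrackLike is ours, and we substitute java.util.Objects for the Jakarta Commons builders, so this is illustrative rather than what hbm2java emits):

```java
import java.util.Objects;

// Dependency-free sketch of the id-based equals/hashCode/toString
// contract that the commons-lang builders implement for Track:
// two instances are "equal" exactly when their ids are equal.
public class TrackLike {
    private Integer id;

    public TrackLike(Integer id) { this.id = id; }

    public Integer getId() { return id; }

    @Override public boolean equals(Object other) {
        if (!(other instanceof TrackLike)) return false;
        return Objects.equals(getId(), ((TrackLike) other).getId());
    }

    @Override public int hashCode() {
        return Objects.hashCode(getId());
    }

    @Override public String toString() {
        return "TrackLike[id=" + getId() + "]";
    }

    public static void main(String[] args) {
        // Same id means equal, regardless of any other state
        System.out.println(new TrackLike(1).equals(new TrackLike(1))); // true
        System.out.println(new TrackLike(1).equals(new TrackLike(2))); // false
    }
}
```

Basing equality on the key alone is a deliberate choice: it matches the database's notion of row identity.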

The class-level JavaDoc at line 10 should look familiar, since it comes right from the 'class-description' meta tag in our mapping document. The field declarations are derived from the id (line 17) and property (lines 20-32) tags defined in the mapping. The Java types used are derived from the property types in the mapping document. We'll delve into the full set of value types supported by Hibernate later on. For now, the relationship between the types in the mapping document and the Java types used in the generated code should be fairly clear.

One curious detail is that an Integer wrapper has been used for id, while volume is declared as a simple, unwrapped short. Why the difference? It relates to the fact that the ID/key property has many important roles to play in the O/R mapping process (which is why it gets a special XML tag in the mapping document, rather than being just another property). Although we left it out in our specification, one of the choices you need to make when setting up an ID is to pick a special value to indicate that a particular instance has not yet been saved into the database. Leaving out this unsaved-value attribute, as we did, tells Hibernate to use its default interpretation, which is that unsaved values are indicated by an ID of null. Since native int values can't be null, they must be wrapped in a java.lang.Integer, and Hibernate took care of this for us.

When it comes to the volume property, Hibernate has no special need or use for it, so it trusts us to know what we're doing. If we want to be able to store null values for volume, perhaps to indicate 'no change,' we need to explicitly use java.lang.Short rather than short in our mapping document. (Had we not been sneakily pointing out this difference, our example would be better off explicitly using java.lang.Integer in our ID mapping too, just for clarity.)
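The wrapper-versus-primitive distinction is easy to see in plain Java. A minimal sketch (the class NullableDemo is hypothetical, not generated code):

```java
// Why the generated id uses Integer rather than int: only the wrapper
// type can hold null, Hibernate's default marker for "not yet saved".
public class NullableDemo {
    private Integer id;      // null until a key is assigned
    private short volume;    // primitive: defaults to 0, can never be null
    private Short volume2;   // wrapper: could represent "no value"

    public boolean isUnsaved() { return id == null; }

    public static void main(String[] args) {
        NullableDemo d = new NullableDemo();
        System.out.println(d.isUnsaved()); // true: no id assigned yet
        System.out.println(d.volume);      // 0: not distinguishable from "no value"
        System.out.println(d.volume2);     // null
    }
}
```

This is exactly why mapping volume as type="short" gives us a field that cannot record a missing value, while java.lang.Short would.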

NOTE


I know, I'm a perfectionist. I only bother to pick nits because I think Hibernate is so useful!


Another thing you might notice about these field declarations is that their JavaDoc is quite generic—you may be wondering what happened to the 'field-description' meta tags we put in the mapping document for playTime, added and volume. It turns out they appear only later, in the JavaDoc for the getter methods. They are not used in the setters, the actual field declarations, nor as @param entries for the constructor. As an avid user of a code-completing Java editor, I count on pop-up JavaDoc as I fill in arguments to method calls, so I'm a little disappointed by this limitation. Of course, since this is an open source project, any of us can get involved and propose or undertake this simple fix. Indeed, you may find this already remedied by the time you read this book. Once robust field and parameter documentation is in place, I'd definitely advocate always providing a brief but accurate field-description entry for your properties.

After the field declarations come a trio of constructors. The first (line 35) establishes values for all properties, the second (line 44) allows instantiation without any arguments (this is required if you want the class to be usable as a bean, such as on a Java Server Page, a very common use for data classes like this), and the last (line 48) fills in just the values we've indicated must not be null. Notice that none of the constructors set the value of id; this is the responsibility of Hibernate when we get the object out of the database, or insert it for the first time.

Consistent with that, the setId() method on line 57 is protected, as requested in our id mapping. The rest of the getters and setters are not surprising; this is all pretty much boilerplate code (which we've all written too many times), which is why it's so nice to be able to have the Hibernate extensions generate it for us.

If you want to use Hibernate's generated code as a starting point and then add some business logic or other features to the generated class, be aware that all your changes will be silently discarded the next time you run the code generator. In such a project you will want to be sure the hand-tweaked classes are not regenerated by any Ant build target.






Even though we're having Hibernate generate our data classes in this example, it's important to point out that the getters and setters it creates are more than a nice touch. You need to put these in your persistent classes for any properties you want to persist, since Hibernate's fundamental persistence architecture is based on reflective access to JavaBeans™-style properties. They don't need to be public if you don't want them to be; Hibernate has ways of getting at even properties declared protected or private, but they do need accessor methods. Think of it as enforcing good object design; the Hibernate team wants to keep the implementation details of actual instance variables cleanly separated from the persistence mechanism.
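To make the reflective-access idea concrete, here is a minimal sketch of the kind of property access a persistence layer performs (the class BeanAccessDemo and its helpers are ours, not Hibernate code; real Hibernate can also reach non-public accessors via setAccessible, which we skip here):

```java
import java.lang.reflect.Method;

// Minimal sketch of reflective JavaBeans-style property access: find
// and invoke getTitle()/setTitle() from the property name "title",
// with no compile-time knowledge of the bean class.
public class BeanAccessDemo {
    public static class Track {
        private String title;                         // field stays private;
        public String getTitle() { return title; }    // only the accessors
        public void setTitle(String t) { title = t; } // are used reflectively
    }

    private static String capitalize(String name) {
        return Character.toUpperCase(name.charAt(0)) + name.substring(1);
    }

    public static Object getProperty(Object bean, String name) {
        try {
            Method getter = bean.getClass().getMethod("get" + capitalize(name));
            return getter.invoke(bean);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void setProperty(Object bean, String name, Object value) {
        try {
            for (Method m : bean.getClass().getMethods()) {
                if (m.getName().equals("set" + capitalize(name))
                        && m.getParameterCount() == 1) {
                    m.invoke(bean, value);
                    return;
                }
            }
            throw new NoSuchMethodException("set" + capitalize(name));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Track t = new Track();
        setProperty(t, "title", "Motherless Child");
        System.out.println(getProperty(t, "title")); // Motherless Child
    }
}
```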



2.3 Cooking Up a Schema
That was pretty easy, wasn't it? You'll be happy to learn that creating database tables is a very similar process. As with code generation, you've already done most of the work in coming up with the mapping document. All that's left is to set up and run the schema generation tool.

2.3.1 How do I do that?
The first step is to tell Hibernate which database we're using and how to connect to it. Create a file named hibernate.properties in the src directory, with the content shown in Example 2-4.


Example 2-4. Setting up hibernate.properties
hibernate.dialect=net.sf.hibernate.dialect.HSQLDialect
hibernate.connection.driver_class=org.hsqldb.jdbcDriver
hibernate.connection.url=jdbc:hsqldb:data/music
hibernate.connection.username=sa
hibernate.connection.password=


In addition to establishing the SQL dialect we are using, this tells Hibernate how to establish a connection to the database using the JDBC driver that ships as part of the HSQLDB database JAR archive, and that the data should live in the data directory we've created—in the database named music. The username and empty password (indeed, all these values) should be familiar from the experiment we ran at the end of Chapter 1.

Notice that we're using a relative path to specify the database filename. This works fine in our examples, since we're using Ant to control the working directory. If you copy this for use in a web application or other environment, though, you'll likely need to be more explicit about the location of the file.
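For instance, an absolute path in the connection URL makes the database location independent of wherever the process happens to start (the path below is purely hypothetical; substitute your own project location):

```properties
# Hypothetical example: an absolute path avoids any dependence on
# the process's working directory.
hibernate.connection.url=jdbc:hsqldb:/home/jim/music-app/data/music
```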






You can put the properties file in other places, and give it other names, or use entirely different ways of getting the properties into Hibernate, but this is the default place it will look, so it's the path of least resistance (or, I guess, least runtime configuration).

We also need to add some new pieces to our build file, shown in Example 2-5. This is a somewhat substantial addition, because we need to compile our Java source in order to use the schema generation tool, which relies on reflection to get its details right. Add these targets right before the closing </project> tag at the end of build.xml.

Example 2-5. Ant build file additions for compilation and schema generation

1  <!-- Create our runtime subdirectories and copy resources into them -->
2  <target name="prepare" description="Sets up build structures">
3    <mkdir dir="${class.root}"/>
4
5    <!-- Copy our property files and O/R mappings for use at runtime -->
6    <copy todir="${class.root}" >
7      <fileset dir="${source.root}" >
8        <include name="**/*.properties"/>
9        <include name="**/*.hbm.xml"/>
10     </fileset>
11   </copy>
12 </target>
13
14 <!-- Compile the java source of the project -->
15 <target name="compile" depends="prepare"
16         description="Compiles all Java classes">
17   <javac srcdir="${source.root}"
18          destdir="${class.root}"
19          debug="on"
20          optimize="off"
21          deprecation="on">
22     <classpath refid="project.class.path"/>
23   </javac>
24 </target>
25
26 <!-- Generate the schemas for all mapping files in our class tree -->
27 <target name="schema" depends="compile"
28         description="Generate DB schema from the O/R mapping files">
29
30   <!-- Teach Ant how to use Hibernate's schema generation tool -->
31   <taskdef name="schemaexport"
32            classname="net.sf.hibernate.tool.hbm2ddl.SchemaExportTask"
33            classpathref="project.class.path"/>
34
35   <schemaexport properties="${class.root}/hibernate.properties"
36                 quiet="no" text="no" drop="no" delimiter=";">
37     <fileset dir="${class.root}">
38       <include name="**/*.hbm.xml"/>
39     </fileset>
40   </schemaexport>
41 </target>


First we add a prepare target that is intended to be used by other targets more than from the command line. Its purpose is to create, if necessary, the classes directory into which we're going to compile, and then copy any properties and mapping files found in the src directory hierarchy to corresponding directories in the classes hierarchy. This hierarchical copy operation (using the special '**/*' pattern) is a nice feature of Ant, enabling us to define and edit resources alongside the source files that use them, while making those resources available at runtime via the class loader.

The aptly named compile target at line 14 uses the built-in javac task to compile all the Java source files found in the src tree to the classes tree. Happily, this task also supports the project class path we've set up, so the compiler can find all the libraries we're using. The depends="prepare" attribute in the target definition tells Ant that before running the compile target, prepare must be run. Ant manages dependencies so that when you're building multiple targets with related dependencies, they are executed in the right order, and each dependency gets executed only once, even if it is mentioned by multiple targets.

If you're accustomed to using shell scripts to compile a lot of Java source, you'll be surprised by how quickly the compilation happens. Ant invokes the Java compiler within the same virtual machine that it is using, so there is no process startup delay for each compilation.

Finally, after all this groundwork, we can write the target we really wanted to! The schema target (line 26) depends on compile, so all our Java classes will be compiled and available for inspection when the schema generator runs. It uses taskdef internally at line 31 to define the schemaexport task that runs the Hibernate schema export tool, in the same way we provided access to the code generation tool at the top of the file. It then invokes this tool and tells it to generate the database schema associated with any mapping documents found in the classes tree.

There are a number of parameters you can give the schema export tool to configure the way it works. In this example (at line 35) we're telling it to display the SQL it runs so we can watch what it's doing (quiet="no"), and to interact with the database directly, creating the schema rather than merely writing out a DDL file we could import later, or only dropping the existing tables (text="no", drop="no"). For more details about these and other configuration options, consult the Hibernate reference manual.
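As a variant, if you'd rather review the DDL before touching the database, the task can write it to a file instead. This is a sketch, assuming the task's output attribute names the destination file (check the reference manual for your Hibernate version before relying on it):

```xml
<!-- Variant: write the DDL to a file for review instead of executing
     it against the database (text="yes"). -->
<schemaexport properties="${class.root}/hibernate.properties"
              quiet="no" text="yes" drop="no" delimiter=";"
              output="schema.sql">
  <fileset dir="${class.root}">
    <include name="**/*.hbm.xml"/>
  </fileset>
</schemaexport>
```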

You may be wondering why the taskdef for the schema update tool is inside our schema target, rather than at the top of the build file, next to the one for hbm2java. Well, I wanted it up there too, but I ran into a snag that's worth explaining. I got strange error messages the first time I tried to build the schema target, complaining there was no hibernate.properties on the class path and our compiled Track class couldn't be found. When I ran it again, it worked. Some detective work using ant -verbose revealed that if the classes directory didn't exist when the taskdef was encountered, Ant helpfully removed it from the class path. Since a taskdef can't have its own dependencies, the solution is to move it into the schema target, giving it the benefit of that target's dependencies, ensuring the classes directory exists by the time the taskdef is processed.






With these additions, we're ready to generate the schema for our TRACK table.

You might think the drop="no" setting in our schema task means you can use it to update the schema—it won't drop the tables, right? Alas, this is a misleading parameter name: it means it won't just drop the tables, rather it will go ahead and generate the schema after dropping them. Much as you want to avoid the codegen task after making any changes to the generated Java source, you mustn't export the schema if you've put any data into the database. Luckily, there is another tool you can use for incremental schema updates that works much the same way, as long as your JDBC driver is powerful enough. This SchemaUpdate tool can be used with an Ant taskdef too.
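Hooking up the incremental update tool looks much like what we did for schema export. A sketch, assuming the Hibernate Extensions' task class follows the same naming pattern (net.sf.hibernate.tool.hbm2ddl.SchemaUpdateTask) and accepts similar attributes; verify against your distribution:

```xml
<!-- Sketch of a target that updates an existing schema in place,
     preserving data, rather than dropping and recreating tables. -->
<target name="schema-update" depends="compile"
        description="Update DB schema from the O/R mapping files">
  <taskdef name="schemaupdate"
           classname="net.sf.hibernate.tool.hbm2ddl.SchemaUpdateTask"
           classpathref="project.class.path"/>
  <schemaupdate properties="${class.root}/hibernate.properties"
                quiet="no">
    <fileset dir="${class.root}">
      <include name="**/*.hbm.xml"/>
    </fileset>
  </schemaupdate>
</target>
```

As with schemaexport, the taskdef lives inside the target so the classes directory exists before the class path is evaluated.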






Because we've asked the schema export task not to be 'quiet,' we want it to generate some log entries for us. In order for that to work, we need to configure log4j, the logging environment used by Hibernate. The easiest way to do this is to make a log4j.properties file available at the root of the class path. We can take advantage of our existing prepare target to copy this from the src to the classes directory at the same time it copies Hibernate's properties. Create a file named log4j.properties in the src directory with the content shown in Example 2-6. An easy way to do this is to copy the file out of the src directory in the Hibernate distribution you downloaded, since it's provided for use by their own examples. If you're typing it in yourself, you can skip the blocks that are commented out; they are provided to suggest useful logging alternatives.

Example 2-6. The logging configuration file, log4j.properties
### direct log messages to stdout ###
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n

### direct messages to file hibernate.log ###
#log4j.appender.file=org.apache.log4j.FileAppender
#log4j.appender.file.File=hibernate.log
#log4j.appender.file.layout=org.apache.log4j.PatternLayout
#log4j.appender.file.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n

### set log levels - for more verbose logging change 'info' to 'debug' ###
log4j.rootLogger=warn, stdout
log4j.logger.net.sf.hibernate=info

### log just the SQL
#log4j.logger.net.sf.hibernate.SQL=debug

### log JDBC bind parameters ###
log4j.logger.net.sf.hibernate.type=info

### log schema export/update ###
log4j.logger.net.sf.hibernate.tool.hbm2ddl=debug

### log cache activity ###
#log4j.logger.net.sf.hibernate.cache=debug

### enable the following line if you want to track down connection ###
### leakages when using DriverManagerConnectionProvider ###
#log4j.logger.net.sf.hibernate.connection.DriverManagerConnectionProvider=trace




With the log configuration in place, you might want to edit the codegen target in build.xml so that it, too, depends on our new prepare target. This will ensure logging is configured whenever we use it, preventing the warnings we saw when first running it. As noted in the tip about class paths and task definitions in the previous section, though, to make it work the very first time you'll have to move the taskdef for hbm2java inside the codegen target, in the same way we put schemaexport inside the schema target.






Time to make a schema! From the project directory, execute the command ant schema. You'll see output similar to Example 2-7 as the classes directory is created and populated with resources, the Java source is compiled,[2.1] and the schema generator is run.

[2.1] We're assuming you've already generated the code shown in Example 2-3, or there won't be any Java source to compile, and the schema generation will fail. The schema target doesn't invoke codegen to automatically generate code, in case you've manually extended any of your generated classes.

Example 2-7. Output from building the schema using HSQLDB's embedded database server
% ant schema
Buildfile: build.xml

prepare:
    [mkdir] Created dir: /Users/jim/Documents/Work/OReilly/Hibernate/Examples/ch02/classes
     [copy] Copying 3 files to /Users/jim/Documents/Work/OReilly/Hibernate/Examples/ch02/classes

compile:
    [javac] Compiling 1 source file to /Users/jim/Documents/Work/OReilly/Hibernate/Examples/ch02/classes

schema:
[schemaexport] 23:50:36,165 INFO Environment:432 - Hibernate 2.1.1
[schemaexport] 23:50:36,202 INFO Environment:466 - loaded properties from resource hibernate.properties: {hibernate.connection.username=sa, hibernate.connection.password=, hibernate.cglib.use_reflection_optimizer=true, hibernate.dialect=net.sf.hibernate.dialect.HSQLDialect, hibernate.connection.url=jdbc:hsqldb:data/music, hibernate.connection.driver_class=org.hsqldb.jdbcDriver}
[schemaexport] 23:50:36,310 INFO Environment:481 - using CGLIB reflection optimizer
[schemaexport] 23:50:36,384 INFO Configuration:166 - Mapping file: /Users/jim/Documents/Work/OReilly/Hibernate/Examples/ch02/classes/com/oreilly/hh/Track.hbm.xml
[schemaexport] 23:50:37,409 INFO Binder:225 - Mapping class: com.oreilly.hh.Track -> TRACK
[schemaexport] 23:50:37,928 INFO Dialect:82 - Using dialect: net.sf.hibernate.dialect.HSQLDialect
[schemaexport] 23:50:37,942 INFO Configuration:584 - processing one-to-many association mappings
[schemaexport] 23:50:37,947 INFO Configuration:593 - processing one-to-one association property references
[schemaexport] 23:50:37,956 INFO Configuration:618 - processing foreign key constraints
[schemaexport] 23:50:38,113 INFO Configuration:584 - processing one-to-many association mappings
[schemaexport] 23:50:38,124 INFO Configuration:593 - processing one-to-one association property references
[schemaexport] 23:50:38,132 INFO Configuration:618 - processing foreign key constraints
[schemaexport] 23:50:38,149 INFO SchemaExport:98 - Running hbm2ddl schema export
[schemaexport] 23:50:38,154 INFO SchemaExport:117 - exporting generated schema to database
[schemaexport] 23:50:38,232 INFO DriverManagerConnectionProvider:41 - Using Hibernate built-in connection pool (not for production use!)
[schemaexport] 23:50:38,238 INFO DriverManagerConnectionProvider:42 - Hibernate connection pool size: 20
[schemaexport] 23:50:38,278 INFO DriverManagerConnectionProvider:71 - using driver: org.hsqldb.jdbcDriver at URL: jdbc:hsqldb:data/music
[schemaexport] 23:50:38,283 INFO DriverManagerConnectionProvider:72 - connection properties: {user=sa, password=}
[schemaexport] drop table TRACK if exists
[schemaexport] 23:50:39,083 DEBUG SchemaExport:132 - drop table TRACK if exists
[schemaexport] create table TRACK (
[schemaexport]    TRACK_ID INTEGER NOT NULL IDENTITY,
[schemaexport]    title VARCHAR(255) not null,
[schemaexport]    filePath VARCHAR(255) not null,
[schemaexport]    playTime TIME,
[schemaexport]    added DATE,
[schemaexport]    volume SMALLINT
[schemaexport] )
[schemaexport] 23:50:39,113 DEBUG SchemaExport:149 - create table TRACK (
[schemaexport]    TRACK_ID INTEGER NOT NULL IDENTITY,
[schemaexport]    title VARCHAR(255) not null,
[schemaexport]    filePath VARCHAR(255) not null,
[schemaexport]    playTime TIME,
[schemaexport]    added DATE,
[schemaexport]    volume SMALLINT
[schemaexport] )
[schemaexport] 23:50:39,142 INFO SchemaExport:160 - schema export complete
[schemaexport] 23:50:39,178 INFO DriverManagerConnectionProvider:137 - cleaning up connection pool: jdbc:hsqldb:data/music

BUILD SUCCESSFUL
Total time: 10 seconds




Toward the end of the schemaexport section you can see the actual SQL used by Hibernate to create the TRACK table. If you look at the start of the music.script file in the data directory, you'll see it's been incorporated into the database. For a slightly more friendly (and perhaps convincing) way to see it, execute ant db to fire up the HSQLDB graphical interface, as shown in Figure 2-1.


Figure 2-1. The database interface with our new TRACK table expanded, and a query




2.3.2 What just happened?
We were able to use Hibernate to create a data table in which we can persist instances of the Java class it created for us. We didn't have to type a single line of SQL or Java! Of course, our table is still empty at this point. Let's change that! The next chapter will look at the stuff you probably most want to see: using Hibernate from within a Java program to turn objects into database entries and vice versa.

NOTE


It's about time? Yeah, I suppose. But at least you didn't have to figure out all these steps from scratch!


Before diving into that cool task, it's worth taking a moment to reflect on how much we've been able to accomplish with a couple of XML and properties files. Hopefully you're starting to see the power and convenience that make Hibernate so exciting.

2.4 Connecting Hibernate to MySQL

Example 2-8. Setting up the MySQL database notebook_db as a Hibernate playground
% mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 764 to server version: 3.23.44-Max-log

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> CREATE DATABASE notebook_db;
Query OK, 1 row affected (0.00 sec)

mysql> GRANT ALL ON notebook_db.* TO jim IDENTIFIED BY "s3cret";
Query OK, 0 rows affected (0.20 sec)

mysql> quit;
Bye


NOTE


Hopefully you'll use a less guessable password than this in your real databases!


Make a note of the database name you create, as well as the username and password that can access it. These will need to be entered into hibernate.properties, as shown in Example 2-9.

Next, you'll need a JDBC driver capable of connecting to MySQL. If you're already using MySQL for your Java projects, you'll have one. Otherwise, you can download Connector/J from www.mysql.com/downloads/api-jdbc-stable.html . However you obtain it, copy the driver library jar (which will be named something like mysql-connector-java-3.0.10-stable-bin.jar) to your project's lib directory alongside the HSQLDB, Hibernate, and other libraries that are already there. It's fine to have drivers for several different databases available to your code; they won't conflict with each other, since the configuration file specifies which driver class to use.

Speaking of which, it's time to edit hibernate.properties to use the new driver and database we've just made available. Example 2-9 shows how it is set up to connect to my MySQL instance using the database created in Example 2-8. You'll need to tweak these values to correspond to your own server, database, and the login credentials you chose. (If you're using MM.MySQL, the older incarnation of the MySQL JDBC driver, the driver_class will need to be org.gjt.mm.mysql.Driver rather than com.mysql.jdbc.Driver.)

Example 2-9. Changes to hibernate.properties to connect to the new MySQL database
hibernate.dialect=net.sf.hibernate.dialect.MySQLDialect
hibernate.connection.driver_class=com.mysql.jdbc.Driver
hibernate.connection.url=jdbc:mysql://slant.reseune.pvt/notebook_db
hibernate.connection.username=jim
hibernate.connection.password=s3cret




The URL on the third line will need to reflect your server; you won't be able to resolve my private internal domain name, let alone route to it.
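If you want a quick sanity check on where a JDBC URL actually points before running the build, a few lines of plain Java will do it. The class and method names below are my own illustrative choices, not part of Hibernate or Connector/J:

```java
// Illustrative helper (not part of Hibernate or the MySQL driver):
// extract the server portion of a MySQL JDBC URL so you can confirm
// that hibernate.connection.url names *your* server, not mine.
public class JdbcUrlCheck {
    static String serverOf(String url) {
        final String prefix = "jdbc:mysql://";
        if (!url.startsWith(prefix)) {
            throw new IllegalArgumentException("Not a MySQL JDBC URL: " + url);
        }
        String rest = url.substring(prefix.length());
        int slash = rest.indexOf('/');
        // Everything up to the next '/' is the host (plus optional :port)
        return (slash < 0) ? rest : rest.substring(0, slash);
    }

    public static void main(String[] args) {
        // Prints "slant.reseune.pvt" for the URL used in Example 2-9
        System.out.println(serverOf("jdbc:mysql://slant.reseune.pvt/notebook_db"));
    }
}
```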

Once this is all set, you can rerun the schema creation example that was set up in the previous section. This time it will build the schema on your MySQL server rather than in the embedded HSQLDB world. You'll see output like that in Example 2-10.

Example 2-10. Schema creation when connecting to MySQL
% ant schema
Buildfile: build.xml

prepare:

compile:

schema:
[schemaexport] 23:02:13,614 INFO Environment:462 - Hibernate 2.1.2
[schemaexport] 23:02:13,659 INFO Environment:496 - loaded properties from resource hibernate.properties: {hibernate.connection.username=jim, hibernate.connection.password=s3cret, hibernate.cglib.use_reflection_optimizer=true, hibernate.dialect=net.sf.hibernate.dialect.MySQLDialect, hibernate.connection.url=jdbc:mysql://slant.reseune.pvt/notebook_db, hibernate.connection.driver_class=com.mysql.jdbc.Driver}
[schemaexport] 23:02:13,711 INFO Environment:519 - using CGLIB reflection optimizer
[schemaexport] 23:02:13,819 INFO Configuration:166 - Mapping file: /Users/jim/Documents/Work/OReilly/Hibernate/Examples/ch02/classes/com/oreilly/hh/Track.hbm.xml
[schemaexport] 23:02:15,568 INFO Binder:229 - Mapping class: com.oreilly.hh.Track -> TRACK
[schemaexport] 23:02:16,164 INFO Dialect:82 - Using dialect: net.sf.hibernate.dialect.MySQLDialect
[schemaexport] 23:02:16,175 INFO Configuration:595 - processing one-to-many association mappings
[schemaexport] 23:02:16,188 INFO Configuration:604 - processing one-to-one association property references
[schemaexport] 23:02:16,209 INFO Configuration:629 - processing foreign key constraints
[schemaexport] 23:02:16,429 INFO Configuration:595 - processing one-to-many association mappings
[schemaexport] 23:02:16,436 INFO Configuration:604 - processing one-to-one association property references
[schemaexport] 23:02:16,440 INFO Configuration:629 - processing foreign key constraints
[schemaexport] 23:02:16,470 INFO SchemaExport:98 - Running hbm2ddl schema export
[schemaexport] 23:02:16,488 INFO SchemaExport:117 - exporting generated schema to database
[schemaexport] 23:02:16,543 INFO DriverManagerConnectionProvider:41 - Using Hibernate built-in connection pool (not for production use!)
[schemaexport] 23:02:16,549 INFO DriverManagerConnectionProvider:42 - Hibernate connection pool size: 20
[schemaexport] 23:02:16,583 INFO DriverManagerConnectionProvider:71 - using driver: com.mysql.jdbc.Driver at URL: jdbc:mysql://slant.reseune.pvt/notebook_db
[schemaexport] 23:02:16,597 INFO DriverManagerConnectionProvider:72 - connection properties: {user=jim, password=s3cret}
[schemaexport] drop table if exists TRACK
[schemaexport] 23:02:18,129 DEBUG SchemaExport:132 - drop table if exists TRACK
[schemaexport] create table TRACK (
[schemaexport]    TRACK_ID INTEGER NOT NULL AUTO_INCREMENT,
[schemaexport]    title VARCHAR(255) not null,
[schemaexport]    filePath VARCHAR(255) not null,
[schemaexport]    playTime TIME,
[schemaexport]    added DATE,
[schemaexport]    volume SMALLINT,
[schemaexport]    primary key (TRACK_ID)
[schemaexport] )
[schemaexport] 23:02:18,181 DEBUG SchemaExport:149 - create table TRACK (
[schemaexport]    TRACK_ID INTEGER NOT NULL AUTO_INCREMENT,
[schemaexport]    title VARCHAR(255) not null,
[schemaexport]    filePath VARCHAR(255) not null,
[schemaexport]    playTime TIME,
[schemaexport]    added DATE,
[schemaexport]    volume SMALLINT,
[schemaexport]    primary key (TRACK_ID)
[schemaexport] )
[schemaexport] 23:02:18,311 INFO SchemaExport:160 - schema export complete
[schemaexport] 23:02:18,374 INFO DriverManagerConnectionProvider:137 - cleaning up connection pool: jdbc:mysql://slant.reseune.pvt/notebook_db

BUILD SUCCESSFUL
Total time: 9 seconds



Example 2-11. Checking the newly created MySQL schema
% mysql -u jim -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 772 to server version: 3.23.44-Max-log

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> USE notebook_db
Database changed
mysql> SHOW TABLES;
+-----------------------+
| Tables_in_notebook_db |
+-----------------------+
| TRACK                 |
+-----------------------+
1 row in set (0.03 sec)

mysql> DESCRIBE TRACK;
+----------+--------------+------+-----+---------+----------------+
| Field    | Type         | Null | Key | Default | Extra          |
+----------+--------------+------+-----+---------+----------------+
| TRACK_ID | int(11)      |      | PRI | NULL    | auto_increment |
| title    | varchar(255) |      |     |         |                |
| filePath | varchar(255) |      |     |         |                |
| playTime | time         | YES  |     | NULL    |                |
| added    | date         | YES  |     | NULL    |                |
| volume   | smallint(6)  | YES  |     | NULL    |                |
+----------+--------------+------+-----+---------+----------------+
6 rows in set (0.02 sec)

mysql> SELECT * FROM TRACK;
Empty set (0.00 sec)

mysql> quit;
Bye


It's not surprising to find the table empty. We'll investigate how to populate it with data in the first part of Chapter 3.

If you've followed this example and set up a MySQL database, and you'd prefer to continue working with it throughout the rest of the book, feel free to do so, but bear in mind you'll need to know how to look at the results of the examples yourself. The text will assume you're still working with HSQLDB, and it will show you how to check your progress in that context. You will also see slight differences in the schema, as databases all have slightly different column types and features. Apart from these minor details, it really makes no difference what database you're using—that's part of the appeal of an O/R mapping layer like Hibernate.
