Protege4ClientServer

From Protege Wiki
Revision as of 18:07, September 29, 2012 by Tredmond (talk | contribs) (The Local Copy/Sandbox)


Introduction

These are some pages under development to document the Protege 4 Server. We are hoping to release an early alpha soon.

What is it?

The Protege OWL Server provides a platform for collaborative editing and version control of a collection of ontologies. The Protege server tracks changes made to its ontologies, enforces an access control policy for its documents and checks for conflicts between its clients. When used with the Protege client, ontology editors can view and modify a shared ontology in parallel. If an editor chooses, he can watch changes made by other editors as they occur. To change an ontology, an editor first makes the changes in his local copy of the ontology. When he is happy with his changes, he can commit them, making them available to other editors of the ontology. Alternatively, an editor making changes to his local copy can save the changes and commit them in a later session.

In addition, the Protege OWL Server can be used as something more like a simple version control system. We are developing a set of command line tools that will be able to use a Protege OWL Server to provide such traditional version control services as checkin, checkout, update, commit and history query commands. The Protege 4 client can be used in this manner as well: an ontology editor can choose not to turn on auto-update and make all his updates and commits manually.
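To give a feel for the intended workflow, a purely hypothetical session with such tools might look like the sketch below. The tools are still under development, so the command names and syntax shown here are illustrative only and will likely differ from the released versions.

```text
# Hypothetical command line session -- names and syntax are illustrative only.
checkout myserver pizza.owl     # fetch a local copy of a server ontology
...edit pizza.owl with any OWL tool...
update pizza.owl                # merge in changes committed by other editors
commit pizza.owl                # publish the local changes to the server
history pizza.owl               # query the change history of the ontology
```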

Comparison with the Protege 3 Server

There are several differences between the Protege 3 Server and the Protege 4 Server:

  • The local copy. In Protege 3, when a client connects to the server, any change made in the client is immediately reflected on the server. In Protege 4, in contrast, changes only get propagated to the Protege server when the user commits the change. This allows a user of a Protege client to consider his changes before sending the changes to the server. This is a significant enough concept that we describe it in more detail below.
  • Decoupled client-server. In Protege 3 when the server goes down or the network is interrupted, the Protege 3 client either freezes or crashes. In contrast, in Protege 4, if the server stops or is inaccessible, the Protege client continues running normally. It is only when some server operation is attempted, such as an update or commit, that the user may become aware that there is a problem communicating with the server.
  • Commit granularity. In Protege 3, changes are sent to the server as they are made. In Protege 4, a collection of changes is committed only when the user is ready, and at commit time the user is able to add a comment describing the nature of the changes.
  • Optional automatic update. In Protege 3, a user sees edits from other users as they occur. In Protege 4, this is optional.

The Local Copy/Sandbox

With the Protege 4 client server, when a user checks an ontology out from the server, he gets a separate copy of the server ontology. The user can then modify this copy in any way that he likes and the changes will not go to the server until the user commits the changes.

Videos

Here is the first of several videos that I am going to make to demonstrate server features:

Performance considerations for large ontologies on a slow network

Installation details

Hopefully, by the time of the release, we will have automated the installation process and these pages will be needed only by advanced users.

The purpose of these pages is to let people know what is installed by the server installation and what options the user can change. The installation setup is slightly different depending on whether the operating system in question is Linux, OS X or Windows. The Protege Server should run on other platforms as well, though we don't yet support its installation. The key things that need to be figured out for an installation on some other platform are obtaining a version of Java that is at least Java 1.6 and determining how to make the Protege 4 Server start at boot time.
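As a concrete illustration of the Java requirement, the sketch below shows one way to test a Java version string. The at_least_java_6 helper is hypothetical and not part of the Protege distribution; it simply encodes the "at least Java 1.6" rule.

```shell
#!/bin/sh
# at_least_java_6 (hypothetical helper): succeeds when the given Java
# version string is at least 1.6, which is what the Protege 4 Server needs.
at_least_java_6() {
    major=${1%%.*}                 # "1.7.0_06" -> 1, "11.0.2" -> 11
    rest=${1#*.}
    minor=${rest%%.*}              # "1.7.0_06" -> 7
    if [ "$major" != "1" ]; then
        [ "$major" -ge 6 ]         # new-style numbering: "9", "11.0.2", ...
    else
        [ "$minor" -ge 6 ]         # old-style numbering: "1.6.0_33", "1.7.0_06", ...
    fi
}

# Typical use would parse the version out of `java -version`, for example:
#   version=$(java -version 2>&1 | sed -n 's/.*"\([0-9][0-9._]*\)".*/\1/p')
at_least_java_6 "1.7.0_06" && echo "1.7.0_06 is new enough"
```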

Linux

On a Linux system the following files and directories are created:

  • /usr/local/protege which contains the core server installation in the subdirectory server, the user's data files in the subdirectory data and some command line utilities in the bin subdirectory.
  • /etc/init.d/protege which is a script that ensures that the Protege server is started at boot time.
  • /etc/default/protege which is a properties file that configures the init.d script above. The user will have to modify this file before the server will run correctly.
  • /etc/rc#.d/K20protege which are symbolic links to /etc/init.d/protege for '#=0,1,6'. These scripts ensure that the Protege Server is correctly shut down when the computer is stopped. In particular, if the server has unsaved files (a temporary condition in any case) this script gives Protege some time to save the files before the system exits. The best way to configure these files is through the update-rc.d script as explained below.
  • /etc/rc#.d/S20protege which are symbolic links to /etc/init.d/protege for '#=2,3,4,5'. These scripts ensure that the Protege Server is running at system startup. The best way to configure these files is through the update-rc.d script as explained below.
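The directory layout above can be sanity-checked with a small script. The check_protege_layout helper below is a hypothetical sketch, not part of the installation; it only verifies the three subdirectories under the installation prefix.

```shell
#!/bin/sh
# check_protege_layout (hypothetical helper): verifies the core directory
# layout described above (server/, data/ and bin/) under a given prefix.
# Prints one line per entry and fails if anything is missing.
check_protege_layout() {
    prefix=${1:-/usr/local/protege}
    status=0
    for sub in server data bin; do
        if [ -d "$prefix/$sub" ]; then
            echo "ok:      $prefix/$sub"
        else
            echo "missing: $prefix/$sub"
            status=1
        fi
    done
    return $status
}

# Example: build a mock layout in a scratch directory and check it.
mock=$(mktemp -d)
mkdir -p "$mock/server" "$mock/data" "$mock/bin"
check_protege_layout "$mock"
```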

When the Protege server runs, it will write some system logs to the /var/log/protege directory.

An example /etc/default/protege file looks like this:

#
# This file goes into /etc/default/protege and holds the default
# settings for the Protege Server.
#

HOSTNAME=`hostname`
PROTEGE_SERVER_PREFIX=/usr/local/protege
PROTEGE_SANDBOX_USER=tredmond
JAVA_CMD=/usr/local/java/jdk1.7.0_06/bin/java
PID=/var/log/protege/PID

The HOSTNAME property tells the Protege 4 server how to advertise itself to the world. On a well-configured desktop or server machine that is not hidden by NAT the given setting will usually work. If not, an IP address works fine. The sandbox user parameter is important and must be changed. This is a user account on the system that is set aside to run the server. Ideally it would be a user account that has minimal access to the system as a whole except for write access to the /usr/local/protege/data directory.

To configure the /etc/rc#.d scripts, first install the protege script into the /etc/init.d directory. Then run the sudo update-rc.d protege defaults command; the results should look something like this:

Neptune:init.d% sudo update-rc.d protege defaults
 Adding system startup for /etc/init.d/protege ...
   /etc/rc0.d/K20protege -> ../init.d/protege
   /etc/rc1.d/K20protege -> ../init.d/protege
   /etc/rc6.d/K20protege -> ../init.d/protege
   /etc/rc2.d/S20protege -> ../init.d/protege
   /etc/rc3.d/S20protege -> ../init.d/protege
   /etc/rc4.d/S20protege -> ../init.d/protege
   /etc/rc5.d/S20protege -> ../init.d/protege
Neptune:init.d% 

The Server can be stopped with the command

               sudo /etc/init.d/protege stop

The Server can be started with the command

               sudo /etc/init.d/protege start

The Server can be restarted with the command

               sudo /etc/init.d/protege restart

The following command will give a first-cut estimate of the status of the server:

               sudo /etc/init.d/protege status
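The status report can only be a first cut because checks of this kind are commonly based on a PID file; the example /etc/default/protege above names one (PID=/var/log/protege/PID). The protege_status helper below is a hypothetical sketch of such a check, not the actual init.d implementation.

```shell
#!/bin/sh
# protege_status (hypothetical helper): reports "running" when the pid
# recorded in the given PID file refers to a live process, else "stopped".
protege_status() {
    pid_file=$1
    if [ -f "$pid_file" ] && kill -0 "$(cat "$pid_file")" 2>/dev/null; then
        echo "running (pid $(cat "$pid_file"))"
    else
        echo "stopped"
    fi
}

# Demonstrate against a scratch PID file holding this shell's own pid.
scratch=$(mktemp -d)
echo $$ > "$scratch/PID"
protege_status "$scratch/PID"
rm "$scratch/PID"
protege_status "$scratch/PID"
```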

OS X

On an OS X system the following files and directories are created:

  • /usr/local/protege which contains the core server installation in the subdirectory server, the user's data files in the subdirectory data and some command line utilities in the bin subdirectory.
  • /Library/LaunchDaemons/org.protege.owl.server.plist which is a launchctl file to ensure that the Protege Server starts at boot time. This file must be edited in order for the Protege Server to be automatically started.

When the Protege server runs, it will write some system logs to the /var/log/protege directory.

The launchctl file is org.protege.owl.server.plist.

Two options in this file must be changed. As in the Linux case, the username is the user under which the Protege server runs. Ideally this user has minimal access to the system as a whole except for write access to the /usr/local/protege/data and /var/log/protege directories.
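Since the plist itself is not reproduced on this page, the sketch below shows the general shape of a launchd daemon plist of this kind. Every path and value in it is an assumption for illustration; consult the installed /Library/LaunchDaemons/org.protege.owl.server.plist for the real contents.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
          "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>org.protege.owl.server</string>
    <!-- Must be changed: the account the server runs as -->
    <key>UserName</key>
    <string>protege</string>
    <!-- Assumed launcher path, for illustration only -->
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/protege/server/run.sh</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <!-- A KeepAlive setting like this is why "launchctl stop"
         restarts the server immediately -->
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
```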

The server can be restarted with the command

       sudo launchctl stop org.protege.owl.server

It will restart immediately after stopping.

Windows

To be determined.