Monday, July 27, 2009

Achieving optimal Authoring performance in v6 of Web Content Management

How can you achieve optimal performance for Authoring in Web Content Management version 6? If you do not tune the system, users can experience slow Authoring performance, which can slow down the content creation process. Performance degradation can manifest itself during save or delete operations, during UI navigation, and during other key Web Content Management functions.

As with any application, Web Content Management needs to be performance tuned for each specific customer environment and implementation.

Achieving optimal Authoring (content creation) performance is a must for all Web Content Management customers. Below we have captured some key steps that all customers should implement to ensure that the Web Content Management application is running optimally.

Recommended performance tuning parameters and guidelines:

1. Consult the Web Content Management 6.0 Best Practices guide
2. Consider Authoring Template design
3. Apply recommended fixes
4. Consult the Web Content Management Specific 6.0 Tuning Guide
5. Consult the WebSphere Portal v6.0 Tuning Guide
6. Tune the Web Content Management database within JCR
7. Disable automatic versioning

1. Consult the Web Content Management 6.0 Best Practices guide

The article "Best practices for using IBM Workplace Web Content Management V6" is available on the developerWorks Web site at the following address:

This guide also contains tuning recommendations. You should perform all tuning steps, taking note of the DB2 collation settings (if using DB2) and recommended caching and optimization steps.

NOTE: The Performance section includes Do's and Don'ts within Web Content Management as well as key parameters for general Web Content Management use and DB2 use. The best practices guide should be used in conjunction with this technote and the Information Center to gain or maintain optimal authoring performance.

2. Consider Authoring Template design

A well-designed Authoring Template is a prerequisite to achieving optimal Web Content Management authoring performance. The performance of various authoring actions varies significantly based on these early design decisions. Special consideration must be given to the number of elements in your Authoring Template's Content Prototype. Each element in the Content Prototype creates a separate node in the Web Content Management repository and adds overhead when performing authoring actions. For optimal performance of authoring actions, it is recommended to limit the number of elements to 10-15. Templates larger than this will affect the performance of key authoring functions.

NOTE: If there is a requirement for a large number of elements in the content prototype, they should be normalized across multiple Authoring Templates.

3. Apply recommended fixes

You might need to apply key code changes packaged as fixes (also called iFixes, for interim fixes) to your current version of both Web Content Management (WCM) and the Java Content Repository (JCR). To identify the recommended fixes for the v6.0.x versions of WCM and JCR, refer to the following documents: "Recommended fixes for Web Content Management performance and syndication 6.0.1" and "Recommended Fixes for Web Content Management (WCM) versions 6.0.1 and later".

NOTE: The recommended fixes for v6.0.1, v6.0.0.3, and v6.0.0.1 are included in v6.0.1.1 and v6.0.0.4.

4. Consult the Web Content Management Specific 6.0 Tuning Guide

The "IBM WebSphere Portal Version 6.0 Web Content Management Tuning Guide Document version 1.0" is available at the following address:

5. Consult the WebSphere Portal v6.0 Tuning Guide

The "IBM WebSphere Portal Version 6.0 Tuning Guide" is available at the following address:

NOTE: A section for Oracle specific tuning has been added.

6. Tune the Web Content Management database within JCR

The JCR Database absolutely must be tuned. To do so, follow the Portal tuning documentation in the WebSphere Portal Information Center, the v6 Best Practices Guide, the WebSphere Portal tuning guide as well as the Web Content Management specific tuning guide. The Web addresses for these resources are all listed above.

For DB2 servers only:

Make sure to run statistics on the Web Content Management database within DB2. In the Web Content Management tuning guide, we have provided a statistics-gathering technique that is more robust than the basic reorgchk command. An excerpt follows:

"We have determined a technique that has the same convenience of the reorgchk command and provides the detailed statistics preferred by the optimizer.

db2 -x -r "runstats.db2" "select rtrim(concat('runstats on table ',concat(rtrim(tabSchema),concat('.',concat(rtrim(tabname),' on all columns with distribution on all columns and sampled detailed indexes all allow write access'))))) from syscat.tables where type='T'"

db2 -v -f "runstats.db2"

The first command is used to create a file, runstats.db2, which contains all of the runstats commands for all of the tables. The second command uses the db2 command processor to run these commands. "

7. Disable automatic versioning

An additional tuning recommendation is to disable automatic versioning of objects on save (or publish) where possible. Automatic versioning has an impact on save and publish performance, and therefore should only be enabled for object types where it is required.

The following configuration options have been added to the \wcm\shared\app\config\wcmservices\ file to control versioning:


Valid values are as follows:

* Never - To disable versioning for the specified object type

* Always - To enable versioning for the specified object type

* Manual - To enable manual versioning for the specified object type. Typically used for Content objects. Note that the "Manual" option adds a "Save & Version" command to the authoring UI so that authors can create versions manually.

NOTE: If a setting (for example, versioningStrategy.Content) is not specified, then the option will default to 'Always'.
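To illustrate, a snippet of the versioning configuration might look like the following. Only versioningStrategy.Content appears above; the versioningStrategy.<ObjectType> naming pattern for other object types is an assumption here:

```
# Let authors create versions of content items explicitly via the
# "Save & Version" command, instead of versioning on every save:
versioningStrategy.Content=Manual

# Hypothetical entry for another object type, assuming the same
# versioningStrategy.<ObjectType> pattern:
versioningStrategy.Component=Never
```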

Saturday, July 25, 2009

Alternatives for fixing unchecked redirect vulnerabilities

Unchecked redirect vulnerabilities are annoying for our customers to fix. Sometimes developers need to link to a constantly changing selection of partners, and they always have to support different redirect URLs for testing, integration, and production. These redirect mechanisms sometimes span different applications that live on the same domain, too. Given the unstable nature of the “targets” and the cross-application centralization of these redirect mechanisms, we need some smarter alternatives.

What we’ve been recommending customers do to accommodate this target flux lets them maintain a dynamic target without putting themselves at risk of phishing attacks. There are a number of creative solutions, and if you’ve got any more, please comment:

1. Change the functionality to use POST instead of GET, and require a POST before redirecting. The attacker can’t force your browser to issue a POST without bouncing you off an intermediary, evil site. And if they do that, they could just redirect you to the phishing page directly anyway.
2. Set the target of the redirect in a cookie and let the “bounce” functionality read it from there. The attacker can’t force your browser to send arbitrary cookies with cross-site requests, so you’re safe with this technique.
3. Symmetrically encrypt the contents of the redirect target. You can still have a constantly-in-flux list of redirect targets and still maintain assurance that attackers can’t abuse your functionality for phishing.
4. Set the target of the redirect in a session variable and let the “bounce” functionality read it from there. The attacker can’t populate a victim’s session variables without abusing another vulnerability. For some redirect scenarios this may simply shift the dynamic work somewhere else, but at least at that point you have architectural enforcement of your security mechanism.
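As a minimal sketch of alternative 3, the “bounce” endpoint could accept only an opaque encrypted token rather than a raw URL. The class name and key handling below are illustrative assumptions; a real deployment would load the key from protected configuration and use an authenticated cipher mode (for example AES-GCM) for integrity protection:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class RedirectToken {
    private final SecretKey key;

    public RedirectToken(SecretKey key) {
        this.key = key;
    }

    // Turn the redirect target into an opaque, URL-safe token.
    public String seal(String target) throws Exception {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] sealed = cipher.doFinal(target.getBytes(StandardCharsets.UTF_8));
        return Base64.getUrlEncoder().withoutPadding().encodeToString(sealed);
    }

    // Recover the redirect target; a token minted without the server
    // key will not decrypt to a usable URL.
    public String open(String token) throws Exception {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.DECRYPT_MODE, key);
        byte[] plain = cipher.doFinal(Base64.getUrlDecoder().decode(token));
        return new String(plain, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        // Demo key; in practice, load it from protected configuration.
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        RedirectToken rt = new RedirectToken(key);

        String token = rt.seal("https://partner.example.com/landing");
        System.out.println(rt.open(token)); // prints the original target URL
    }
}
```

The partner list can stay in flux: only the server can mint valid tokens, so the redirect parameter is useless to a phisher.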

Single Sign-On fails with SiteMinder due to incorrect agent group settings

After configuring eTrust SiteMinder for WebSphere Portal, you still get prompted for the portal server login after authenticating via the SiteMinder login feature.

TAI not added to agent group in SiteMinder

Resolving the problem

Check the Agent Groups section via the SiteMinder Administration Console. Agent groups can be specified which allow you to add multiple TAIs into one SSO policy so that you aren't required to set up one policy for every server. If you fail to add the relevant TAI to the desired agent group, SSO can fail even after following the configuration steps in the WebSphere Portal Information Center.


Useful Windows commands

* && - Command Chaining
* %SYSTEMROOT%\System32\rcimlby.exe -LaunchRA - Remote Assistance (Windows XP)
* appwiz.cpl - Programs and Features (Formerly Known as "Add or Remove Programs")
* appwiz.cpl @,2 - Turn Windows Features On and Off (Add/Remove Windows Components pane)
* arp - Displays and modifies the IP-to-Physical address translation tables used by address resolution protocol (ARP)
* at - Schedule tasks either locally or remotely without using Scheduled Tasks
* bootsect.exe - Updates the master boot code for hard disk partitions to switch between BOOTMGR and NTLDR
* cacls - Change Access Control List (ACL) permissions on a directory, its subcontents, or files
* calc - Calculator
* chkdsk - Check/Fix the disk surface for physical errors or bad sectors
* cipher - Displays or alters the encryption of directories [files] on NTFS partitions
* cleanmgr.exe - Disk Cleanup
* clip - Redirects output of command line tools to the Windows clipboard
* cls - clear the command line screen
* cmd /k - Run command with command extensions enabled
* color - Sets the default console foreground and background colors in console
* cmd.exe - Default Operating System Shell
* compmgmt.msc - Computer Management
* control.exe /name Microsoft.NetworkAndSharingCenter - Network and Sharing Center
* control keyboard - Keyboard Properties
* control mouse(or main.cpl) - Mouse Properties
* control sysdm.cpl,@0,3 - Advanced Tab of the System Properties dialog
* control userpasswords2 - Opens the classic User Accounts dialog
* desk.cpl - opens the display properties
* devmgmt.msc - Device Manager
* diskmgmt.msc - Disk Management
* diskpart - Disk management from the command line
* dsa.msc - Opens active directory users and computers
* dsquery - Finds any objects in the directory according to criteria
* dxdiag - DirectX Diagnostic Tool
* eventvwr - Windows Event Log (Event Viewer)
* explorer . - Open explorer with the current folder selected.
* explorer /e, . - Open explorer, with folder tree, with current folder selected.
* F7 - View command history
* find - Searches for a text string in a file or files
* findstr - Find a string in a file
* firewall.cpl - Opens the Windows Firewall settings
* fsmgmt.msc - Shared Folders
* fsutil - Perform tasks related to FAT and NTFS file systems
* ftp - Transfers files to and from a computer running an FTP server service
* getmac - Shows the mac address(es) of your network adapter(s)
* gpedit.msc - Group Policy Editor
* httpcfg.exe - HTTP Configuration Utility
* iisreset - To restart IIS
* InetMgr.exe - Internet Information Services (IIS) Manager 7
* InetMgr6.exe - Internet Information Services (IIS) Manager 6
* intl.cpl - Regional and Language Options
* ipconfig - Internet protocol configuration
* lusrmgr.msc - Local Users and Groups Administrator
* msconfig - System Configuration
* notepad - Notepad? ;)
* mmsys.cpl - Sound/Recording/Playback properties
* mode - Configure system devices
* more - Displays one screen of output at a time
* mrt - Microsoft Windows Malicious Software Removal Tool
* mstsc.exe - Remote Desktop Connection
* nbtstat - Displays protocol statistics and current TCP/IP connections using NBT (NetBIOS over TCP/IP)
* ncpa.cpl - Network Connections
* netsh - Display or modify the network configuration of a computer that is currently running
* netstat - Network Statistics
* net statistics - Check computer up time
* net stop - Stops a running service.
* net use - Connects a computer to or disconnects a computer from a shared resource, or displays information about computer connections
* odbcad32.exe - ODBC Data Source Administrator
* pathping - A traceroute that collects detailed packet loss stats
* ping - Determine whether a remote computer is accessible over the network
* powercfg.cpl - Power management control panel applet
* quser - Display information about user sessions on a terminal server
* qwinsta - See disconnected remote desktop sessions
* reg.exe - Console Registry Tool for Windows
* regedit - Registry Editor
* rasdial - Connects to a VPN or a dialup network
* robocopy - Backup/Restore/Copy large amounts of files reliably
* rsop.msc - Resultant Set of Policy (shows the combined effect of all group policies active on the current system/login)
* runas - Run specific tools and programs with different permissions than the user's current logon provides
* sc - Manage anything you want to do with services.
* schtasks - Enables an administrator to create, delete, query, change, run and end scheduled tasks on a local or remote system.
* secpol.msc - Local Security Settings
* services.msc - Services control panel
* set - Displays, sets, or removes cmd.exe environment variables.
* set DIRCMD - Preset dir parameter in cmd.exe
* start - Starts a separate window to run a specified program or command
* start . - Opens the current directory in Windows Explorer
* shutdown.exe - Shutdown or Reboot a local/remote machine
* subst.exe - Associates a path with a drive letter, including local drives
* systeminfo - Displays comprehensive information about the system
* taskkill - terminate tasks by process id (PID) or image name
* tasklist.exe - List Processes on local or a remote machine
* taskmgr.exe - Task Manager
* telephon.cpl - Telephone and Modem properties
* timedate.cpl - Date and Time
* title - Change the title of the CMD window you have open
* tracert - Trace route
* wmic - Windows Management Instrumentation Command-line
* winver.exe - Find Windows Version
* wscui.cpl - Windows Security Center
* wuauclt.exe - Windows Update AutoUpdate Client

How to use the WCM API to retrieve content element information such as image component file size

You would like to use the IBM® Web Content Management (WCM) API to access a content item's element and retrieve information. How can you retrieve the size of an image file stored in an image component?

The WCM API provides methods which allow access to a content item's elements, such as an image component.
Summary: How to access a content's element.

In this example, we retrieve the Image Component for a piece of WCM content:

1. Acquire the user workspace.
2. Set the current document library.
3. Build the document iterator (find your content).
4. Acquire the content's document id.
5. Use the ContentComponent getComponent(java.lang.String name) method to acquire the content element.
6. Use the ImageComponent getImageFileName() method to retrieve the image file name.
7. Use the ImageComponent getImage() method to retrieve the image file and set the image file as a Java byte array.
8. Use the Java byte array length value to determine the image file size in bytes.

NOTE: This example retrieves the image file size in bytes.

The following example code retrieves the image file stored in a content item's image component and determines its size in bytes.

//This example assumes a Workspace, ws, has already been acquired and
//the current document library has been set (steps 1 and 2 above).

//find the document ids of the content items that match by name
DocumentIdIterator docIdIterator = ws.findByName(DocumentTypes.Content, "testcontent");
DocumentId docId;
Content currentContent;

//loop through the document ids found in the iterator
while (docIdIterator.hasNext()) {

docId = (DocumentId) docIdIterator.next();

//get the current content item
currentContent = (Content) ws.getById(docId);

//standard out log message
System.out.println("Log: Testing WCM API: Retrieved content name = "
+ currentContent.getName());

//get the content's image component element by name
ContentComponent myCmpnt = (ContentComponent) currentContent.getComponent("MyImage");

//standard out log message
System.out.println("Log: Testing WCM API: My image component name = "
+ myCmpnt.getName());

//use the instanceof operator to confirm the component type
if (myCmpnt instanceof ImageComponent) {

//cast the ContentComponent to an ImageComponent
ImageComponent imageCmpnt = (ImageComponent) myCmpnt;

//standard out log message
System.out.println("Log: Testing WCM API: My image component: File name = "
+ imageCmpnt.getImageFileName());

//get the image file stored in the current content's image component as a byte array
byte[] imageBytes = imageCmpnt.getImage();

//the byte array's length is the image file size in bytes
System.out.println("Log: Testing WCM API: My image component size in bytes: "
+ imageBytes.length);

} // end if statement

} // end while

Friday, July 24, 2009

Best Practice: Catching and re-throwing Java Exceptions

What is the correct Java™ programming best practice to catch, print, and re-throw Java exceptions?

Problem determination is often hampered by mysterious errors, misleading information, or missing stack traces.

It is a well-known best practice that a Java application should not suppress caught exceptions with blank catch blocks; however, there are more subtle mistakes that Java applications make which can hamper problem determination. Here are three common malpractices:

// #1. Worst -- there is no indication that an exception
// occurred and processing continues.
try {
// do work
} catch (Throwable t) {
}

// #2. Very Bad -- there is an indication that an
// exception occurred but there is no stack trace, and
// processing continues.
try {
// do work
} catch (Throwable t) {
System.err.println("There was a problem " + t.getMessage());
}

// #3. Incorrect. The stack trace of the original
// exception is lost. In the case of an exception such as
// a NullPointerException, getMessage() will return a
// blank string, so there will be little indication of
// the problem.
try {
// do work
} catch (Throwable t) {
throw new ServletException("AUDIT ABC: " + t.getMessage());
}

The problem with #3 is that the ServletException will be shown in SystemOut.log but the stack trace and message will simply point to the ServletException which was created within the catch block. The true root problem is the caught exception, t, which has been lost because of a lack of a call to t.printStackTrace().

The correct way to catch and re-throw an exception is to pass the caught exception object as the "rootCause" or inner-exception parameter to the constructor of the new exception (note that not all exception constructors accept an inner exception, in which case a different exception type should be used). When the exception is later caught and printed to SystemOut.log, the inner exception will be included:

// #4. Correct.
try {
// do work
} catch (Throwable t) {
throw new ServletException("AUDIT ABC: " + t.getMessage(), t);
}

// #5. Correct.
try {
// do work
} catch (Throwable t) {
try {
// Perform some application logging or auditing
} catch (Throwable tAppDebug) {
// A secondary failure while logging must not mask t.
}
throw t;
}

Customers often have general catch blocks in Servlets, MDBs, EJBs and other core components where they catch all un-handled exceptions and re-throw them as new Exceptions, adding application specific debugging information or auditing information. Exception handling malpractices such as those described above have been a source of many major customer outages.
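The difference is easy to demonstrate with a small standalone program (the class and messages below are invented for illustration): when the original exception is passed as the cause, its message and stack trace remain reachable from the wrapper; otherwise they are gone.

```java
public class CauseDemo {

    static void work() {
        // Simulated root failure deep in the application.
        throw new IllegalStateException("root problem");
    }

    public static void main(String[] args) {
        try {
            work();
        } catch (RuntimeException t) {
            // Malpractice #3: the root cause is lost.
            Exception bad = new Exception("AUDIT ABC: " + t.getMessage());
            // Correct: pass t as the cause so the original survives.
            Exception good = new Exception("AUDIT ABC: " + t.getMessage(), t);

            System.out.println(bad.getCause());               // null
            System.out.println(good.getCause().getMessage()); // root problem
        }
    }
}
```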

Finally, there is the case where a developer is "stuck" catching an exception that cannot be re-thrown (for example, via "throw t") because the method signature does not allow it, such as when it declares a restricted list of checked exceptions. In this case, a developer may throw a runtime exception, which is unchecked, although this should clearly be the last option used. It is a hack around the underpinnings of Java's exception model, and a proper fix would involve architectural changes to the code:

// #6. A hack, but it is better than suppression.
try {
// do work
} catch (Throwable t) {
throw new Error(t);
}

Wednesday, July 22, 2009

The Portal Scripting Interface

One of the great advantages of the WebSphere software platform is that it's been built with a great deal of flexibility. A product simply wouldn't bear the WebSphere name if there weren't several different ways to do things. WebSphere Portal Server is no exception. With the release of version 5.1, IBM has added another way to administer the configuration of the Portal. This is sure to delight the poor, overworked Portal administrator who doesn't want to learn the art of XMLAccess and wants to avoid the use of a Web-based administration interface at all costs.

This new feature, named the Portal Scripting Interface, lets the portal administrator configure the system via the command line. It's an extension of the wsadmin command-line interface for WebSphere Application Server and so it uses a similar syntax including the ability to take JACL script files as input (hooray!). Believe me, this is an advance that many Portal professionals have been waiting for ever since the introduction of the wsadmin tool for AppServer.

The end result is an interface that automates portal admin tasks and eases the burden of making minute changes to nested Portal objects or transferring new portal configurations from a developer workstation.

The Portal Scripting Interface, which I'll call PSI from now on (not to be confused with pounds per square inch or some reference to psionic powers), is invoked from within the WPS_HOME\bin directory. The syntax would normally look like this:

WPS_HOME\bin\wpscript.bat
But of course it really isn't that simple. There are some implied parameters in that command. The first one is the connection type, or conntype. By default this value is SOAP, which indicates that the interface should connect to the Portal via the SOAP protocol. Another possible value could be RMI indicating that the interface should connect over IIOP. A third possible value could be NONE indicating that only the command shell should be launched and not explicitly connected to any running instance of the Portal (and not very useful for administering the portal).

The second implied parameter is the port on which to connect to the Portal. If you're using the default SOAP connection type, then the default value for the port parameter is 8882. In a Network Deployment configuration, you'd want to use the default port 8879 to make this connection. The SOAP connection port value of the server you're attempting to connect to can be viewed in the WebSphere Administration Console under Application Servers>WebSphere_Portal>End Points>SOAP Connector Address.

So an explicit string for launching the tool would look like this:

WPS_HOME\bin\wpscript.bat -conntype SOAP -port 8882

As you might expect, when WebSphere Security is enabled for the Portal, proper security credentials have to be supplied:

WPS_HOME\bin\wpscript.bat -conntype SOAP -port 8882 -user wpsadmin -password password

Once the PSI has been launched, you must actually log into the Portal you're attempting to administer. This command, executed in the PSI, uses the syntax of the underlying wsadmin interface for AppServer. Familiarity with JACL or wsadmin would help at this point, but it isn't necessary. Suffice it to say that commands are entered in a hierarchical format. They simply represent underlying beans that are being invoked to do particular tasks. For our Portal login command, we have to invoke the Portal bean. After invoking the PSI, log into the Portal with:

$Portal login wpsadmin password

Congratulations! You're now connected to the Portal and ready to issue administration commands.

If you're using the new virtual portal feature of WPS 5.1, you can log into your virtual portal using a sub-command of the Portal bean. Let's say your virtual portal URI is /wps/myportal/blueportal (where the "blueportal" part of the URI indicates the name of the virtual portal), then the following commands

$Portal setvp blueportal
$Portal login wpsadmin password

will get you logged into the desired virtual portal.

Get Help Fast
Each of the beans available in this tool have help options. If I wanted to get a list of all of the available help options for the Portal bean, I could simply type

$Portal help

This would return the top-level list of help for the Portal bean. If I was more curious about just the login method of the Portal bean (which we used to log into the portal), I could type

$Portal help login

This would return help information specific to that method. The available beans in the PSI include $Portal, $Content, $Portlet, and $Layout, all of which are used below.


Experiment with the help function on each of them to gain a better idea of the hierarchical structure of this interface.

Work Those Index Paths!
Let's say I have a portal page hierarchy that looks like this:

Content Root
  My Portal (label, uniquename: wps.myportal)
    Home (page, uniquename: wps.myportal.home)
    Corporate Directory (page, uniquename: wps.myportal.CorpDir)
    WorkPlace (label, uniquename: wps.myportal.WorkPlace)
      Email (page, uniquename: wps.myportal.WorkPlace.Email)
      Docs (page, uniquename: wps.myportal.WorkPlace.Docs)

Let's think about this content-node hierarchy. If you were to think about the page structure in our example, you could assign some hierarchical values to the objects. For instance, we could say the Content Root is at the root location of the tree, or simply /. The My Portal content-node, one level down in the tree (sort of like a directory off the root filesystem in Unix), would be /0. The Home content-node, as the first child of /0, would be at /0/0. This is called an index path.

Some examples of these index paths are as follows:

/ The root content node.
/0 The first child of the root content node.
/1 The second child of the root content node.
/0/0 The first child of the first child of the root content node.
/0/1 The second child of the first child of the root content node.
/0/2 The third child of the first child of the root content node.
0 The first child of the current content node.
1 The second child of the current content node.

The Content Bean
Content-nodes in the PSI are referenced by the Content bean. This bean lets you search for a particular content-node, view its settings, update the settings, create a new content-node or delete a content-node.

If I was curious to know some details about the Corporate Directory content-node, I could invoke the Content bean to tell me about it. The Content bean uses the following syntax for a search:

$Content find <type> <attribute> <value>

So to find and display some info about the Corporate Directory node with uniquename wps.myportal.CorpDir, I would use:

$Content find any uniquename "wps.myportal.CorpDir"

More specifically, since I know the CorpDir is of type 'page,' I could execute a more concise search:

$Content find page uniquename "wps.myportal.CorpDir"

There are several different kinds of searches you can do. Executing

$Content help search-types

will show you these different searches. Keep in mind the help function for the Content bean is always available using:

$Content help

Okay, so big deal, you found the content-node. It's far more interesting if you display some other info about this node. Before you can display other attributes of the node, you have to "select" it. Finding it isn't enough; you have to "select" it before running an additional command against it:

$Content find any uniquename "wps.myportal.CorpDir" select

Now that you've got it selected, try any of the following:

$Content get type
$Content get "wps.myportal.CorpDir" id
$Content current

The ID of the content-node (or any portal object for that matter) is the UID or Unique ID of the object. With this interface, most of the actions (get or set) are invoked against the UID of an object. Once you have this id, some of the operations become easier.

There are a few special id values that we can use. The most useful is called 'the root.'

$Content select the root

This command will select the root content node. Second in usefulness is the 'the parent' special id:

$Content select the parent

This command will select the immediate parent of the currently selected content node.

Let's say for example that the unique ID of our CorpDir node (the UID that was auto-generated by the Portal when we created the page) was found using one of the commands above. Let's say for argument's sake that this id is _6_00KJL57F9D04J770_D. We could then issue some other commands such as:

$Content get "wps.myportal.CorpDir" id
$Content get _6_00KJL57F9D04J770_D themename
$Content set _6_00KJL57F9D04J770_D theme "Finance"
$Content get _6_00KJL57F9D04J770_D position

If we were interested in what was underneath our WorkPlace content-node, we could select it and then search on that content-node for objects called compositions (or pages to you and me):

$Content find label uniquename "wps.myportal.WorkPlace" select
$Content search composition

This will return a list of all the pages (or compositions) contained under the WorkPlace label.

But surely this isn't the exciting part. No, we're much more interested in creating some content-nodes instead of merely displaying information about them. Luckily for us, the Content bean has a method called 'create.' Let's use our CorpDir content-node as an example (we retrieved its uid in one of the above examples):

$Content select _6_00KJL57F9D04J770_D
$Content create composition "SubPage" html

This will select the CorpDir content-node and then create a page underneath it called SubPage with html as the supported markup.

$Content select the root
$Content create label "NewLabel" html select
$Content create composition "NewSubPage" html

This sequence first selects the content root, then creates a new label underneath it and selects this new label, then creates a page underneath it. By contrast, we could also delete the content-nodes:

$Content delete <uid>

This command would delete the content-node with the id specified. For safety, the system doesn't let you delete a content-node with children. Gotta make sure those kids have parents!
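Sequences like these lend themselves to unattended runs. Since the PSI extends wsadmin, a script file of commands can be prepared and fed to the tool; the -f flag below is an assumption carried over from wsadmin conventions, while the individual commands are the ones shown earlier:

```
# build-pages.jacl -- a sketch assembled from the commands above
$Portal login wpsadmin password
$Content select the root
$Content create label "NewLabel" html select
$Content create composition "NewSubPage" html
```

It could then be run with something like: WPS_HOME\bin\wpscript.bat -conntype SOAP -port 8882 -f build-pages.jacl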

Lens on Portlets
Even though we can't affect portlets directly using the PSI, we can definitely get some good information about the portlets defined in our portal repository. To do so we use the Portlet bean.

$Portlet search webmodule namehas "News"

Seems pretty straightforward. This command will execute the Portlet bean with the search method against objects of the webmodule type whose names contain the word "News." Once we have those objects returned, we could take the uid of one of them and gather more info:

$Portlet get webmodule <uid>
$Portlet get webmodule <uid> name

Of course, the Portlet bean also has a help function (as do its methods) in case you get stuck.

Sadly, there's no "set" capability with the Portlet bean. Perhaps we should start a campaign to get that functionality added...

Layout and Hierarchy
A bean that's useful for manipulating the layout of the content-nodes is the Layout bean (rather well named, I think). With the Layout bean, we can use the index path to manipulate our content-nodes. The Layout bean has several methods (that can be seen by executing $Layout help). One such method is 'move.'

$Layout move to 0

This command would move the currently selected content-node to the root position.

$Layout move by 1

This command would move the currently selected content-node 1 level down the tree. By extension, moving by -1 would move the node up the tree one notch. This could be useful if you didn't have a lot of nested pages and whatnot under the currently selected content-node. In the event that you had an incredibly complex content-node tree, you could use another of the Layout bean's methods to transfer the currently selected content-node and all of its children to another parent:

$Layout transfer <uid> to <target-uid>

In our page structure example, let's pretend that we got our uid values for the WorkPlace content-node (_6_00KJL57F9D04K630_A) and the Home content-node (_6_00KJL57F9D04K219_C)

$Layout transfer _6_00KJL57F9D04K630_A to _6_00KJL57F9D04K219_C

Once we completed this command, our new page structure would look like this:

Content Root
My Portal
Corporate Directory

We also could have executed the "adopt" method as follows:

$Layout select _6_00KJL57F9D04K219_C
$Layout adopt _6_00KJL57F9D04K630_A

By first "selecting" the node we wish to have perform the adoption, we can then instruct it to do so. Very cool.

And now for something about the composition of these content-nodes that we're adopting and transferring all over the place. These content-nodes are composed of rows and columns, which are called containers. Inside these containers we find our portlets, which are known as controls.

The Layout bean can be used on both the containers and controls of a content-node.

Issuing the command below will give you the index paths of the potentially very complex layout of containers and controls on a page:

$Layout index

Appending a uid will give you the absolute index path of that object:

$Layout index <uid>

If I start off with a blank page that I've created using the Content bean, I would definitely want to use the Layout bean to create horizontal or vertical containers (rows or columns) and controls (portlets) in those containers. If we were to use the CorpDir page as an example:

$Content select _6_00KJL57F9D04J770_D
$Layout create container horizontal select
$Layout create control <uid>

This sequence selects the content-node we want to add the portlet to. It then creates a new row on the page and selects it. The last step is to add the portlet to that row.
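As a consolidated sketch, the build-a-page flow looks like this. The uid values below are placeholders, not real identifiers from any repository.

```jacl
# Hedged wpscript (JACL) sketch -- the uids below are placeholders.
# Select the page we want to build up.
$Content select _hypothetical_page_uid_

# Create a horizontal container (a row) and keep it selected.
$Layout create container horizontal select

# Add a portlet (a control) to the selected container.
$Layout create control _hypothetical_portlet_uid_

# Verify the resulting container/control index paths.
$Layout index
```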

If we decided later to add some other portlet to this page, we wouldn't want to have to delete the whole thing and re-create it. So in that case we could simply add to our page:

$Content select _6_00KJL57F9D04J770_D
$Layout index /0 select
$Layout create control <uid>

This sequence selects the CorpDir page by its ID. Next we select the /0 index path of the selected object. This returns the parent container of the page (row or column, we don't much care, as long as it is the top-level object on the page). Lastly we add a new portlet by using its ID.

And Onward...
There are other beans that can be used to impact our portal configuration. We can use the Access bean to read in permissions or the PacList bean to assign them to various objects. And while I reviewed the major beans that one would use to administer the configuration of the portal, my comments are by no means exhaustive.

As with all things WebSphere-related, the proof is really in the hands-on tinkering. Go install WPS 5.1 and play around with the Portal Scripting Interface. Before long you'll be doing more advanced tasks, such as creating entire JACL scripts composed of many actions and reading them all in at once.

This interface is new. It currently has some limitations. But the direction is pretty clear. In future releases expect the PSI to increase in importance as new functions and beans are added and expanded. In fact, I would expect PSI to supplant XMLAccess as the primary automated administration interface.

There's some information in the WPS 5.1 InfoCenter regarding this new tool. It's currently located at: wp51help/index.jsp?topic=/

Tuesday, July 21, 2009

How to troubleshoot: Users do not have access to syndicated content

After syndicating between two or more Web Content Management (WCM) instances, users on the subscriber instance do not have access to the newly syndicated content. In the security section of the content items, the users appear to have been given access; however, the content is still not accessible to these users in the new environment. In some cases you may no longer see the Author/Owner fields populated, or even see the user security defined in the object anymore.
The issue is the way WebSphere Portal, or more specifically WMM (WebSphere Member Manager), expects these users to be managed. Essentially, security in Portal is expected to be shared throughout all Portal environments, that is, all portal instances refer to the same LDAP server. In a true production cluster, this is the way things would be configured. Each instance of Portal allocates each LDAP user a unique Member ID when security is enabled. Web Content Management references users both by distinguished name and WMM ID. So WCM content items will have a reference to the WMM ID from the server that the content was originally created on. Because of this, there are several scenarios where the user security will fail:

1. If you have two or more WCM instances pointing to two different LDAP servers, even if these LDAP servers have the same users within
2. If you have two or more WCM instances without security enabled trying to syndicate between each other

So, in the above two scenarios, content in the delivery cluster cannot find the specified LDAP users, even though they exist in both LDAP servers or non-LDAP environments. When a user logs into the delivery environment, that user cannot see content even though he uses the same username as in the authoring environment because WCM is using both the distinguished name and the WMM ID to find content for that user.
A typical symptom is errors like the following in both the WCM and Portal logs:

14:34:0.719 Servlet.Engine.Transports : 6 com.presence.connect.wmmcomms.PrincipalInformation warn The Member: uid=wpsadmin, o=default organization Could not be found in the User Repository. Reason: Message: EJPSG0002E: Requested Member does not exist.{0}, Cause: EJPSG0002E: Requested Member does not exist.{0}

It must be noted that the above two configurations are neither expected nor supported by WebSphere Portal. As mentioned, Portal expects sharing of a central LDAP repository across instances. However, there are several solutions that may alleviate or remedy this problem.
Possible Solutions:

1. Remap the WMM external ID to an LDAP attribute; that is, use the distinguished name or some other unique attribute as the unique identifier. Then, when you syndicate, you can change the attribute as needed. Whatever attribute is chosen must be unique.

To do so, follow the Portal Information Center topic "Mapping external IDs (extId) in Member Manager" for details on how to do this mapping.

2. Have the same user/groups and unique IDs across any and all LDAPs that are a part of the WCM syndication scenario. This can be achieved by Option 1 above or by possibly exporting LDAP ldif files and/or maintaining some kind of LDAP replication across servers.

3. Apply all the latest WCM Member Fixer fixes and run the WCM Member Fixer at regular intervals or after every syndication.

4. Set up user access on the server with WCM/WPS virtual groups such as All Users, Anonymous, and All Authenticated Portal Users. What this means is that, for all environments, the customer can set access on objects using the [All Users] group, Anonymous access, or any other virtual group, as the entry is not stored in WMM/LDAP. So, if you want users A, B, and C to develop in one WCM instance but not have access to objects in the second instance, you could give access to the virtual group and the security would be retained. If you adopt this solution, you may still want to run the WCM Member Fixer to clean up the data, as you will see a lot of warnings in the logs. The limitation of this solution is that it works ONLY for virtual users/groups, as these are not stored in WMM. Any user/group definition stored in WMM/LDAP will not work.

5. Leave the security different across all WCM server instances. In this scenario, you would then update your data set for whatever level of user access you desire in each environment. You will then need to manually update the WCM objects for proper security.

6. Use the same LDAP for all environments that WCM is syndicating over.

There may be other solutions that have not been mentioned which might work for other scenarios.

Monday, July 20, 2009

Single SignOn from the operating system desktop to WebSphere Portal

How To:
A browser can automatically authenticate a user against the WebSphere Portal, based on the login to the operating system desktop. This feature is sometimes requested by customers for employee portals.


Warning: Some of the configurations described here are not officially supported. For details, see the Hint and Tip document 1104689: "Basic authentication is not supported as a login mechanism for WebSphere (R) Portal". These example configurations are intended for demo and proof of concept projects. For production environments, official support needs to be confirmed or negotiated on a case-by-case basis.


The Internet Explorer (IE) web browser can automatically use current or stored passwords to log in to web sites using one of three HTTP authentication schemes: basic, digest, or NTLM. If the portal is configured to use one of these schemes instead of the default form-based authentication, users do not have to log in to the portal explicitly.
This document describes three different basic architectures of how to achieve this, along with some variations in the configurations. The approaches differ in that the user is authenticated by using one of the three following components:

An authentication proxy, such as Tivoli Access Manager (TAM)
The WebSphere Application Server
An HTTP server

The authentication proxy solution is the only one to provide a viable alternative for production use. The other two approaches may be useful for demos and proofs of concept. Tests have been conducted using WebSphere Application Server Version 4.0 and WebSphere Portal Version 4.2, but the configurations described here can be applied to other versions as well.

This document is structured as follows:

The following section considers client requirements.
The next section after that gives background information required to understand the server discussion.
The last section addresses the server side architecture, with a subsection dedicated to each of the three approaches.

Browser and Platform Considerations

Internet Explorer on the Windows platform tries to automatically log in to selected web sites by using a stored user ID and password, or the user ID and password of the current user of the operating system. The default security settings allow this automatic login for the intranet zone and for trusted sites. If the HTTP server of the portal is found to belong to the intranet zone, nothing else has to be done. Otherwise, it has to be added explicitly to the intranet zone or, if HTTPS is used, to the list of trusted sites. This has to be done for each client machine. Due to security considerations, automatic login should never be used outside of an intranet or secure connection. It has not been verified whether IE on Apple Macintosh computers provides the same features for automatic login.
Mozilla, as an example for other browsers, never logs in automatically, but prompts the user with a password dialog. If the password manager stores the user ID and password for the site, a single click is sufficient for login. While this does not provide an SSO experience, the portal will at least remain accessible for non-IE users with little extra effort. However, Mozilla supports the NTLM authentication mechanism only on Windows platforms.

The Authentication Problem

Requests and Sessions

Web applications such as the portal receive individual HTTP requests from all users. Each request for a protected resource, such as the protected area of a portal, needs to be authenticated individually. This means that a component is required to verify the identity of the user who sent the request, for example by checking a password. An HTTP session is constituted by all requests from a single user. Requests of a particular session are typically recognized by means of a cookie, which holds the session ID and is sent with each request.
The usual setup of the portal prompts the user for the password once, when the user enters the protected area of the portal. The user provides the user ID and the password. The server verifies the password and provides the LTPA cookie for that user ID and session. The LTPA cookie is sent with all subsequent requests in that session and proves that the user has been identified. The password itself is sent only once.

The authentication methods basic, digest, and NTLM mentioned above do not rely on a cookie or session. They add authorization data to each request, and the authorization can be checked for each request again. This poses a problem especially in the case of basic authentication, where the authorization data is the user ID and password in clear text. Each request exposes the password to eavesdroppers. Note that the portal still needs a session cookie, even if authentication is achieved without the LTPA cookie by one of these methods.
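To make the eavesdropping risk concrete: with basic authentication, the Authorization header is just `userid:password` Base64-encoded, which is trivially reversible. A quick sketch (the credential shown is invented):

```shell
# Basic auth sends "userid:password" Base64-encoded with EVERY request.
# "wpsadmin:secret" is a made-up example credential.
printf 'wpsadmin:secret' | base64
# -> d3BzYWRtaW46c2VjcmV0
# which travels as:  Authorization: Basic d3BzYWRtaW46c2VjcmV0
# Anyone who captures the header can decode it:
printf 'd3BzYWRtaW46c2VjcmV0' | base64 -d
# -> wpsadmin:secret
```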

Combining per-request and cookie-based authentication schemes has the following consequences:
After the initial authentication, the browser continues to send the authentication data along with the LTPA cookie in each request. When the user logs out of the portal, the LTPA cookie is deleted, but the authentication data is still sent by the browser for subsequent requests to the portal. The next access to the protected area of the portal logs the user back in with the same user ID and password as before. For the scope of this document, that is exactly the intended Single SignOn behavior. In general, however, some users might want to log out and log back in under a different user ID. This can be the case, for example, if they have both a regular user and an administrator account. In that case, the browser program needs to be closed and restarted to remove the authentication data.

User Repositories

Automatic login to the portal involves up to three repositories where user data is stored:

First, there is the browser which selects the appropriate user ID and password for the site.
On the other end is WebSphere Application Server, which is configured to look up users in an LDAP directory.
Optionally, there is TAM or the HTTP server, which performs the authentication on behalf of WebSphere Application Server.
In order to provide a seamless SSO experience, these user repositories need to be synchronized.
IE tries to use either the current user's ID and password to log in to a web site, or some user ID and password stored previously in the password manager. If neither approach succeeds, a password dialog is presented to the user. Other browsers typically have a password manager as well. The repository of the password manager runs out of synchronization each time the portal password is changed. For the next login, the user has to enter the new password explicitly. The password manager then stores the new password.
Assuming a password lifetime of 6 months, the manual update of the password manager repository is usually acceptable. If not, the portal and the desktop have to rely on the same user repository, which is probably Microsoft Active Directory, or maybe a Samba server that synchronizes with an LDAP directory.

Approaches to Single SignOn

As mentioned in the overview, all approaches rely on activating an authentication mechanism that can be handled by a browser automatically. In other words, the authentication is moved from the application server level (form-based) to the HTTP level. To log in to the portal, the protected area, by default under the path /wps/myportal, needs to be accessed. The login link on the public portal pages needs to be modified so it points to the protected area rather than to a public login page. There is another reference to the login page in the portal's deployment descriptor (see subsection "Single SignOn via WebSphere Application Server Security" below). That should be changed to point to a publicly accessible error page, as it is not used if Single SignOn is configured correctly.
The following subsections discuss some alternatives for protecting the /wps/myportal area on the HTTP level, and how to transfer the authentication on the HTTP level back to the application server level. The latter basically means that WebSphere Application Server has to be informed about the successful user authentication.

Single SignOn via an Authentication Proxy

This is the only approach that looks promising as an architecture suitable for production use. It also is the most complex setup. The following description is an architecture overview rather than an installation guide. In the following graphic, you can see the major components directly involved in processing a request.

The components in red come with TAM or a third-party authentication proxy. They consist of the authentication proxy itself, a plugin installed in the HTTP server, and a so-called trust association interceptor (TAI) installed with WebSphere Application Server. When the request from the browser comes to the HTTP server, the plugin decides whether the user needs to be authenticated. If so, the plugin interacts with the authentication proxy to authenticate the user. This step usually involves more messages between the plugin, browser, and authentication proxy than are shown in the graphic.

Only when the user is authenticated successfully does the original request pass through the plugin and HTTP server to WebSphere Application Server. There, the TAI indicates to WebSphere Application Server that the request has already been authenticated. When an authentication proxy is involved, WebSphere Application Server is configured to use the same user repository as the authentication proxy itself.
Authentication proxies typically provide a selection of authentication options, including basic and digest authentication, form-based authentication, certificate-based authentication, and more. Here, one of the HTTP level authentication mechanisms has to be configured for protecting /wps/myportal. If you use more complex schemes, the authentication may be requested from a different HTTP server, so that the browser does not send authentication data for the subsequent requests to the portal.
Current versions of TAM support NTLM as an authentication option and Active Directory as a user repository. This way, it is possible to create an installation where the client (and therefore the browser), the authentication proxy and WebSphere Application Server use a single user repository.

Single SignOn via WebSphere Application Server Security

The second approach is simple, but has some restrictions. You can configure WebSphere Application Server and the portal to use basic authentication instead of form-based authentication. After you install the portal, search for the deployment descriptor (web.xml) of the WPS web application. Its path should be something like


where $WAS_HOME stands for the installation directory of WebSphere Application Server. Towards the end of the deployment descriptor file, you find the section where the authentication mechanism is configured. Change it from form to basic and restart the portal.
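The section in question is the standard J2EE <login-config> element of web.xml. A minimal sketch of the change, under the assumption that your descriptor follows the usual servlet layout (the realm name shown is an arbitrary example):

```xml
<!-- Before (the default): form-based login -->
<!--
<login-config>
    <auth-method>FORM</auth-method>
    ...
</login-config>
-->

<!-- After: basic authentication; "WPS Realm" is an example realm name -->
<login-config>
    <auth-method>BASIC</auth-method>
    <realm-name>WPS Realm</realm-name>
</login-config>
```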
On the first access to /wps/myportal, WebSphere Application Server sends a reply indicating that basic authentication is required. The browser then repeats the request with authentication data included. WebSphere Application Server checks the user ID and password against its user repository and, if the password is correct, creates the LTPA cookie.
This approach has several drawbacks:

This configuration is not officially supported.
Only WebSphere Application Server knows that the user is authenticated. It is not possible to include resources served directly by the HTTP server into the same security domain, or at least not without the requests taking a round trip through WebSphere Application Server, which degrades performance.
Finally, this configuration has been found to cause problems when tested with IIS as the HTTP server. It seemed that IIS tries to verify the authorization data although not configured to do so. As the user repository was only configured for WebSphere Application Server, all requests were either rejected by IIS or by WebSphere Application Server.

Single SignOn via HTTP Server Security

The third approach achieves a balance between the other two. It uses the built-in capabilities of the HTTP server to authenticate the user and a custom TAI to indicate the successful authentication to WebSphere Application Server. On the one hand it is quite easy to implement a custom TAI that enables successful login. On the other hand it is quite difficult to implement one that is resistant against attacks. That is why this approach is suitable only for demo and proof of concept use. In a production environment, security must be provided by reliable components, such as the TAI that ships with TAM.

HTTP servers come with various options for user authentication that differ in the type of authentication as well as in the user repository on which they are based. If the HTTP server is configured to use a different user repository than WebSphere Application Server, both repositories must be kept synchronized manually. This may be feasible for demos or proofs of concept which involve a small set of test users who are known ahead of time. Self-registration cannot be implemented without a common user repository. The remainder of this section gives an overview of the options provided by Apache, IBM HTTP Server, and IIS.

IIS supports basic and NTLM authentication of the users in a domain. Active Directory can be used as the common user repository for IIS and WebSphere Application Server. The drawback of IIS is that security is managed based on virtual directories. The public and protected areas of the portal are not listed as virtual directories. Therefore, the whole server has to be protected, which disables anonymous access to public portal pages. Public information about the NTLM authentication mechanism is scarce, but the user ID that needs to be retrieved by the TAI should be easily accessible in the authentication data.

Apache in both Versions 1.3 and 2.0 comes with a selection of authentication modules. The standard modules for basic and digest authentication rely on manually updated files as the user repository. Version 2.0.41 and above comes with an experimental module that performs basic authentication against an LDAP directory. Third-party LDAP authentication modules for Version 1.3 exist, but there are dependencies on proprietary LDAP libraries, issues with the licenses of the authentication modules, and the need to compile the modules for the respective platform. A third-party NTLM authentication module for Apache 1.3 is available, but it has not been updated for a while, although security problems have been reported. The license and compile issues mentioned above for third-party authentication modules apply to the NTLM module as well.
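For a demo with Apache's standard file-based basic authentication, protecting only the personalized area could look like the following httpd.conf sketch. The file path and realm name are invented, and directive details differ slightly between Apache 1.3 and 2.0:

```apache
# Hedged example: protect /wps/myportal, leave public pages anonymous.
<Location /wps/myportal>
    AuthType Basic
    AuthName "Portal Intranet"
    AuthUserFile /usr/local/apache/conf/portal.users
    Require valid-user
</Location>
```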

The IBM HTTP Server (IHS) is based on Apache and provides the same authentication modules. Additionally, IHS in both Version 1.3 and 2.0 ships with a module that supports basic authentication against an LDAP user registry. The configuration of the IBM LDAP authentication module is more complex than for the Apache 2.0 standard module, but it also provides more functionality. The configuration is described in the Info Center for IHS on the IBM web page, though not in the Apache manual that is installed automatically with IHS. You have to edit the sample ldap properties file $IHS_HOME/conf/ldap.prop.sample and then reference that file from within the httpd.conf file in the same directory.

Programming guidelines and detailed installation instructions for TAIs are available in the InfoCenter for WebSphere Application Server. To install a TAI, you have to put the implementation in the class path, for example in $WAS_HOME/classes. Then modify the file and enable TAI from the security center of the WebSphere Application Server administration console.

Sunday, July 19, 2009

How to assign a new portal administration group aside from wpsadmins

A client was using a single LDAP repository for both Staging and Production, but wanted a separate portal admin group for each environment. For Production, wpsadmins is sufficient; for Staging, they wanted to add a new group for the staging administrators. Staging administrators should only access the Staging server and should not have access to the Production environment, so we cannot simply add the staging users to the wpsadmins group. The procedure below shows how to add a new portal admin group different from the frequently used wpsadmins.

1. Log in to your WebSphere Portal as an administrator
2. Go to the Administration page
3. Click Access -> Resource Permissions and choose Virtual Resources
4. Click the Permission icon beside Portal
5. Click the Permission icon for Administrator
6. Add the new administrator group you would like. In our example it is wpsdevadmins
7. Click back on Portal
8. Log out

Test any user under this group. Similarly, you can also delete wpsadmins from this group if there's a need.

How to fix Portal Access Control settings after user/group external identifiers have changed

The access controls on resources in IBM WebSphere Portal are linked to external identifiers associated with each user/group stored in LDAP. The requirements for such an external identifier include that it be static and unique.

However, in certain scenarios in which an LDAP server is changed or users or groups are removed and re-added directly to the LDAP, the external identifiers are no longer the same. Such scenarios cause duplicate users/groups to be created in the Portal database. These duplicates are then used for access control calculations on the Portal server when users log in, while the original user and access control information is considered orphaned.

When checking access control settings using the Resource Permission portlet, you may see blank entries or users will no longer be able to view resources to which they previously had access.

This problem occurs when the attribute value (or the attribute itself) in the LDAP that is mapped as the external identifier has changed.

Resolving the problem
There are two conditions necessary for the following procedure to work.
(1) The LDAP schema should not be changed, meaning user and group distinguished names remain the same.

(2) Apply the appropriate interim fix for your Portal version: PK59896 or PK83289. Before following this procedure, IBM Support recommends backing up your Portal databases.

Step 1: Run the XMLAccess tool with CleanupUsers.xml as the input file: xmlaccess.sh -in CleanupUsers.xml -user wpsadmin -pwd wpsadmin -url localhost:10038/wps/config -out invalidusersgroups.xml

where CleanupUsers.xml can be found in directory /doc/xml-samples. This step generates a set of invalid users and groups in file invalidusersgroups.xml.

Step 2. The decision must be made whether to delete the invalid users and groups using this step or a later step. We recommend you leave them in the Portal database temporarily.

Make the following changes to the file invalidusersgroups.xml in the "request" tag:
(a) Set "cleanup-users" to false. Add "migrate-users" and set it to true.

(b) Make sure the schema version shown in the XML file is "PortalConfig_6.0.1_1.xsd" or later. invalidusersgroups.xml should now look like this:
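For illustration, the opening request tag would then look something like the following sketch; the exact attributes beyond those shown depend on what XMLAccess generated in your export (keep the rest of the file, including the closing </request>, as generated):

```xml
<request xmlns:xsi=""
         xsi:noNamespaceSchemaLocation="PortalConfig_6.0.1_1.xsd"
         type="update" cleanup-users="false" migrate-users="true">
```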


(c) If the schema version is not "PortalConfig_6.0.1_1.xsd" or later, open the file wp.xml.jar (in directory /shared/app/ if version 6.0.x, or /base/wp.xml/shared/app if 6.1). Verify that the schema file "PortalConfig_6.0.1_1.xsd" is included in wp.xml.jar, and correct the schema in the request tag to 6.0.1_1 or later.
NOTE: If you see references to users or groups based on the original out-of-the-box user registry (uid=wpsadmin,o=default organization), remove the references from the XML file or they could potentially cause the next step to fail.

Step 3. Run the XMLAccess tool with invalidusersgroups.xml as the input file: xmlaccess.sh -in invalidusersgroups.xml -user wpsadmin -pwd wpsadmin -url localhost:10038/wps/config -out migration_out.xml

At this point, the access control mappings have been migrated to the current external identifier used by the users and groups in the LDAP. However, there are still orphaned user and group entries in the USER_DESC table of the Portal database that should be removed, which is addressed in the next step.

Step 4. (Optional but recommended) Run the XMLAccess tool with /doc/xml-samples/CleanupUsers.xml as input a second time: xmlaccess.sh -in CleanupUsers.xml -user wpsadmin -pwd wpsadmin -url localhost:10038/wps/config -out removeusersgroups.xml

where removeusersgroups.xml is essentially the same as invalidusersgroups.xml, containing the set of orphaned user and group references in the Portal database.

Step 5. (Optional but recommended) Run the XMLAccess tool with removeusersgroups.xml as input to delete the orphaned users and groups: xmlaccess.sh -in removeusersgroups.xml -user wpsadmin -pwd wpsadmin -url localhost:10038/wps/config -out cleanedDB_out.xml

NOTE: Ensure that "cleanup-users" is set to true (the default setting) in removeusersgroups.xml for this step.

Step 6. Verify the Portal access control settings by logging into the Portal server and confirming that users can view the resources to which they have permission.

If you have WCM content, you should run the WCM MemberFixer tool. Before running MemberFixer, complete Steps 4 and 5. Reference the appropriate 6.0 or 6.1 Information Center link below depending on the version of your Portal for further details on the MemberFixer tool.

NOTE: If the LDAP user or group DNs are different after the LDAP change, the above procedure will not work. Contact IBM Support for further details if your user and/or group distinguished names will change.

Sunday, July 12, 2009

Access the HttpServletRequest object in WebSphere Portal

HttpServletRequest req = (HttpServletRequest)request.getAttribute("javax.portlet.request");
HttpServletResponse res = (HttpServletResponse)request.getAttribute("javax.portlet.response");

The request object can be a PortletRequest object or an ActionRequest object.

Friday, July 10, 2009

How to copy content between content libraries using the Web Content Management (WCM) API

You would like to use the IBM® Web Content Management (WCM) API to copy content from one WCM content library to another WCM content library. Which WCM API methods can be used?

Resolving the problem
The WCM API provides methods which allow copying content items between WCM content libraries.
Example algorithm to copy content between WCM libraries

1. First get the user workspace.
2. Next get the source and target document libraries.
3. Set the source document library to be your current document library.
4. Build the site document iterator.
5. Get the site document id.
6. Copy the site using the copyToLibrary method. (note: This method copies non-hierarchical or root items to another library.)
7. Get the new site document id.
8. Build the sitearea document iterator.
9. Get the sitearea document id.
10. Copy the sitearea using the copySiteFrameworkDocument method. (note: This method copies hierarchical items (SiteArea, Content, or ContentLink) to another library.)
11. Get the new sitearea document id.
12. Build the content document iterator.
13. Get the content document id.
14. Copy the content using the copySiteFrameworkDocument method. (note: This method copies hierarchical items (SiteArea, Content, or ContentLink) to another library.)
15. Get the new content document id.

Note: This example assumes the content item and associated parents (site/sitearea) do not already exist in the target library.

Example code to copy sites, site areas, and content items between WCM libraries

//define variables
DocumentLibrary sourceDocLib = null;
DocumentLibrary targetDocLib = null;
DocumentLibrary currentDocLib = null;

DocumentIdIterator docIdIterator = null;
DocumentId docId = null;

Site currentSite = null;
SiteArea currentSiteArea = null;
Content currentContent = null;

Document newSiteDoc = null;
DocumentId newSiteDocId = null;

Document newSiteAreaDoc = null;
DocumentId newSiteAreaDocId = null;

Document newContentDoc = null;
DocumentId newContentDocId = null;

//set the content library variables
sourceDocLib = ws.getDocumentLibrary("SupportLib");
targetDocLib = ws.getDocumentLibrary("TargetLib");

//standard out log message
System.out.println("Log: The source document library name: "
+ sourceDocLib.getName());
System.out.println("Log: The target document library name: "
+ targetDocLib.getName());

//set the current content library to the source library
ws.setCurrentDocumentLibrary(sourceDocLib);

//get the current content library
currentDocLib = ws.getCurrentDocumentLibrary();

//standard out log message
System.out.println("Log: The current document library name: "
+ currentDocLib.getName());


//finds the document id of the sites that match by name
docIdIterator = ws.findByName(DocumentTypes.Site, "SupportSite");

//loop through the document ids found in the iterator
while (docIdIterator.hasNext()) {

//get the current document id
docId = (DocumentId) docIdIterator.next();

//get the current site
currentSite = (Site)ws.getById(docId);

//standard out log message
System.out.println("Log: Copy site: "
+ (String)currentSite.getName()
+ " from " + sourceDocLib.getName()
+ " to " + targetDocLib.getName());
/*
 * The Workspace copyToLibrary method copies non-hierarchical or
 * root items to another library.
 * Get the document id of the new document copy.
 */
newSiteDoc = ws.copyToLibrary(targetDocLib, docId);

newSiteDocId = newSiteDoc.getId();

}//end while


//finds the document id of the site areas that match by name
docIdIterator = ws.findByName(DocumentTypes.SiteArea, "Home");

//loop through the document ids found in the iterator
while (docIdIterator.hasNext()) {

//get the current document id
docId = (DocumentId) docIdIterator.next();

//get the current site area
currentSiteArea = (SiteArea)ws.getById(docId);

//standard out log message
System.out.println("Log: Copy site area: "
+ (String)currentSiteArea.getName()
+ " from " + sourceDocLib.getName()
+ " to " + targetDocLib.getName());
/*
 * The Workspace copySiteFrameworkDocument method copies hierarchical items
 * (SiteArea, Content, or ContentLink) to another library.
 * Get the document id of the new document copy.
 */

newSiteAreaDoc =
ws.copySiteFrameworkDocument(docId, newSiteDocId, null, ChildPosition.END);

newSiteAreaDocId = newSiteAreaDoc.getId();

}//end while


//finds the document id of the content items that match by name
docIdIterator = ws.findByName(DocumentTypes.Content, "WelcomePage");

//loop through the document ids found in the iterator
while (docIdIterator.hasNext()) {

//get the current document id
docId = (DocumentId) docIdIterator.next();

//get the current content
currentContent = (Content)ws.getById(docId);

//standard out log message
System.out.println("Log: Copy content: "
+ (String)currentContent.getName()
+ " from " + sourceDocLib.getName()
+ " to " + targetDocLib.getName());
/*
 * The Workspace copySiteFrameworkDocument method copies hierarchical items
 * (SiteArea, Content, or ContentLink) to another library.
 * Get the document id of the new document copy.
 */

newContentDoc =
ws.copySiteFrameworkDocument(docId, newSiteAreaDocId, null, ChildPosition.END);

newContentDocId = newContentDoc.getId();

}//end while

Javadoc HTML reference files

WCM API Javadoc: The Javadoc HTML files are located in the following directory on your Web Content Management server:

under :

Sunday, July 5, 2009

Setting up a development environment for iWidgets with RAD 7.5

About a month ago I started to look at how to develop iWidgets for IBM Mashup Center 1.1. I come from a J2EE WAR-based programming background, so I was interested in how to configure my Eclipse-based tooling to develop iWidgets in the easiest way possible.

The following steps worked for me and allowed a very quick edit, save, publish, test cycle where I could edit the iWidget code and have it running in my browser in seconds. IMC 1.1 also supports OSGi-packaged iWidgets; I will look into that next time.


When developing iWidgets for IBM Mashup Center (IMC), rather than repeatedly create and deploy new WAR files to IMC, it is possible to configure RSA/RAD 7.x such that the WAR files you are developing can be integrated with the IMC runtime from your workspace. This way, you can simply modify and test any changes by reloading the Lotus Mashups page in the browser. The following steps are required.

Create a WAS 6.1 server configuration for RSA/RAD that points at the WAS installation for IMC. If you installed IMC in the directory C:\IMC, for example, you will need to configure a runtime pointing at the directory C:\IMC\AppServer.
Use the RMI connector to make the server attachment; RSA/RAD should suggest the correct port for you to use for your connector, e.g. 2811. You should now be in a position to deploy assets from your workspace into the IMC WAS installation.
Create a Dynamic Web Project to contain your new iWidget, together with an EAR in which to contain the resulting WAR. This is the container into which you put the contents of your widget together with the required deployment descriptors and so on. A description of what these look like can be found in the Lotus Mashups help pages, together with the details of the iWidget programming guide. A simple iWidget WAR is attached to this page by way of an example.
Add the new EAR file to the server configuration you created earlier. It should now be available via the IMC WAS application server. You can test this by requesting the URL of the XML manifest for your iWidget using your browser; it should be visible through the newly deployed WAR file, e.g.
where CardReader is the name of the WAR file.
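For orientation, the XML manifest you request in the step above is a small file along the following lines. This is only a rough sketch, not the authoritative schema: the namespace, attribute names, and the CardReader file name are assumptions of mine, so check the iWidget specification in the Lotus Mashups help pages for the exact format.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical sketch of an iWidget manifest; verify element and
     attribute names against the iWidget spec in the Lotus Mashups help. -->
<iw:iwidget name="cardReader"
            xmlns:iw="http://www.ibm.com/xmlns/prod/iWidget"
            supportedModes="view" mode="view">
  <iw:content mode="view">
    <![CDATA[ <div>Card reader widget markup goes here</div> ]]>
  </iw:content>
</iw:iwidget>
```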

Stop the IMC WAS instance using the RSA/RAD Servers controls.
With IMC stopped, we now need to edit the catalog file so that the widget we are developing appears on one of the drop-down menus in the Lotus Mashups editor page. This provides the linkage between the Lotus Mashups environment and our workspace widget. Look within your IMC installation for the file called catalog_admin.xml, which, if you have installed into C:\IMC, will be found in C:\IMC\mm\public\cat. Make a backup copy of this file.
Open the catalog_admin.xml file in WordPad. You will see it is organised into a hierarchy of categories and entries.
Copy an entry element and its contents and paste the copy within one of the category elements. The easiest option is simply to paste it below the entry you just copied.
Modify the entry to contain meaningful name, description, id, and unique-name values on the entry element. No matter how strong the urge, DO NOT REFORMAT THE XML. Preserve the original formatting.
Set the contents of the definition tag within the entry tag to the URI of your XML manifest. This links the menu item to your specific widget, e.g.
to continue the above example.

Save the XML file.
Restart the IMC WAS runtime from RSA/RAD.
Load Lotus Mashups in your browser and login.
You may see a JavaScript dialog box containing the message “TypeError - 103e is null.” Simply press OK and then the reload button in your browser.
You should now see the Lotus Mashups Welcome Page as usual.

Press the Go to View button on the Welcome Page. You will see a set of tabs appear containing the categories you saw earlier in the catalog XML file.

Find the category into which you added your widget earlier. You should now be able to drag your widget from the menu onto the Lotus Mashups page.

Now, when you make changes to the widget in RSA/RAD, you can pick them up simply by reloading the Lotus Mashups page. When your widget is ready, you then deploy it by exporting the WAR file.
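Pulling the catalog steps above together, a copied-and-edited entry might look roughly like the following. Only the category and entry elements, the id and unique-name attributes, and the definition tag are taken from the steps above; every other element name and value here, including the manifest URL, is illustrative, so mirror whatever structure your catalog_admin.xml actually contains.

```xml
<category name="MyWidgets">
  <!-- entry copied from an existing one and edited in place;
       element names other than entry/definition are illustrative -->
  <entry id="cardreader.widget" unique-name="cardreader.widget">
    <title>Card Reader (dev)</title>
    <description>Development copy served from the RSA/RAD workspace</description>
    <!-- the definition tag holds the URI of the widget's XML manifest -->
    <definition>http://myhost:3001/CardReader/cardReader.xml</definition>
  </entry>
</category>
```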

Mapping attributes between LDAP and WebSphere Portal

Perform the following steps to map attributes between WebSphere Portal and your LDAP server; if you have multiple LDAP servers, you will need to perform these steps for each LDAP server:

* Run one of the following tasks to check that all defined attributes are available in the configured LDAP user registry:

o Standalone: wp-validate-standalone-ldap-attribute-config

o Federated: wp-validate-federated-ldap-attribute-config

* Open the config trace file to review the following output for the PersonAccount and Group entity types:
The following attributes are defined in WebSphere Portal but not in the LDAP server
This list contains all attributes that are defined in WebSphere Portal but not available in the LDAP server. Flag attributes that you do not plan to use in WebSphere Portal as unsupported. Map the attributes that you plan to use to attributes that exist in the LDAP server; you must also map the uid, cn, firstName, sn, preferredLanguage, and ibm-primaryEmail attributes if they are contained in the list.
The following attributes are flagged as required in the LDAP server but not in WebSphere Portal
This list contains all attributes that are defined as "MUST" in the LDAP server but not as required in WebSphere Portal. You should flag these attributes as required within WebSphere Portal; see the step below about flagging an attribute as either unsupported or required.
The following attributes have a different type in WebSphere Portal and in the LDAP server
This list contains all attributes that WebSphere Portal might ignore because the data type within WebSphere Portal and within the LDAP server do not match.

* Enter a value for one of the following sets of parameters in the file to correct any issues found in the config trace file:
The following parameters are found under the LDAP attribute configuration heading:

* standalone.ldap.attributes.nonSupported
* standalone.ldap.attributes.nonSupported.delete
* standalone.ldap.attributes.mapping.ldapName
* standalone.ldap.attributes.mapping.portalName
* standalone.ldap.attributes.mapping.entityTypes
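As a sketch, a standalone configuration using these parameters might look like the following in the properties file. The attribute names used as values here are illustrative examples only, not taken from this technote; substitute the attributes reported in your own config trace file.

```properties
# Flag Portal attributes that have no LDAP counterpart as unsupported
# (attribute names below are illustrative)
standalone.ldap.attributes.nonSupported=certificate,members
standalone.ldap.attributes.nonSupported.delete=

# Map the Portal attribute ibm-primaryEmail to the LDAP attribute mail
# for the PersonAccount entity type
standalone.ldap.attributes.mapping.ldapName=mail
standalone.ldap.attributes.mapping.portalName=ibm-primaryEmail
standalone.ldap.attributes.mapping.entityTypes=PersonAccount
```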

* Run one of the following tasks to update the LDAP user registry configuration with the list of unsupported attributes and the proper mapping between WebSphere Portal and the LDAP user registry:

o Standalone: wp-update-standalone-ldap-attribute-config

o Federated: wp-update-federated-ldap-attribute-config