When Acuity (now Sage 500 ERP) was first designed, the team had specific goals in mind for concurrency and data integrity. With SQL Server 6.0 as the target database platform, they knew there were limits to the locking mechanism available in the database. So they designed a “logical” locking mechanism into the Acuity SDK that would allow row-level locking of transactional data, outside the limitations of the pessimistic locking SQL Server offered at the time.
This mechanism is backed by a global table in the database called tsmLogicalLock. A single stored procedure, spsmLogicalLockAdd, controls adding rows to this table (there are indeed higher-level stored procedures and Common DLL functions accessible by developers, but all roads lead to this one sp).
Logical Locking uses a combination of a logical lock type (defined in a supporting table), a lock type (essentially exclusive or shared), and a logical lock ID to uniquely identify a lock on a particular resource. The SPID of the user creating the lock is also recorded, along with a date-time stamp indicating the login time of that SPID.
Typically the following steps are followed when applying a logical lock:
1. Clean up locks on this resource. Using the LogicalLockType, LockType, and LogicalLockID, we look for a matching row in tsmLogicalLock. If one is found, and the SPID is dead or the login time is past a specified window, we remove the lock, as it is considered an expired resource lock.
2. With the cleanup completed, we now check for any existing valid locks on the LogicalLockID for that LogicalLockType. If found, we return an appropriate return value to indicate that the resource cannot be locked (this can be tiered by Shared Lock or Exclusive Lock, although in practice I have normally seen only Exclusive locks used).
3. If the resource wasn’t already locked, we can now insert our own logical lock using the previously described data.
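The three steps above can be sketched as follows. This is a simplified simulation in Python against SQLite, not the actual spsmLogicalLockAdd logic or the real tsmLogicalLock schema; the table name, columns, expiry window, and the spid_is_alive() check are all illustrative assumptions, and an exclusive-only policy is assumed for brevity.

```python
import sqlite3
import time

LOCK_EXPIRY_SECS = 30 * 60   # assumed expiry window, not the real Sage value
EXCLUSIVE, SHARED = 1, 2     # assumed lock-type codes

LIVE_SPIDS = {51, 52}        # stand-in for SQL Server's session list

def spid_is_alive(spid):
    """Stand-in for asking SQL Server whether the owning session still exists."""
    return spid in LIVE_SPIDS

def make_db():
    conn = sqlite3.connect(":memory:")
    # Illustrative schema only; the real tsmLogicalLock layout differs.
    conn.execute("""CREATE TABLE logical_lock (
                        logical_type INTEGER, logical_id TEXT,
                        lock_type INTEGER, spid INTEGER, login_time REAL)""")
    return conn

def try_add_lock(conn, logical_type, logical_id, lock_type, spid):
    """Return True if the lock was acquired, False if the resource is held."""
    now = time.time()
    # Step 1: clean up expired locks (owner SPID dead, or past the window).
    for rowid, owner, login_time in conn.execute(
            "SELECT rowid, spid, login_time FROM logical_lock "
            "WHERE logical_type = ? AND logical_id = ?",
            (logical_type, logical_id)).fetchall():
        if not spid_is_alive(owner) or now - login_time > LOCK_EXPIRY_SECS:
            conn.execute("DELETE FROM logical_lock WHERE rowid = ?", (rowid,))
    # Step 2: any surviving lock blocks us (exclusive-only policy for brevity).
    (held,) = conn.execute(
        "SELECT COUNT(*) FROM logical_lock "
        "WHERE logical_type = ? AND logical_id = ?",
        (logical_type, logical_id)).fetchone()
    if held:
        return False
    # Step 3: record our own lock.
    conn.execute("INSERT INTO logical_lock VALUES (?, ?, ?, ?, ?)",
                 (logical_type, logical_id, lock_type, spid, now))
    return True

def remove_lock(conn, logical_type, logical_id, spid):
    """Counterpart operation: release the lock once the work is done."""
    conn.execute("DELETE FROM logical_lock WHERE logical_type = ? "
                 "AND logical_id = ? AND spid = ?",
                 (logical_type, logical_id, spid))
```

With this in place, a second session attempting to lock the same logical lock ID is turned away until the first session releases the lock, its SPID dies, or its login time falls outside the expiry window.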
So as you can see, the intent of the Sage 500 Logical Locking mechanism is to allow (typically) row level locks against transactional data during routines that will change the state of that data (posting, inventory commit, etc.).
Once you have completed your use of the lock, you’re supposed to then explicitly release it with spsmLogicalLockRemove.
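In practice, the add/remove pair behaves like an acquire/release discipline, and the usual advice applies: release in a finally-style path so a failed routine does not leave an orphaned lock sitting out its expiry window. Below is a minimal sketch of that discipline, with hypothetical Python wrappers standing in for calls to spsmLogicalLockAdd and spsmLogicalLockRemove (the function names, the logical_type value 7, and the RELEASED list are all made up for illustration).

```python
RELEASED = []  # records releases so the pattern is observable in this sketch

def acquire_logical_lock(logical_type, logical_id):
    """Hypothetical wrapper: pretend to call spsmLogicalLockAdd."""
    return True  # assume the lock was granted

def release_logical_lock(logical_type, logical_id):
    """Hypothetical wrapper: pretend to call spsmLogicalLockRemove."""
    RELEASED.append(logical_id)

def post_batch(batch_id, do_work):
    """Lock the batch, run the posting work, and always release the lock."""
    if not acquire_logical_lock(logical_type=7, logical_id=batch_id):
        raise RuntimeError(f"batch {batch_id} is locked by another process")
    try:
        do_work()
    finally:
        # Runs even if do_work() raises, so no orphaned lock is left behind.
        release_logical_lock(logical_type=7, logical_id=batch_id)
```

The try/finally is the important part: even when the posting work fails, the lock is released immediately rather than lingering until the cleanup step of some later lock attempt expires it.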
Challenges with Logical Locking
This mechanism is workable, but it is not perfect. A number of issues can befall it; these are but a few from my experience:
• Routines that act on logically locked rows fail without any logical lock cleanup process in place. This leaves orphaned locks, which can take a while to expire if the original SPID is still active in the database.
• Effectively cleaning up orphaned locks is (relatively) time consuming.
• Routines that ignore logical locking and act upon logically locked rows, breaking the original data manipulation rules of the Sage 500 ERP (Acuity) SDK.
• Anything that causes locking or blocking against tsmLogicalLock. This can lead to widespread blocking of routines that are using logically locked rows.
• Failure to use logical locking in a consistent manner, so that related, overlapping processes place their own lock types on the same database rows, effectively ignoring each other.
In our own experience with Sage we would often run into challenges with the Logical Locking mechanism. The key is to build your LogicalLockID values intelligently and to manage the timing of state transitions so that you are not trying to add the same rows to tsmLogicalLock twice during a process. SQL transaction boundaries can also be tricky depending on how and where you call the logical locking/unlocking routines: you might inadvertently roll back rows you thought you had locks against, or (worse) cause blocking on tsmLogicalLock.
We hope this helps provide you with an overview of the Sage 500 Logical Locking process. If you have questions or want to learn more you can contact us at askwalt @ rklesolutions.com or 717-735-9109 x4.