Sending Alerts to the SysLog

On my jPOS page I added a link to a screencast I did that shows the basic configuration and usage of the jPOS SysLogListener.

If you are not familiar with syslog, it is the logging daemon for Unix and Linux. There are implementations for MS Windows as well, such as Kiwi Syslogd, and some that peel entries from the Windows/NT Event Log and forward them to a centralized syslog server. Many alerting systems are based on syslog events, where you can define an action to call an external program/script or send an email/page/SMS notification. You can even use Splunk as a syslog daemon and “google” your logs.
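The listener itself is wired into a jPOS logger deployment descriptor. As a rough sketch (property names follow jPOS's SysLogListener; the host, facility, and tag values here are placeholders you would adjust for your environment):

```xml
<logger name="Q2" class="org.jpos.q2.qbean.LoggerAdaptor">
  <log-listener class="org.jpos.util.SysLogListener">
    <!-- placeholder values; point these at your own syslog server -->
    <property name="host"     value="syslog.example.com" />
    <property name="facility" value="local0" />
    <property name="severity" value="warning" />
    <property name="tags"     value="audit, syslog" />
    <property name="prefix"   value="[jPOS]" />
  </log-listener>
</logger>
```

With something like this deployed, matching log events are forwarded to the central syslog server, where your alerting rules take over.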

Enjoy the screencast.

mobile commerce - sms text notifications

If you have ever used Obopay, or even the social networking site Facebook, chances are that you have interacted with these sites via your mobile phone in some manner. Obopay is a little more obvious: you receive text notifications on your mobile when you send or receive money. Facebook sends text messages to your registered mobile phone number for you to validate your account. Obopay also uses multi-factor authentication to validate the user of its website, using either a phone call with a spoken code, or a text message with a code that you need to type into a webpage. This is called Out-of-Band Authentication, and your bank may have implemented something similar for its Internet banking.

Yesterday, I researched and implemented text notifications for when you perform a Reload or Add Money transaction to your prepaid card on our issuing platform, using an interface to an SMS gateway. Check it out below; I'm using my Nokia E71 here.
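Our gateway interface is specific to the provider, but the general pattern is simple: format a short notification and hand it to the SMS gateway over HTTP. Here is a minimal sketch in Java; the parameter names, message wording, and gateway URL are hypothetical, not our actual interface.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Sketch: building an SMS notification request for an HTTP-based SMS gateway.
// Parameter names and endpoint are hypothetical; real gateways vary by provider.
public class SmsNotify {

    // Format the form-encoded body a gateway typically expects
    static String buildQuery(String msisdn, String text) {
        return "to=" + URLEncoder.encode(msisdn, StandardCharsets.UTF_8)
             + "&msg=" + URLEncoder.encode(text, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String body = buildQuery("15555550123",
                "Your prepaid card ending in 1234 was reloaded for $25.00");
        System.out.println(body);
        // A real implementation would POST this body to the gateway, e.g.
        // https://sms-gateway.example.com/send (hypothetical endpoint)
    }
}
```

The switch-side work is then just triggering this on an approved Reload/Add Money response.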

Fun with Card Readers and Encoders/Writers

I received an MSR505c Card Reader/Writer in the mail today. I use, and have a need to create, test cards that have magstripes for a variety of purposes; the main one being a way to test/demo our issuer-based products from Point-of-Sale (POS) systems and payment terminals.

I thought I'd create a short screencast to show how this works, which is provided below:

Some considerations to note:

It is extremely easy to “clone” a payment card using a device such as this, and the entry point from a cost and availability perspective is low (in the ~$300 range). In a follow-up blog post, I'll write about MagTek's MagneSafe and MagnePrint products, which detect card cloning at the magstripe level.


Virtual Point of Sale (OLS.vPOS) & Virtual Terminal (OLS.vt)

Here is a snapshot of what my desk looks like: you can see a MagTek USB card reader and a few magnetic-striped cards: expired prepaid credit, gift, and merchandise return cards that are used for testing purposes here.


I've been developing some small tools that allow us to send transactions via a swipe, in a .NET Windows-based application as well as in a Java Web-based version, to a test instance of OLS.Switch. I used to (and still do) just pipe binary message dumps over netcat, pointed at our OLS.Switch's configured server port for this specific message format.

for example:

$ cat visa_credit_sale.dump | nc localhost 33000

where visa_credit_sale.dump would just be a binary file of the message. Run through a hex dump:

$ hd visa_credit_sale.dump

it would look like this (intentionally blurred, and it's a test card number):

[screenshot: hex dump of the message]

Here is a shot of the Virtual Point of Sale System:

OLS vpos

and a shot of the Virtual Terminal:


VT Response

Basically, you can swipe a card or key-enter a card number on the virtual terminal, and depending on the configuration of OLS.Switch (I'm using BIN-based routing here in this test setup), transactions are routed as follows.

Cards that start with:

  • 4 - Visa
  • 5 - Mastercard
  • 6011 - Discover

go to our FDR North (Chase Paymentech) Simulator and return a simulated response.

  • 3 - Amex

go to our American Express Simulator

  • 7 - Stored Value

go to our Stored Value Systems Simulator

  • 6 - OLS Stored Value

get switched to our own instance of OLS.Issuer - our authorization host which is not a simulator.
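The routing above boils down to prefix matching on the PAN, most specific prefix first. As an illustration only (OLS.Switch's actual routing is configuration-driven; the endpoint names here simply mirror the simulators in this test setup):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// BIN-based routing sketch: match the PAN against prefixes, most specific first.
public class BinRouter {
    // Insertion order matters: "6011" must be checked before "6".
    private final Map<String, String> routes = new LinkedHashMap<>();

    public BinRouter() {
        routes.put("6011", "FDR-North-Simulator");  // Discover
        routes.put("4",    "FDR-North-Simulator");  // Visa
        routes.put("5",    "FDR-North-Simulator");  // MasterCard
        routes.put("3",    "Amex-Simulator");       // American Express
        routes.put("7",    "SVS-Simulator");        // Stored Value Systems
        routes.put("6",    "OLS.Issuer");           // OLS Stored Value (live host)
    }

    public String route(String pan) {
        for (Map.Entry<String, String> e : routes.entrySet())
            if (pan.startsWith(e.getKey()))
                return e.getValue();
        return "no-route";  // would decline in practice
    }
}
```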

The vPOS and VT send messages in the Visa K/Visa D format, otherwise known as Visa Gen II (one of the incoming message formats that we support on the device side). Depending on the card type, we build the appropriate outbound message according to the interface specs (generally an ISO8583 variant) and hit our simulators to get different responses based on amount prompting. In the case of the OLS Stored Value cards, OLS.Issuer uses the card files, velocity and limit checking, card status, and other authorization rules to authorize the card.

The neat thing? An end-to-end transaction takes less than 50 ms on a sub-$1,000 test server on a local LAN.


Here is a link to a PDF that shows the full transaction flow.


Product Release Cycle

I'll just say it: I'm proud of our release cycle for OLS.Switch.

It has been my experience (YMMV), both first-hand running an authorization host/switch (issuing and acquiring) and as an IT Security Auditor and QSA, that Core Banking applications and Payment Switches fall into one of the following when it comes to upgrades, changes, or security updates:

  • “The Vendor set it up, we don’t touch it”
  • “We don’t patch it because we are afraid”
  • “We cringe every time we need to install a new release of the software”
  • “Last time we did an upgrade, we had x amount of downtime”
  • “It all goes smooth like clockwork” :)

During the Vulnerability Assessments and Penetration Tests on internal networks that I performed, my observation from an operating system, database, and application perspective was that these systems are typically not kept current, or run on a platform that the organization is not very familiar with and relies on outside support for. The application was not cohesive with the rest of the operating environment: systems, technologies, and procedures.

Installing new releases of our software (or rather, our clients installing new releases of our software) is something that does not make me cringe (and I used to not sleep very well in the past). At least one of our clients seems to agree. (See Andy's “A very simple platform to support”.)

We just rolled out a new release that was quite large (see Flexible Spending Accounts (New Initiatives, Part 3)) that had changes impacting pretty much every transaction path, due to partial authorization and credit reversal support, and required heavy regression testing. Our agile-based SDLC is a big help with this: we have a very iterative development process and frequent testing, which also means fewer large, bulky updates that break everything.

Another success factor is our simplicity of upgrading our program code and binaries. It is really as simple as:

  • Stop the OLS.Switch Service or Daemon
  • Create a backup copy of the directory or file path where OLS.Switch is installed
  • Install the new program code and binaries
  • Start the OLS.Switch Service or Daemon
  • Perform test transactions and monitor
  • The back-out plan is to stop the service and revert to the backup copy

Further, system implementation design can have a big impact on uptime. We run multiple independent application servers behind load balancers, which allows us to gracefully stop an application: it stops accepting new transactions while finishing those in its queue, and the load balancer stops routing transactions to that application server, allowing an upgrade to be made while the other application servers are still processing transactions. Uptime (not system uptime, but uptime processing transactions) doesn't have to suffer for “scheduled maintenance” or security-related patches and reboots.

I think we have a low-risk upgrade/update path that our clients are very comfortable with. Seven months into the year, we have had a dozen releases to add functionality, address endpoint changes, and implement new transaction types.


Payment Processing Application Performance

One of the frequent concerns about deploying any payment solution is “will it be able to process my transactions in a timely manner?” This is both an easy and hard question to answer. In some instances, a bad application design can lead to poor performance. In others, it is faulty system integration of one or more of the other components causing performance bottlenecks. Generally, there are several major components in the processing of a transaction that can significantly affect throughput and response time performance.

These major components are the network, the server hardware, the encryption hardware, the system software, and the application software. How well these pieces are integrated will always have a major impact on the overall performance of any given payment solution. For example, if a throughput rate of 25 transactions per second (TPS) is required and the hardware encryption device selected is only capable of 12 encryptions or decryptions per second, then the encryption device will be the bottleneck and no amount of software tuning can or will improve the throughput above 12 TPS. Unless the throughput of the encryption device is known, then performance degradations may manifest themselves as a software performance issue rather than a system integration issue.
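The arithmetic in that example is worth making explicit: for serially dependent components, end-to-end throughput is bounded by the slowest one. A toy model (the 12 ops/sec encryption figure is from the example above; the other rates are made up for illustration):

```java
// Toy throughput model: a transaction path is only as fast as its slowest
// serial component, regardless of how fast the others are.
public class Bottleneck {
    static double maxTps(double... componentRates) {
        double min = Double.POSITIVE_INFINITY;
        for (double r : componentRates)
            min = Math.min(min, r);
        return min;
    }

    public static void main(String[] args) {
        double tps = maxTps(
            500.0,  // network, messages/sec (illustrative)
            200.0,  // application software, messages/sec (illustrative)
            12.0);  // encryption device, ops/sec (from the example above)
        System.out.println(tps);  // capped at 12 TPS, well short of the required 25
    }
}
```

No amount of tuning the 500 or the 200 moves the result; only replacing or parallelizing the 12 does.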

A payment processing software provider, unless they dictate specific requirements for some or all system components, has limited control over the effect on performance of all but the application software. Software providers will normally make recommendations for these components but cannot predicate the performance of their software products on these recommendations being followed. Most payment applications, if they measure performance, do so from time of request message arrival at the application level to time of response message departure from the application level. Some may measure overall response time only; others will measure internal processing and external “wait” time separately.

Historically, in order to have some semblance of control over as many factors as possible, payment applications were written for specific platforms. Usually these platforms were proprietary fault tolerant systems whose cost/performance ratios were degraded by the inherent overhead of providing hardware- and system software- level fault tolerance. In these instances, performance – up to a limiting point – could be bought at a premium price. Frequently these fault tolerant platforms required special software design and coding techniques to properly and fully take advantage of the fault tolerance attribute. And if other components in the path such as communications routers and connections are not redundant, the premium paid for fault tolerance is all for naught.

Generally, it pays to initially set aside specialty hardware arrangement considerations and focus first on the payment application design itself. Pursuit of high performance using a poorly written application on a premium proprietary platform will be a truly expensive undertaking. When confining the performance issue to the payment application itself, there are a few key software attributes that contribute to the tangible performance characteristics of an online transaction processing (‘OLTP’) application. These key attributes are code path length, code efficiency, database design, and encryption approach.

Code path length is relatively objective and refers to the lines of code which translate eventually into the number of machine instructions executed in the process path of a transaction. Longer paths tend to produce longer response times and lower levels of performance and, obviously, shorter paths produce the converse.

Code path efficiency is more subjective and refers to the art (or science, if that is your viewpoint) of finding the logic design that requires the least lines of code to perform a particular function and the least number of functions to complete a transaction processing flow. Generally, but not always, the more experienced designer and coder will produce better software. However, for payment processing, the addition of the experience level of understanding the nuances of OLTP (some deign to call this “real-time processing”) in general and payments OLTP in particular is another efficiency factor.

Database design and how it affects any kind of application is a well-published subject that we do not need to cover here. Suffice to say, a poor database design or a poor implementation of it will have significant impact on an OLTP application which is sweating minute changes in the milliseconds that a process path takes. Again, OLTP and payments experience goes a long way toward subjugating database design as a performance issue. Simply stated, data reads and writes must be efficient and kept to a minimum.

Encryption of in-flight or stored card data is a necessary security step in processing a payment. Without a doubt, it is an expensive process often best relegated to an off-server “single purpose computing device.” However, there are some board-level implementations that do work well if they do not steal primary server CPU cycles. In either case, once again, OLTP experience will mitigate the risk of a poorly implemented encryption design.

Payment Processing Application Scalability

Scalability is generally defined as the property of an application to improve its performance due to a change in scale of its hardware environment. Commonly, the hardware change involves either faster processors or more processors. It may also include chip or disk memory components with higher transfer rates. For OLTP applications, the environmental impact zone expands to include internal and external connectivity as well as database considerations. It does no good to scale the application if the encryption, database or communications processes cannot scale accordingly.

Many will look solely to an application for scalability when, in reality, it is also very much a system integration issue. Conversely, any application, including an OLTP application, can be designed to be non-scalable. One simple way is to make the application single-threaded … pretty much the “kiss of death” for an OLTP payment processing environment.

Assuming an OLTP application is multi-threaded and has an efficient code path, where can scalability go wrong? There is a multitude of ways. For example, improperly or poorly configured network routers can overload one processing path while underutilizing another path. Poorly configured servers with insufficient memory or processor power will invariably lead to poor performance. Using server clusters incorrectly can lead to load balancing and reliability issues. If they are not set up properly, database servers will severely impact an OLTP application’s ability to achieve even marginal performance.

So, where does scalability really come from, and how is it best achieved? Scalability fundamentally derives from the ability of an application to take advantage of a faster server, more servers, or both. That means the application will produce improved performance via increased transaction capacity, reduced response time, or both. Assuming a reasonable design and execution, most applications (our earlier single-threaded example notwithstanding) will show linear improvements in performance relative to the change in server count and/or processing power. Running on multiple servers does require that the application be replicable in some manner, or that the servers themselves provide the replication transparently.

The multi-server approach to scalability will eventually become both complex and expensive as the number of servers increases. Increasing the processing power of a server seldom, if ever, creates any additional complexity. And, for commodity servers, Moore’s Law of computing power (2 times power increase every 18 months) comes into play and the additional expense of more processing power is not going to be significant.

Another approach is using virtualization technology to create replication. A Virtual Machine (‘VM’) will, in most cases, create a veneer of replication even when the application is not conducive to duplication. However, the VM approach possesses a fatal flaw: it creates potential single points of failure for multiple instances of the application. This weakness can be mitigated by running the VM on a proprietary fault tolerant hardware platform or in some form of clustered environment. Obviously, this two-part approach adds additional hardware costs on top of the costs for the virtualization technology. And it is a complex solution that creates a number of integration issues to be resolved.

For any payment processing application, replication creates a number of integration and configuration issues. As additional copies of the application are created, communications connections between the application and encryption devices, terminals (POS, ATM, Mobile, Kiosk, etc.) and gateways must be replicated. As connections are replicated, a decision must be made as to whether these are real or virtual connections. Juxtaposed to those choices are the decisions made on the various connection types in regards to failure points and backup paths. If a single physical communication line outage takes out three virtual connections to three separate applications instances, then all three application instances are a single point of failure by default.

When factoring the communications connections along with encryption device and database connections and communications routers into the decision cycle, the complexity of the integration and configuration process increases exponentially. And we haven’t even begun to talk about the overhead of these extra connections. Stated simply, over-replication will often create more problems than it solves.

So, it is readily apparent that scalability for an OLTP application is far more than just a matter of tossing more and faster hardware into the performance pot. For OLTP applications in general and payment processing applications specifically, scalability is a delicate balance of server power and application replication. Knowing where that balance occurs comes from years of experience designing, supporting and managing payment processing environments. In another post, I will talk about how OLS created a practical solution for the performance and scalability issues for a payments processing application.

Tools of the Trade in developing Payment Systems

I was thinking about the tools, systems, and general knowledge that I use on a daily basis, and thought it would be a good exercise to document them here:

Computer Systems:


  • A good text editor - UltraEdit, TextMate or vim
  • A good hex editor - UltraEdit, hexdump, hexedit or vim -b
  • ssh client - PuTTY or OS native ssh clients
  • svn client
  • NetCat - to pipe binary message dumps to simulators, and to create listening servers to accept message dumps.
  • Client and Server Simulators – write your own!
  • Calculator – the DEC to HEX and HEX to DEC functions are great for header lengths; if you are a real geek you have an HP-16c
  • Instant messaging clients and Skype to communicate with your team.


Knowledge:

  • Ability to read – seriously, read those specs!
  • ISO8583
  • TCP/IP Socket programming - both Client and Server
  • Database programming experience, e.g. SQL and O/R mapping tools, plus understanding the I/O requirements between write-intensive OLTP and read-intensive Data Warehouse data stores.
  • Payment Processing 101 knowledge
  • Data encoding techniques, character sets, and numbering systems - ASCII, EBCDIC, BCD, Binary, etc.
  • Basic understanding of encryption, including symmetric, asymmetric, PIN encryption, PIN translation, and DUKPT
  • Basic understanding of IT Security
  • PCI and PABP/PA-DSS requirements - review the audit procedures!
  • How to use Google
  • Ability to read – Note: This is intentionally listed twice :)
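The DEC-to-HEX note above comes up constantly with binary length headers. For instance (assuming a two-byte, big-endian length header, which is one common convention; actual header formats vary by message spec):

```java
// Computing a two-byte, big-endian (network order) binary length header --
// the kind of DEC<->HEX exercise the calculator entry above refers to.
// Two bytes is one common convention; check your message spec.
public class LengthHeader {
    static byte[] header(int len) {
        return new byte[] { (byte) (len >> 8), (byte) (len & 0xFF) };
    }

    public static void main(String[] args) {
        byte[] h = header(300);  // a 300-byte message
        System.out.printf("%02X %02X%n", h[0], h[1]);  // prints "01 2C"
    }
}
```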

Does your Payment Switch handle non-payment transactions and message formats ?

As a financial payment switch and switch vendor, we need to be agile, adaptable, and expandable to support different formats that are required by our customers. (See The New Normal for one of our more complex acquirer-side integrations.) One of these initiatives consisted of developing an interface to switch incoming MethCheck Pseudoephedrine Inquiry transactions from the Point-of-Sale to OLS.Switch, and then out to another end-point for further processing. While most message formats in the payment space follow the ISO-8583 standard, we do have many end-points we need to interface with that are either fixed length or variable length; we call these FSD messages. In the MethCheck implementation we used an FSD-based message format for the request and response messages. Our role here was to pass a customer-defined buffer from the POS system, through our switch, to the end-point, and pass a response buffer back to the POS system.

FSD Request


FSD Response


We have a very easy way of creating and populating these messages.

FSDMsg fsd = new FSDMsg ("file:cfg/meth-");
FSDMsg msg = (FSDMsg) ctx.tget (REQUEST);      // REQUEST: context key for the incoming message

fsd.set ("0", msg.get ("transaction-code"));
String storeNumber = msg.get ("store-number");
fsd.set ("41", storeNumber);

TranLog tranLog = (TranLog) ctx.tget (TRANLOG); // TRANLOG: context key for the transaction log entry
if (tranLog != null) {
    fsd.set ("46", ISOUtil.zeropad (Long.toString (tranLog.getId().longValue()), 19));
}

StringBuffer sb = new StringBuffer();
sb.append (msg.get ("register-logon-nbr"));
sb.append (msg.get ("meth-entry-mode"));
sb.append (msg.get ("meth-id-format"));
sb.append (msg.get ("meth-id-data"));
sb.append (RS);                                 // RS: record-separator character constant
sb.append (msg.get ("meth-person-info"));
sb.append (RS);
fsd.set ("meth-trans-info", sb.toString());

In less than a week of development time (testing and user acceptance testing will take longer), we were able to add an interface to our switch to handle this non-payment transaction type.
