PIN Block Formats - The Basics

I had a call yesterday about TG-3 (which is now called TR-39) approved ANSI PIN block formats. The TR-39 Audit Procedures state that ISO 9564-1 Format 0 (ISO-0) and Format 3 (ISO-3) are the only approved formats:

4.1.3 X9 Approved PIN Block Formats

Documented procedures exist and are followed that ensure any cleartext PIN-block format combined with a PIN encryption process has the characteristic that, for different accounts, encryption of the same PIN value under a given encryption key does not predictably produce the same encrypted result. (Note: any cleartext PIN block formats 0 and 3 meet this requirement, as specified in X9.8-1.)

Reference X9.8-1 - Sec. 4(c), Sec. 6.2, Sec. 8.3.1, Sec.8.3.2, and Sec. 8.3.5

In case you are curious, here are Visa's PIN Security Requirements:

Requirement 3: For online interchange transactions, PINs are only encrypted using ISO 9564–1 PIN block formats 0, 1 or 3. Format 2 must be used for PINs that are submitted from the IC card reader to the IC card. Other ISO approved formats may be used.

This requirement further states:

PINs enciphered using ISO format 0 or ISO format 3 must not be translated into any other PIN block format other than ISO format 0 or ISO format 3. PINs enciphered using ISO format 1 may be translated into ISO format 0 or ISO format 3, but must not be translated back into ISO format 1.

(This last paragraph addresses an attack on PIN blocks that can be translated into format 1 on an HSM, which would expose the clear PIN.)

Let’s take a look at a few PIN block formats.

For our examples:

  • P - PIN digit
  • F - hex 0xF
  • A - one of the last 12 digits of the PAN, not including the check digit
  • R - random hex character (0-9, A-F)

Let us use account number 4111111111111111 and PIN 1234 (the examples use a PIN length of 4, but PINs can be 4-12 digits).

“PIN Pad” format, or IBM 3624

PPPP FFFF FFFF FFFF

Our PIN block: 1234 FFFF FFFF FFFF

Notes: An old legacy method; not allowed and not approved for use.

ISO-0

04PP PPFF FFFF FFFF (0 = ISO-0 format, 4 = length of PIN)
XOR with 0000 AAAA AAAA AAAA (formatted PAN)

Our PIN block: 0412 34FF FFFF FFFF XOR 0000 1111 1111 1111 = 0412 25EE EEEE EEEE

Notes: Introduces variability into the PIN block by XOR'ing with a formatted PAN. Best practice is to use ISO-3 instead of ISO-0, as there are attacks against ISO-0.
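Here is a minimal Java sketch of ISO-0 construction (hypothetical helper names, not OLS.Switch source); with PIN 1234 and PAN 4111111111111111 it reproduces the 0412 25EE EEEE EEEE block above:

public static String isoZeroPinBlock (String pin, String pan) {
    // PIN field: control digit 0, PIN length, PIN digits, 0xF fill
    StringBuilder pinField = new StringBuilder ("0")
        .append (Integer.toHexString (pin.length()).toUpperCase())
        .append (pin);
    while (pinField.length() < 16)
        pinField.append ('F');
    return xorWithPanField (pinField.toString(), pan);
}

// XOR the 16-nibble PIN field with 0000 + the last 12 PAN digits
// (check digit excluded), nibble by nibble. Assumes a PAN of at
// least 13 digits.
static String xorWithPanField (String pinField, String pan) {
    String panField = "0000" + pan.substring (pan.length() - 13, pan.length() - 1);
    StringBuilder block = new StringBuilder (16);
    for (int i = 0; i < 16; i++) {
        int nibble = Character.digit (pinField.charAt (i), 16)
                   ^ Character.digit (panField.charAt (i), 16);
        block.append (Character.toUpperCase (Character.forDigit (nibble, 16)));
    }
    return block.toString();
}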

ISO-1

1412 34RR RRRR RRRR (1 = ISO-1 format, 4 = length of PIN)

Our PIN block: 1412 348D 665A C5A3

Notes: Introduces variability into the PIN block by using random padding characters. Best practice is not to allow HSMs to accept or use this PIN block format. Not allowed by TR-39, but allowed by Visa.

ISO-3

34PP PPRR RRRR RRRR (3 = ISO-3 format, 4 = length of PIN)
XOR with 0000 AAAA AAAA AAAA (formatted PAN)

Our PIN block: 3412 34C8 CBA4 285C XOR 0000 1111 1111 1111 = 3412 25D9 DAB5 394D

Notes: Introduces variability into the PIN block by using random padding characters and by XOR'ing with a formatted PAN. Best practice is to use this format.
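ISO-3 construction is the same XOR-with-formatted-PAN step as ISO-0; only the control digit changes from 0 to 3, and the fixed 0xF fill becomes random. A short sketch reusing the hypothetical xorWithPanField helper above, with random fill per the R legend:

import java.security.SecureRandom;

public static String isoThreePinBlock (String pin, String pan) {
    SecureRandom rnd = new SecureRandom();
    StringBuilder pinField = new StringBuilder ("3")
        .append (Integer.toHexString (pin.length()).toUpperCase())
        .append (pin);
    while (pinField.length() < 16)      // R: random hex character (0-9, A-F)
        pinField.append (Character.toUpperCase (
            Character.forDigit (rnd.nextInt (16), 16)));
    return xorWithPanField (pinField.toString(), pan);  // same step as ISO-0
}

Because of the random fill, the same PIN and PAN produce a different clear PIN block on every call, which is exactly the variability property the audit language above asks for.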

Sales Employment Opportunity


OLS is in the market for a sales representative.

Please refer to our LinkedIn job posting for more details.

CaseSwitch - Source Port Routing

We have implemented a new component in our Java and jPOS fueled payment switch, OLS.Switch, which we have called the CaseSwitch. The vast majority of our switching algorithms are based either on the determination of CardType, which dictates which outbound endpoint we send a transaction to, or on card BIN ranges. An example of a BIN range:

(Image: BinRanges.png - sample BIN range configuration)

If a card number's BIN (or IIN) matches our BIN range configurations, we select the appropriate endpoint. In this example, if we have a VISA or MC card we switch it out to an FDR gateway. If we were connecting to a MasterCard MIP or a Visa VAP or DEX, we would instead have MC and VISA endpoints defined with our BankNet and VisaNet interfaces and switch the transactions to those endpoints.

An example of a card type: we have certain transaction types where the card type alone tells us where they go. Many of these are internal authorization hosts, such as implementations of Authorized Returns, MethCheck, Loyalty, and Couponing. Others are transactions where the transaction type also dictates the card type, such as those to GreenDot, InComm, and other external hosts where a BIN range lookup is unnecessary.

Source (port) based routing: we recently had a requirement for source-based routing, where the source port dictates the outbound transaction path(s). In our server we accept the incoming transaction and place a context variable we call PORT that tells us which server port the transaction came in on. Once we have that additional data, we can perform a logic branch in our Transaction Manager that looks like this, allowing us to define transaction paths based on the incoming port of the server. So, in this example:



<property name="case 5001" value="LookUpResponse Log Close Send Debug" />
<property name="case 5002" value="QueryRemoteHost_xxx Log Close Send Debug" />
<property name="case 5005" value="QueryRemoteHost_yyy Log Close Send Debug" />
<property name="default" value="Log Close Debug" />

  • Port 5001 - we perform an authorization locally.
  • Port 5002 - we switch the transaction out to endpoint xxx, reformatting it to that endpoint's message format and interchange requirements.
  • Port 5005 - we switch the transaction out to endpoint yyy, reformatting it to that endpoint's message format and interchange requirements.
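The selection logic itself can be as small as a jPOS GroupSelector keyed off the PORT context variable. A minimal sketch, assuming a participant configured with the "case NNNN" properties shown above (illustrative code, not the OLS.Switch source):

import java.io.Serializable;
import org.jpos.core.Configurable;
import org.jpos.core.Configuration;
import org.jpos.transaction.Context;
import org.jpos.transaction.GroupSelector;

public class CaseSwitch implements GroupSelector, Configurable {
    private Configuration cfg;

    public void setConfiguration (Configuration cfg) {
        this.cfg = cfg;
    }
    public int prepare (long id, Serializable context) {
        return PREPARED | READONLY;
    }
    public void commit (long id, Serializable context) { }
    public void abort (long id, Serializable context) { }

    // Map the incoming server port to a group list, e.g. PORT=5001
    // selects "LookUpResponse Log Close Send Debug".
    public String select (long id, Serializable context) {
        Context ctx = (Context) context;
        Object port = ctx.get ("PORT");  // set by our server on receive
        String groups = (port != null) ? cfg.get ("case " + port, null) : null;
        return (groups != null) ? groups : cfg.get ("default", "");
    }
}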

Signed Overpunch (Zoned Decimal), or: what are these weird characters in numeric fields?


We interface to many different systems, and sometimes we get to talk to IBM mainframes or message formats that use signed overpunch,

where we see numeric values like "100000{", "100999I", or "100495N".

Signed overpunch is used to save a byte: the last character indicates both the sign (+/-) and a digit value.

These types are defined in a COBOL copybook; the definition looks like:

PIC S9(3)V9(4).

which equates to:

100000{ = 100.0000

100999I = 100.9999

100495N = -100.4955

Here is a snippet of Java Code that we use to handle this:

public static final char[] gt_0 = {
    '{', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I'
};
public static final char[] lt_0 = {
    '}', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R'
};

protected static String convertToCobolSignedString (String aString) {
    int aInt = Integer.parseInt (aString);
    char[] conv = (aInt >= 0) ? gt_0 : lt_0;
    int lastDigit = Math.abs (aInt % 10);   // the overpunch replaces the last digit
    // drop any leading '-'; the sign is carried by the overpunch character
    StringBuilder sb = new StringBuilder (Integer.toString (Math.abs (aInt)));
    sb.setCharAt (sb.length() - 1, conv[lastDigit]);  // e.g. -1004955 -> "100495N"
    return sb.toString();
}
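Going the other way, a minimal decoding sketch (a hypothetical companion helper, using the same zone tables) turns "100495N" back into -1004955; the implied decimal point from the copybook's V is applied afterwards:

protected static long convertFromCobolSignedString (String s) {
    char last = s.charAt (s.length() - 1);
    int idx = new String (gt_0).indexOf (last);
    boolean negative = false;
    if (idx < 0) {
        idx = new String (lt_0).indexOf (last);
        negative = true;
    }
    if (idx < 0)
        throw new IllegalArgumentException ("not an overpunched value: " + s);
    // replace the overpunch character with its digit, then apply the sign
    long value = Long.parseLong (s.substring (0, s.length() - 1)) * 10 + idx;
    return negative ? -value : value;
}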

Velocity Manager and Velocity Profiles

I recently put together a document that describes our Issuer implementation of Transaction Velocity Checks during the authorization process. We use a facility called the Velocity Manager to implement authorization rules that are based on frequency of transactions over a given time period. Velocity profiles can be used to implement extensible velocity-based logic.

Here is the data-structure that defines our Velocity Limits:

(Image: Velocity Profile.png)

Here is a snapshot of a configured Velocity Limit based on Accumulated Transaction Amounts:

(Image: 500 limit.png)

Here is a snapshot of a configured Velocity Limit based on Transaction Counts over a given period:

(Image: 10 txns.png)
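The shape of a velocity limit like the ones above is easy to sketch in Java (field names here are illustrative, not our actual schema):

public class VelocityLimit {
    String profileId;      // the velocity profile this limit belongs to
    int periodHours;       // the rolling time window, e.g. 24 hours
    int maxCount;          // e.g. no more than 10 transactions per window
    long maxAmountCents;   // e.g. decline once $500.00 has accumulated
}

During authorization, the Velocity Manager accumulates the card's activity over the window and applies whichever limits the card's velocity profile configures.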

Continuous integration

Testing acquirer-side implementations is hard. The incoming message formats and communication protocols from the card acceptors (payment terminals, point-of-sale machines, store controllers) are known, and the endpoints' message formats and communication protocols are also known. The challenge is testing and validating the translated incoming messages to the various outbound endpoints, their communication protocols, and their message formats. Some endpoints provide simulators (very, very few); others will allow test access over leased lines and communication equipment on a separate test IP/port combination. This works well for our customers when performing user acceptance and certification against these endpoints, but it isn't viable for regression testing during development, before code delivery. We have solved some of this with various custom-built response simulators that have basic logic, typically keyed on the transaction amount, to provide alternating response messages. These response messages are built from message specs or from captured network traffic on test systems. We can only be sure we are simulating basic transaction types and request and response channels, however. Oh, and then there is always this problem.

Issuer-side implementations are easier to test: you can feed the authorization host both simulated network and local transaction sets to test implemented authorization rules and other features.


In 2009 we built and launched a new Issuing Payment Switch and tested it using Continuous Integration techniques. This system has 3 primary interfaces.

  1. Network - connected to an association's network to receive incoming transactions based on BIN ranges.
  2. Local - Card Management style interface - Manage Cardholder, Cards, and Accounts on the system - and allow local transaction sets to be performed.
  3. Flat file - generation of an Authorized File, a Financial File, and a Card Status and Balances File, plus processing of clearing/settlement/reconciliation files.

Continuous Integration as defined by Martin Fowler:

Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily - leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly. This article is a quick overview of Continuous Integration summarizing the technique and its current usage.

CI’s general steps:

  1. Maintain a code repository
  2. Automate the build
  3. Make the build self-testing
  4. Everyone commits every day
  5. Every commit (to mainline) should be built
  6. Keep the build fast
  7. Test in a clone of the production environment
  8. Make it easy to get the latest deliverables
  9. Everyone can see the results of the latest build
  10. Automate Deployment

Our CI model is based on an implementation that is scheduled multiple times a day. It checks out the code from our software repository, compiles it, builds a new database with its schema and required setup data, starts our software services, performs unit tests, shuts the software services down, and sends us an email telling us whether the code compiled, along with the results of the unit tests and which ones were successful and unsuccessful. The email attachments contain a zipped run log of the transactions and a unit test report.
Our unit tests are written in the Groovy programming language, and we leverage the TestNG testing framework. We act as a network client to our switch, which is built and run from the current source, and perform both network- and local-side testing. The system is also set up using some of the local transaction sets. Here is a short list of a few of the transaction types:
Local:

  • Add Cardholder
  • Add Card
  • Add Account
  • Debit Account (Load Funds)
  • Set Cardholder/Card/Account Status (Active/Lost/Stolen/Suspended/etc)
  • Local Debit and Credits
  • Balance Inquiry
  • Expire Authorization
  • View Transaction History

Network:

  • Authorization
  • Completions
  • Chargebacks
  • Representments
  • Force Posts
  • Returns
  • Reversals

The combination of local and network transaction types is tested against various test cases.
If we set up a cardholder with AVS information and run an AVS authorization, do we get the expected results, for each AVS result code? Does an authorization on a statused card get approved? Do transactions with amounts greater than, equal to, or less than the cardholder's available balance get authorized or declined properly? An authorization on a card not found? You get the idea.
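As one concrete illustration of the balance checks above, a hedged sketch in Java with TestNG (our real suite is Groovy, and every helper name here is hypothetical):

import static org.testng.Assert.assertEquals;
import org.testng.annotations.Test;

public class BalanceAuthorizationTest {
    @Test
    public void declineWhenAmountExceedsAvailableBalance () {
        TestHarness h = new TestHarness();      // hypothetical local/network client
        String card = h.addCardholderCardAndAccount();
        h.loadFunds (card, 10000);              // local side: load $100.00

        String rc = h.authorize (card, 15000);  // network side: authorize $150.00
        assertEquals (rc, "51");                // expect an insufficient-funds decline
    }
}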
We build and test our issuer platform a few times a day, and each developer can also run the test suite locally in their development environment; this ensures that future changes don't impact existing functionality. On a test failure, the relevant information is included in the autotest emails so issues can be identified by our business analysts and developers without logging into test systems.
Oh, and please don't break the build ;)

10,000 TPS

I ran across Kiplinger's article and picture "The Credit or Debit Debate Visualized." It is a very nice picture of the usage of credit and debit cards over time, as well as a nice list of pros, cons, and differences between the two. I encourage you to check it out for a good basic summary.

In payment systems I don't really care that much about the type of card. Credit vs. debit, at a basic level, comes down to this for me: one has PINs, requires the usage of HSMs, and requires real-time reversals; one uses clearing files and the other reconciliation files. But I digress.

At the bottom of the picture there are some statistics. Kiplinger being a personal finance website (not to mention the various "Letters"), these are mostly consumer related, but this one caught my eye:

(Image: ABA-TPS.png)

Which makes me chuckle, because lots of prospects tell us they need a system able to support the world's average TPS (transactions per second), or a small fraction of it.

The Freeze

The "Freeze" is upon us in the payments space. Any and all system changes should have been made, and the time spent making sure your payment systems and infrastructure can handle the Black Thursdays, Fridays, and holiday season should be complete and in monitoring mode. Rather than expand more here, Andy Orrock wrote an excellent piece on this last year: Everybody Freeze!

In the Payment Systems World, we've now entered "The Freeze." This is the industry-wide term associated with the period running from approximately the Thursday before Thanksgiving (which is the fourth Thursday of November in the US) through to about the second full week in January. During that period, we don't do any production releases, unless it's a fix of a critical nature. We advocate the same practice for our clients. We also recommend that they not undertake material changes to hardware configurations, databases, scripts, or any other piece of supporting or underlying technology.

Traditionally, it’s been a good time for us to focus on big projects that have a Q1 delivery date. We can stay under the covers and make some serious progress on those bigger initiatives.

We also send out a customer letter re-emphasizing how to get a hold of us. The letter stresses the importance of vigilance and watchfulness over key production systems. It reminds our clients that we’re Always Available. Our firm was founded on the back of 24x7x365 support of mission-critical production systems. We get paid to make sure our clients can – to the fullest extent possible – enjoy the holidays with their family knowing that we’ve got their back on support.

Why all the heightened concern? It's the nature of payment systems: there's a tremendous upsurge in volume in the freeze period. If you've got a latent bottleneck lying dormant and ready to strike, the unfortunate reality is that it's going to nail you right between the eyes on a killer day like Black Friday, Cyber Monday, or Christmas Eve. We service some stored value authorization endpoints that see massive 20x surges in volume on December 24th. So, you've got to be ready.

We work the other ten-and-a-half months of the year to make this month-and-a-half as uneventful as possible.

You don't know until you know (or go into Production)

Over the last six months we have been busy building and implementing an OLS.Switch issuer implementation with one of our customers and their banking and payment processing partners. It has been a process of reviewing and implementing message specifications, business processing requirements, authorization rules, clearing, settlement, flat file, and reporting requirements. We filter external messages into our IMF - Internal Message Format, based on ISO 8583 v2003 - and built an interface for card management functions via our local APIs and message sets, along with client simulators that try to faithfully reproduce what happens when you are connected to a real system.

Testing on test systems is the next step: replacing our client simulators with other "test" systems driven by simulators at the processing gateway we interfaced to. Those simulators have limitations in their configured test suites or test scripts; some require manual entry to send original data elements for subsequent transaction types (e.g. completions and reversals). We generate clearing and settlement files and match those to on-line test transactions and our use cases.

After on-line testing, you connect to an "association" test environment to do "certification" and run a week's worth of transactions through a wider test bed. Then you are certified, your BIN goes live, and you enter a production pilot mode, where you watch everything like a hawk.

You can do all of the simulated testing of on-line transactions and off-line clearing and settlement files that you want; when you connect to the real world and do your first pilot transaction, that is most likely where you will see something that wasn't simulated, tested, or even included in certification. It happens. You need to be proactive: set up reviews and manual interventions, and perform file generation when you have staff available to review the output before it is released for further processing.

What have we seen?

  • Test environments that are not as robust as production or not setup with up-to-date releases.
  • Certain real-world examples are hard to simulate - reversals, time-outs.
  • Thinly-trafficked transactions (chargebacks, representments)… people can't even define these, much less create them in test.
  • Poor or incorrect documentation of message specifications.
  • You receive Stand-In Advices or other transactions on-line that you don’t see in testing or certification.

Production pilot is a very important phase of testing. It is where you discover and address the < 1% of issues nobody catches in earlier project life-cycle phases. What can happen, WILL happen. What you think might occur infrequently will bite you sooner, not later.

Protect Debug Info Transaction Participant

jPOS-EE has a very handy transaction participant called "Debug"; its main purpose is to dump the contents of the jPOS Context. While this is very helpful in test modes and during development, the Context remains "un-protected" and all of the data stays in the clear. Even the ProtectedLogListener and FSDProtectedLogListener will not protect data in the Context. Enter the ProtectDebugInfo transaction participant, a configurable implementation I wrote based on some of Alejandro's ideas, one that lives in most of OLS's payment products in various specific iterations. Its configuration looks like this:

(Image: ProtectDebugInfo.png)

Protecting your q2.log, in this truncated example:

account-number: ‘599999______0001’ 599999______0001
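The idea behind such a participant is straightforward; here is a hedged sketch (illustrative, not the OLS or jPOS-EE source) that masks configured Context entries before Debug dumps them, leaving the first six and last four characters visible, as in the account number above:

import java.io.Serializable;
import org.jpos.core.Configurable;
import org.jpos.core.Configuration;
import org.jpos.transaction.Context;
import org.jpos.transaction.TransactionParticipant;

public class ProtectDebugInfo implements TransactionParticipant, Configurable {
    private Configuration cfg;

    public void setConfiguration (Configuration cfg) {
        this.cfg = cfg;
    }
    public int prepare (long id, Serializable context) {
        Context ctx = (Context) context;
        // each "protect-entry" property names a Context key to mask
        for (String key : cfg.getAll ("protect-entry")) {
            Object v = ctx.get (key);
            if (v instanceof String)
                ctx.put (key, mask ((String) v));
        }
        return PREPARED;
    }
    public void commit (long id, Serializable context) { }
    public void abort (long id, Serializable context) { }

    // e.g. a hypothetical '5999990000000001' -> '599999______0001'
    private static String mask (String s) {
        if (s.length() <= 10)
            return s;
        StringBuilder sb = new StringBuilder (s);
        for (int i = 6; i < s.length() - 4; i++)
            sb.setCharAt (i, '_');
        return sb.toString();
    }
}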
