Onion - the missing ingredient for Sage Line 50 / Sage Instant accounts packs in Excel
Full audit trails to underlying transactions from P&Ls, Balance Sheets, graphs, PivotTables, and departmental TBs in a stand-alone Excel file. Aged Debtors and Aged Creditor files too. Free 30 day trials. Download today at www.onionrs.co.uk

Sunday, 24 July 2011

Dealing with mixed data types in Excel ODBC queries (1)

There's a great post dealing with this at Daily Dose of Excel.  However, since I often don't have access to the registry on the machines I use, I was interested to find out whether I could make queries work no matter what the registry contains*. 

The best situation to find yourself in is if your registry has an ImportMixedTypes value of Text.   This allows you to set IMEX=1 in the extended properties for the query.  If you set HDR=No as well, the text column headers that would have been the field names with HDR=Yes become the first data row; because every column then contains at least one text value, the query returns text values for every field.  Criteria can suppress the display of the record containing the column headers.  Data conversion calculations provide the output of the query in the appropriate data type on all records.  Using this methodology all the original data will be in fields named F1, F2 etc.; the data conversion calculations performed in SQL can alias the field names to reflect the original field names.
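For reference, the extended properties referred to above typically read as follows when both options are set together (a sketch assuming an .xls workbook read through the Jet engine; .xlsx workbooks use Excel 12.0 in place of Excel 8.0):

Excel 8.0;HDR=No;IMEX=1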

Below is a select statement to demonstrate how to return each data type in the above scenario:
SELECT iif(isnumeric(F1), cdbl(F1), null) as [Number],
iif(isnumeric(F2), ccur(F2),0) as [Currency], 
iif(isdate(F3),cdate(F3),#1900-01-01#) as [Date],
''+F4 as [Text],
iif(F5='TRUE',cbool(1),iif(F5='FALSE',cbool(0),null)) as [Boolean]


The following where clause ensures that the column header record is suppressed:
WHERE F4<>'Text' or isnull(F4)

In the above, the currency column populates every record with a value, effectively converting nulls to zero.  This is vital if any GROUP BY operations are to return a value other than null.  For comparison, the number column doesn't replace the nulls. Below is the QueryTable output:

Number   Currency   Date         Text                                          Boolean
               -    01/01/1900   Other fields are Text in the original data
               -    01/01/1900   Other fields are empty in the original data
1234      567.89    30/05/2011                                                 -1
56         12.99    23/07/2011   The line above is empty                       0
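
To see why the zero substitution in the currency column matters, here is a sketch of an aggregation over the same data.  The worksheet name [Sheet1$] and the choice of F4 as the grouping field are assumptions for illustration only:

SELECT ''+F4 as [Text],
sum(iif(isnumeric(F2), ccur(F2), 0)) as [Currency],
sum(iif(isnumeric(F1), cdbl(F1), null)) as [Number]
FROM [Sheet1$]
WHERE F4<>'Text' or isnull(F4)
GROUP BY ''+F4

The [Currency] total returns a value for every group, whereas the [Number] total returns null for any group whose underlying values are all non-numeric.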


If you find that your ImportMixedTypes value is Majority Type, then IMEX=1 has no practical effect.  If a mixed type field is determined to be a text type, all values of other types will be suppressed (returned as null).  The only way to guarantee that the proper values are returned for all fields is to include, immediately below the column headers, at least half the number of rows specified by TypeGuessRows, populated with data of the correct types; this ensures that the Majority Type identified for each field is the correct one.  These rows can be hidden once populated.  With HDR=No, criteria can suppress the display of the record containing the column headers as well as the records contained in the now hidden data rows.  Data conversion calculations provide the output of the query in the appropriate data type on all records.  Using this methodology all the original data will be in fields named F1, F2 etc.; the data conversion calculations performed in SQL can alias the field names to reflect the original field names.

Below is a select statement to demonstrate how to return each data type in the above scenario:
SELECT iif(isnumeric(F1), cdbl(F1), null) as [Number],
iif(isnumeric(F2), ccur(F2),0) as [Currency], 
iif(isdate(F3),cdate(F3),#1900-01-01#) as [Date],
''+F4 as [Text],
iif(isnumeric(F5),cbool(F5),null) as [Boolean]


The following where clause ensures that the column header record is suppressed in addition to the records contained in the now hidden data rows (populated with '0'):
WHERE F4 not in('Text','0') or isnull(F4) 

The QueryTable output is identical to that shown above.


* the one exception is if the registry has TypeGuessRows=0 (scan all rows) and ImportMixedTypes=Majority Type - an unlikely scenario.

Tuesday, 31 May 2011

Passing multiple values in a single Cognos Report Studio value prompt

I found a way to pass multiple value combinations through in a single Cognos Report Studio static choices prompt.  Here I describe the generic approach using just two values passed in each available choice.

In a value prompt (P_Prompt) static DisplayValue and UseValue choices are defined as:

DisplayValue   UseValue
Choice 1:          ~1[Prompt value 1],~2[Prompt value 2]
Choice 2:          ~1[Prompt value 2],~2[Prompt value 3]

The UseValues comprise two Member Unique Name (MUN) tokens separated by a comma and identified with a leading ~1 or ~2 (which are unique character combinations).

The key to the approach is the macro expression #join(',',substitute('~[n]','',grep('~[n]',array(split(',',prompt('P_Prompt','token'))))))# which returns the [n]th token identified in the [n] value choice made.

Here’s a description of how the macro works from the inside out.  The UseValue chosen in response to the prompt is returned as a token. The split macro creates multiple tokens from the single token returned using a comma as the delimiter.  The multiple tokens are placed in an array using the array macro.  The grep macro identifies the single array element containing the ~[n] search item specified and the substitute macro removes the ~[n] identifier from the token.  The join macro converts the array item (now only one remaining) back into a string for use as a MUN.

If I choose the DisplayValue “Choice 1” when running a report, the UseValue “~1[Prompt value 1],~2[Prompt value 2]” will be available to the underlying report.  The macro calculation #join(',',substitute('~1','',grep('~1',array(split(',',prompt('P_Prompt','token'))))))# will return [Prompt value 1] within the report whilst #join(',',substitute('~2','',grep('~2',array(split(',',prompt('P_Prompt','token'))))))# will return [Prompt value 2] within the report.

Similarly, if I choose the DisplayValue “Choice 2” when running a report, the UseValue “~1[Prompt value 2],~2[Prompt value 3]” will be available to the underlying report.  The macro calculation #join(',',substitute('~1','',grep('~1',array(split(',',prompt('P_Prompt','token'))))))# will return [Prompt value 2] within the report whilst #join(',',substitute('~2','',grep('~2',array(split(',',prompt('P_Prompt','token'))))))# will return [Prompt value 3] within the report.
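
By way of illustration, one way to consume the two extracted tokens is as a pair of slicer member sets in the report query.  The expressions below are a sketch only, and assume the MUNs returned by the macros are valid members of the cube being reported on:

set(#join(',',substitute('~1','',grep('~1',array(split(',',prompt('P_Prompt','token'))))))#)
set(#join(',',substitute('~2','',grep('~2',array(split(',',prompt('P_Prompt','token'))))))#)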

Maybe others have found another way to achieve the same result.  If so, I’d be interested in how you’ve done it.

Sunday, 27 March 2011

E-reconcile for everyone in Sage Line 50 / Sage Instant

I was using the manual bank reconciliation feature in Sage Instant Accounts the other day and found it to be a bit laborious marking cash book transactions off one by one.  I don't subscribe to the file transfer module offered by my bank (for an extortionate amount of money) so I thought the e-Banking, e-Reconcile option was out. 

However, then I remembered some work I'd done some considerable time ago where bank accounts with tens of thousands of transactions had to be reconciled in a different software package.  The only statement information available was a csv file containing statement date, reference and amount.  I constructed an Excel macro to create a statement file similar to the ones produced by the bank's file transfer module (http://www-2.danskebank.com/Link/CAPDBUK/$file/CAP_DB_BRAND_UK.pdf) and, hey presto, I was able to import the necessary tens of thousands of bank statement transactions and get to work. 

Thinking about my Sage situation I realised that an ordinary set of transactions saved from my basic electronic banking screens will give me a csv file containing statement date, reference and amount. Consequently, I dusted down the old Excel routine and ran a month's statement transactions through it. I enabled e-Banking within Sage and imported the statement details using the file I'd created.  Now I am able to use the auto reconcile features within Sage!  Nice. Even better, I find that, with a bit of pre-processing in the Excel routine, I can maximise the "full match" (reference and amount) success rate and cut down manual matching to a minimum.

It should be possible to use this methodology for any bank account, so I'm planning to implement this approach for all Sage bank accounts where file transfer facilities don't exist with the bank.  I like to have software do as much work for me as possible, so this fits in with my overall strategy pretty well.

P.S. if you use Sage you may also be interested in this Excel reporting tool I've written - www.onionrs.co.uk. A video is available on YouTube.

Tuesday, 26 October 2010

Conditional SQL in Cognos by using prompt values to control line commenting

Here's a technique that allows the omission of certain sections of SQL in response to prompt values in Cognos Report Studio:

1 #join('|',substitute(prompt('Prompt','token','Excl'),'',grep(prompt('Prompt','token','Excl'),array('Incl', 'Excl /* '))))#
2 And column_name is not null 
3 #join('|',substitute(prompt('Prompt','token','Excl'),'',grep(prompt('Prompt','token','Excl'),array('Incl', 'Excl */ '))))# 

The prompt named Prompt has two values, Incl and Excl.

When the prompt value is 'Excl' the grep macro in line 1 returns the array element 'Excl /* '. The substitute macro then strips the 'Excl' search string, leaving an array element containing ' /* '. The join macro converts the array element to a string. /* is used as the start of a comment block in Cognos.

Line 3 resolves to */ when the prompt value is 'Excl', thus commenting out line 2 entirely.
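
Putting it together, when the prompt value is 'Incl' both macro lines resolve to empty strings and the SQL that runs is simply:

And column_name is not null

When the prompt value is 'Excl' the resolved SQL is:

 /* 
And column_name is not null
 */ 

so line 2 is ignored entirely.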

Friday, 1 October 2010

Functions available with Text File ODBC driver

I can't find documentation anywhere.  The functions available seem to be mainly VBScript functions. This is my work-in-progress list and some comments thereon.  I refer to the VBScript documentation for questions on syntax.

http://msdn.microsoft.com/en-us/library/3ca8tfek(VS.85).aspx


Use [ ] or `` around field names.  (` is the backtick character, on the key to the left of the 1 at the top of the keyboard.)

isnull() returns true or false. It is a good idea to use iif(isnull(<field>),0,<field>) to set a value for null entries if you might ever want to perform aggregation on the field.

mid()
left()
right()
strcomp()


Concatenate using + e.g. 'Dropped'+' '+'Pennies'

Use single quotes (not double quotes) for text.

year('2009-12-31') = 2009
month('2009-12-31') = 12
day('2009-12-31') = 31

dateadd('yyyy',1,`Date`)
dateserial()
datevalue()
now()
weekday()

isdate()
isnumeric() (watch for nulls)
isempty()

len()
lcase()
ucase()
trim()
ltrim()
rtrim()
string()

chr()
asc()
instr()

rnd()
str() converts number to string (leaves space for minus)
space()

sgn()
abs()

round()
CBool()
CByte()
CCur()
CDate()
CDbl()
CInt()
CLng()
CSng()
CStr()

timer()
typename()
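
Tying a few of these together, below is a sketch of a query against a hypothetical sales.csv containing Name, Code, Amount and Invoice Date columns (the file and field names are examples only):

SELECT trim(`Name`) + ' ' + ucase(`Code`) as [Label],
iif(isnull(`Amount`), 0, ccur(`Amount`)) as [Amount],
year(`Invoice Date`) as [Year]
FROM [sales.csv]
WHERE isdate(`Invoice Date`)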

Creating virtual records with Text File ODBC driver

Sometimes it can be useful to be able to add a virtual record to a recordset derived from a text file without actually adding a physical record into the file.

With the text file ODBC driver the table name can be omitted such that 

SELECT 'Hello world!' as [Greeting]

results in

Greeting
Hello world!

Consider a names.txt file with first_name and surname columns. You could add John Doe onto the query output by doing something like

SELECT * FROM [names.txt]
UNION ALL
SELECT 'John' as [first_name]
, 'Doe' as [surname]

CurrencyPosFormat and CurrencyNegFormat settings in Schema.ini

There are 4 possible values for CurrencyPosFormat in a schema.ini file:
  • Currency symbol prefix with no separation ($1)
  • Currency symbol suffix with no separation (1$)
  • Currency symbol prefix with one character separation ($ 1)
  • Currency symbol suffix with one character separation (1 $)
However, they are not specified as ($1) and so forth; they are specified with an index number between 0 and 3.
For example, CurrencySymbol=£ and CurrencyPosFormat=1 will result in a setting of 1£.

There are 16 possible values for CurrencyNegFormat in a schema.ini file:
  • ($1)
  • -$1
  • $-1
  • $1-
  • (1$)
  • -1$
  • 1-$
  • 1$-
  • -1 $
  • -$ 1
  • 1 $-
  • $ 1-
  • $ -1
  • 1- $
  • ($ 1)
  • (1 $)
However, they are not specified as ($1) and so forth; they are specified with an index number between 0 and 15.
For example, CurrencySymbol=£ and CurrencyNegFormat=15 will result in a setting of (1 £).
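
For reference, a minimal schema.ini entry putting these settings together might look like this (the file name, column layout and choice of format indexes are examples only):

[sales.csv]
ColNameHeader=True
Format=CSVDelimited
CurrencySymbol=£
CurrencyPosFormat=0
CurrencyNegFormat=0
Col1=Description Text
Col2=Amount Currency

With those index values a positive amount is interpreted as £1 and a negative amount as (£1).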

So far as I can tell, CurrencySymbol=£ and CurrencyPosFormat=1 will cope with either 1£ or just 1 in a data file.  However, writing to a file with CurrencySymbol=£ and CurrencyPosFormat=1 specified will result in 1£ format for currency output.

I hope this saves someone some heartache.

By the way, I've been unable to get the CurrencyThousandSymbol in schema.ini to work at all.  I've given up trying at the moment.  I'd love to hear if anyone else gets it working.

P.S. It's CurrencyThousandsSymbol not CurrencyThousandSymbol! The Microsoft documentation is wrong! This works for reading files but not writing them. I'll keep looking at this from time to time.  BTW the international thousands separator (a non-breaking space) doesn't seem to work.  Again I'll report back if I find otherwise.