Tuesday, August 13, 2024
Kscope24 Speakers' Awards
Wednesday, June 5, 2024
Happening Now: ODTUG Board of Directors Election
Friday, May 31, 2024
FCC - Showing Calc Status on a Report
Friday, May 24, 2024
When was the last data load?
I had a request recently for a simple way for users to know when the last data load occurred. They could go into Process Details or Jobs, maybe, but was there a simple way to check from within a report, a data grid, or even Smart View? There may be something better, but here's what I did.
I created an account called LL_DateTime. I originally was going to put the year in one account, the month in another, etc., but I ended up just using the one account.
Within Data Management (this part isn't in the new interface yet), I created a logic group to duplicate a record. I picked a retained earnings account for one of the primary legal entities so there would always be a data point to duplicate.
After assigning the logic group to the location, I used a SQL map to change the amount. The two SQL functions in use here are SUBSTR (get a subset of a string) and SYSTIMESTAMP (date and time on the server).
SYSTIMESTAMP returns a format like 28-MAY-24 02.34.56.654321 PM -00:00, which isn't something we could load as an amount. So SUBSTR is used to pull out pieces of this and combine them into something useful. A few components need mapping: the month of May has to be converted to a number, and the AM/PM indicator has to be converted to a 24-hour clock (again, the goal is to load the date and time as one amount). A couple of different WHENs direct the calc (if MAY, then 05; if PM, then add .12 [12 hours]). Admittedly, there is probably a more elegant way of converting the month to a number without having to enumerate all twelve options.
So the format of the output looks like this: 20240528.1434. This is an easy number to load and report.
The last part of the output (-00:00) indicates the time zone; in this case, UTC. This could be converted to a specific time zone, but with worldwide users most people would have to convert anyway. It's better to just use UTC and let each user deal with summer time, etc.
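The actual transformation is a SQL target mapping built from SUBSTR and CASE WHEN pieces, but the logic can be sketched in Python (the function name and parsing approach here are mine, purely for illustration):

```python
# Illustrative sketch: convert a SYSTIMESTAMP-style string such as
# "28-MAY-24 02.34.56.654321 PM -00:00" into a loadable number like
# 20240528.1434. The SQL map does the same thing with SUBSTR slices
# and CASE WHEN branches for the month and the AM/PM indicator.

MONTHS = {"JAN": "01", "FEB": "02", "MAR": "03", "APR": "04",
          "MAY": "05", "JUN": "06", "JUL": "07", "AUG": "08",
          "SEP": "09", "OCT": "10", "NOV": "11", "DEC": "12"}

def timestamp_to_amount(ts: str) -> float:
    date_part, time_part, ampm = ts.split(" ")[:3]
    day, mon, yy = date_part.split("-")
    hour, minute = time_part.split(".")[:2]
    if ampm == "PM" and hour != "12":
        hour = str(int(hour) + 12)   # convert to a 24-hour clock
    return float(f"20{yy}{MONTHS[mon]}{day}.{hour}{minute}")

print(timestamp_to_amount("28-MAY-24 02.34.56.654321 PM -00:00"))  # 20240528.1434
```

Note the sketch handles the 12 PM edge case; the simple add-12-hours WHEN in the actual map would need a similar guard if loads can land exactly at noon.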
The last part of the puzzle here was to create a simple data grid with all of the point of view members locked except for the year and period. This way users can check and see when the data was last loaded. They can also replicate the point of view and use Smart View, etc.
Friday, April 19, 2024
EPM Cloud Data Integration - Dealing with [Parent].[Child]
Another day, another solution to remember for later.
When moving data from one application to another, shared members can sometimes flow through the integration in [Parent].[Child] notation. If you're loading the child member to the target application and the target doesn't have the same hierarchy, this won't work. Or, as in my case today, when loading entity elimination data from FCC to Planning, only the parent entity is needed.
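Whatever the mapping mechanics, the core of the fix is splitting the qualified name into its parent and child pieces. A Python sketch of the idea (the member names and the handling of unqualified names are hypothetical):

```python
def split_qualified(name: str) -> tuple[str, str]:
    # Split "[Parent].[Child]" notation into (parent, child).
    # A plain, unqualified member name comes back with an empty parent.
    if "].[" in name:
        parent, child = name.split("].[", 1)
        return parent.strip("[]"), child.strip("[]")
    return "", name.strip("[]")

# For the FCC-to-Planning elimination case, keep only the parent piece.
parent, _child = split_qualified("[TotalGeography].[E331]")
print(parent)  # TotalGeography
```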
Thursday, March 28, 2024
FCC - Using Entities that are Shared in Insertion Rules
When working with insertion rules, sometimes there is a need to calculate something only for specific entities. But if that entity is shared - in other words, it exists in multiple hierarchies under multiple parents - there can be problems. Here's how to deal with this common problem.
The need most arises with IF statements. The code usually looks something like this:
IF(@ISMBR("331"))
do some calcs
ENDIF;
This code will pass validation and deploy successfully, but when trying to consolidate, an error is generated.
Clicking the link and drilling into the error message will show this:
Unable to resolve duplicate member name ["331"] on line ____
The problem is the rule doesn't know whether to run for [ParentA].[331], [ParentB].[331], etc. One workaround would be to write the IF statement like this:
IF(@ISMBR("[ParentA].[331]"))
But if the entity gets moved, then that creates another problem.
The solution is to use the @EQUAL function. @EQUAL searches the outline for the specified string within a specified dimension. The result looks like this:
IF(@ISMBR(@EQUAL("331","Entity")))
If an additional entity existed like 331abc that should not be included, then add an additional condition to the IF statement to exclude that entity.
Thursday, March 14, 2024
FCC Constant Currency
An occasional requirement for an FCC implementation is to translate data at multiple currency rates. There's always actuals at the actual rates, but then you can translate actuals at last year's rate, actuals at the forecast rate, actuals at next year's budget rate, etc. These different translations provide the data to show whether operations actually performed well against the comparative data or whether the exchange rate simply worked in their favor.
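As a quick illustration of why these extra translations matter, the year-over-year change can be split into an operational piece and an FX piece (all figures below are made up):

```python
# Hypothetical figures: current-year result in local currency,
# translated at this year's rate versus last year's rate.
local_result = 1_000_000                 # current-year actuals, local currency
rate_actual, rate_last_year = 1.10, 1.05

reported = local_result * rate_actual    # what lands in reporting
constant = local_result * rate_last_year # "actuals at last year's rate"
fx_impact = reported - constant          # portion due purely to FX

print(round(fx_impact, 2))  # 50000.0
```

The constant-currency figure is what lets a reviewer say how much of the variance is real operational performance rather than rate movement.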
There are a couple of different ways to set this up. The first question is whether to use a scenario or a custom dimension to hold the translation versions. If the requirement includes the balance sheet, the output is used only occasionally, or the custom dimensions are already in use, then one or more scenarios is the recommended direction. In the case I'm showing here, none of those apply, so this example will use a custom dimension: the requirement is for the income statement only, the output is used extensively during close and subsequent reporting, and a custom dimension is available.
After creating additional members in the custom dimension to hold the results, the next step is to create a custom translation rule (as shown below). The top half of the rule controls where the exchange rates are sourced. In this case, the rates come from the Actual scenario for the prior year; "current period" is specified, so Dec 23 actuals translate at the Dec 22 exchange rates, Nov 23 at Nov 22, and so on. The bottom half of the screen controls the source and destination members. The left side shows the source data: the no-constant-rate member (custom), descendant accounts of net income, and the Actual scenario only. The right side holds any redirection (target) members. In this case, the results are stored in LastYrRate (so No ConstantRate is translated and then stored in LastYrRate). The account and scenario dimensions don't need redirection, so those fields are left blank and those members simply pass through.
Wednesday, January 17, 2024
Data Exchange - DRCRSplit for Delimited Files
I recently had to solve a problem during an implementation and, as I've done before, I'm adding the solution here so I can find it again.
An incoming trial balance file is delimited and has two columns for the amount: a debit column and a credit column. Each record has a value in one of these columns but not both. If the file were fixed format instead of delimited, the built-in import format expression DRCRSplit would handle the two columns: you specify the midpoint of the overall character count. Delimited files don't have a consistent character count, so the function doesn't work for them.
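For fixed-width files, the midpoint idea behind DRCRSplit can be modeled like this (a toy sketch, not the actual implementation, and I'm not asserting anything about its sign conventions):

```python
def drcrsplit(field: str, midpoint: int) -> tuple[float, str]:
    # Toy model of the fixed-width idea: a value whose text starts
    # left of the midpoint character position is treated as a debit,
    # otherwise as a credit.
    value = field.strip()
    start = field.index(value)     # column where the number begins
    side = "DR" if start < midpoint else "CR"
    return float(value), side

print(drcrsplit("  125.50            ", 10))  # (125.5, 'DR')
print(drcrsplit("            300.00  ", 10))  # (300.0, 'CR')
```

This is exactly what breaks for delimited files: there is no fixed column position, so there is no meaningful midpoint.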
Like maybe some of you reading this, I did a Google search on this topic and found multiple Oracle forum posts, CloudCustomerConnect posts, etc., asking the same question over several years, but I didn't find an easy solution. There was a posted answer that combined the DRCRSplit function with the Column function, but, as someone else pointed out, that solution causes data problems. So, below is what I did. There may be a more elegant way of solving this problem, but this worked for me.
I mapped the debit and credit balance columns to separate attribute fields, and I mapped the account field to the regular amount field. Each record needs a numeric amount; otherwise, it gets dropped on import. The year could probably serve as the placeholder amount if it were available on each record, and I'm guessing one of the amount fields would also work in conjunction with the no-zero-suppress expression.
Next, I used a SQL map on the entity to (a) pass the entity through and (b) populate the amount. (The entity portion isn't shown below.) The AMOUNTX field (the target amount) is populated from the debit column if it is not equal to zero, and from the credit column if the debit is zero. Anything placed in an attribute field is stored as a string, so the TO_NUMBER function converts the string back to a number. If quotes, spaces, or commas in the number needed handling (as thousands separators or decimals), that could be addressed with a format mask on TO_NUMBER or with one of the REGEXP functions.
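Stripped of the Data Management specifics, the amount expression is just a conditional plus a string-to-number conversion. A Python sketch of the same logic (the field names are placeholders, and no sign flip is applied since the post doesn't flip credits):

```python
def drcr_amount(debit: str, credit: str) -> float:
    # Attribute fields come back as strings, so convert them
    # (TO_NUMBER in the SQL map does the same job). Take the debit
    # when it is nonzero, otherwise take the credit.
    d = float(debit) if debit.strip() else 0.0
    return d if d != 0 else (float(credit) if credit.strip() else 0.0)

print(drcr_amount("125.50", ""))  # 125.5
print(drcr_amount("", "300"))     # 300.0
```

Whether credits should be negated on the way in depends on the source system's sign convention; that would be one extra branch in the CASE expression.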