Thursday, July 31, 2014

What are SAP Streamwork and SuccessFactors JAM?

They are both part of SAP's social software portfolio. Both provide environments for enterprise collaboration, driving cooperation within the workforce while sharing data.

SuccessFactors Jam is a social networking and collaboration platform for the enterprise from SAP. Basically, it is a "Facebook"-like platform for enterprises. It enables collaboration between the employees of a company and its partner companies while they work on joint projects, sharing applications, processes, etc. The platform's features span discussions, notifications, follow-ups, content creation (microblogging, blogging, wikis) and sharing, organizing, task management and learning, all presented in a "feed updates"-style user interface. Its main goal is to engage the workforce and boost their performance. The Jam platform supports mobile devices to a large extent. In terms of competitors, Jam is comparable to Microsoft's Yammer. Jam entered SAP's portfolio when SAP acquired SuccessFactors in December 2011.

SAP StreamWork – then there was StreamWork, a solution homegrown at SAP. We can say that it had a broader scope than SuccessFactors Jam thanks to its many integrations with SAP back-end systems like ERP, CRM, SRM and BW. That integration is one of StreamWork's main strengths.

So guess what happened during 2012, while SAP was integrating SuccessFactors as its subsidiary: the two products were merged into one as well. It is called SAP Jam and is SAP's platform for social software. Jam's underlying platform appears to have been taken from SuccessFactors, while many features from StreamWork were built into it.

PS: SAP also started to evaluate the SAP Jam platform for its Massive Open Online Course (MOOC) initiative open.sap.com. The very first course utilizing the Jam platform is "Rapid Implementation of Predictive Analytics with SAP HANA".

Useful links:

Magic of unwanted 2

Here's again a short post on the "magic of the unwanted" topic. This time it is about the naming of variables, which turned out to be something the developer didn't (or did) want :)
















Other posts on the same topic:

Wednesday, July 30, 2014

Comments on SAP IDES

Recently I worked on few installations of SAP IDES systems. In this post I’d like to sum up important information on the IDES systems. 

In general, IDES is a demo system of SAP software. IDES stands for International (or Internet) Demonstration and Education System. Basically, it models an artificial company, the SAP Model Company or "BestRun" (e.g. company code 0005), which has chosen to implement SAP. The system comes with a lot of customizing and data available right after installation. How were the customizing and data prepared? The system is a copy of SAP's internal demo system. As it is also used by SAP itself for demo and training purposes, there are many preconfigured clients with data.

IDES systems come in 2 flavors:

Cross-industry systems – covered by the following solutions: ECC, SCM, CRM, BW, SRM
Industry systems – represented by the following ones: Retail, Defense and Public Security, Mill Products, Consumer Products, Banking


What about new versions? Can I upgrade my IDES? There are no specific patches (support packages, SPs) for IDES. However, manual patching with SPs from the regular releases of the particular solution may be possible.

Downloading IDES is possible, like for every piece of SAP software, through the SAP Service Marketplace (SMP): service.sap.com/swdc

More information:
799639 - IDES - General Information about the usage of IDES systems

My other blog post on IDES topic:

Difference between 1st and 2nd global declarations in BW routines

A savvy BW developer may notice that there are two areas for global data declarations in a transformation's routines. Start/end (and also expert) routines are quite heavily used within transformations. The routines are generated from templates. While coding in ABAP, we have the following areas reserved for us:

1st area for ABAP code:
*$*$ begin of global - insert your declaration only below this line  *-*
... "insert your code here
*$*$ end of global - insert your declaration only before this line   *-*

2nd area ABAP code:
*$*$ begin of 2nd part global - insert your code only below this line  *
... "insert your code here
*$*$ end of 2nd part global - insert your code only before this line   *

3rd area for ABAP code:

*$*$ begin of routine - insert your code only below this line        *-*
... "insert your code here
*--  fill table "MONITOR" with values of structure "MONITOR_REC"
*-   to make monitor entries
... "to cancel the update process
*    raise exception type CX_RSROUT_ABORT.


*$*$ end of routine - insert your code only before this line         *-*


















While the purpose of the 3rd area is clear – it holds the real code that encapsulates the business logic of the routine – the purpose of the other areas is not that obvious. The 3rd area is actually where the routine begins: it is where the END_ROUTINE, START_ROUTINE or EXPERT_ROUTINE method body starts. Why are there two areas for data declarations? If we have a look into the SAP documentation available here or here, we can find out:










This would suggest that if data is declared in the 1st area, it is available across all data packages, while data declared in the 2nd area is only available for the current package. But this may not really be true.

Let's see what else we can say about the first two areas. One theory is that the 1st area is used for data declarations following the ABAP OO paradigm, while the 2nd one is used for declarations in pre-OO (or non-OO) ABAP style. But this again may not be true.
According to an SCN forum post available here, someone got back to SAP regarding this mystery. If we can trust that post, here's what SAP said:

In the first global part you can write declarations or code that you want to be reachable globally in the transformation.
The 2nd global part is used for transformations which were migrated from an update or transfer rule. Routines used there are automatically generated into the 2nd global part.
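A minimal sketch of the documented behavior of the 1st area (the variable name is illustrative, not from the original templates): a declaration placed there effectively becomes an attribute of the generated routine class, so its value survives across data packages.

```abap
*$*$ begin of global - insert your declaration only below this line  *-*
" Illustrative declaration: the value is retained across data packages,
" since the declaration lives at the (global) class level.
DATA: g_package_count TYPE i.
*$*$ end of global - insert your declaration only before this line   *-*

METHOD start_routine.
*$*$ begin of routine - insert your code only below this line        *-*
    " Incremented once per data package; the counter keeps its value
    " between the calls for the individual packages.
    g_package_count = g_package_count + 1.
*$*$ end of routine - insert your code only before this line         *-*
ENDMETHOD.
```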

Sunday, July 13, 2014

Time zone of SAP application server

The time zone of an SAP system is a very important setting. Comparing particular times only makes sense when all the compared times are in the same time zone. An SAP system normally inherits its time zone from the operating system; however, it can be customized in transaction STZAC. The time zone customized in this transaction is valid for all of the system's clients.


Once the system time zone is customized, all conversions, e.g. to the local time of a user or to any other time zone, are done by converting the system time to UTC and then to the desired time zone.

The following ABAP statements use such a conversion:

CONVERT DATE dat
        [TIME tim [DAYLIGHT SAVING TIME dst]]
        INTO TIME STAMP time_stamp TIME ZONE tz.
CONVERT TIME STAMP time_stamp TIME ZONE tz INTO [DATE dat]
             [TIME tim] [DAYLIGHT SAVING TIME dst].

Notice that the time zone for these ABAP statements needs to be declared with type TZNZONE, which refers to table TTZZ for the possible values.
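As a hedged sketch (the variable names and the time zone keys 'CET' and 'EST' are illustrative; valid keys come from table TTZZ), the conversion via UTC could look like this:

```abap
DATA: lv_date    TYPE d VALUE '20140713',
      lv_time    TYPE t VALUE '120000',
      lv_tz      TYPE tznzone VALUE 'CET',
      lv_ts      TYPE timestamp,
      lv_date_us TYPE d,
      lv_time_us TYPE t.

" Combine a CET date/time into a UTC-based time stamp
CONVERT DATE lv_date TIME lv_time
        INTO TIME STAMP lv_ts TIME ZONE lv_tz.

" Convert the time stamp back into the local date/time of another zone
CONVERT TIME STAMP lv_ts TIME ZONE 'EST'
        INTO DATE lv_date_us TIME lv_time_us.
```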

Tuesday, June 17, 2014

Upload/download of file from/to SAP Application Server and Frontend

There is often a need to upload/download data between the user's frontend and SAP's application server. For a high volume of files we usually use an FTP tool. However, from the end user's perspective, if only a few files are involved, there are other ways to do it.

We can create simple ABAP programs for the users, using function modules like GUI_DOWNLOAD and GUI_UPLOAD.

We can also reuse ABAP classes like CL_GUI_FRONTEND_SERVICES with its methods GUI_DOWNLOAD and GUI_UPLOAD.
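A minimal sketch of the class-based variant (the file name and the table content are made up for illustration):

```abap
" Download an internal table of text lines to the user's frontend PC.
DATA: lt_lines TYPE TABLE OF string.

APPEND 'Hello from the application server' TO lt_lines.

cl_gui_frontend_services=>gui_download(
  EXPORTING
    filename = 'C:\temp\demo.txt'   " target file on the frontend
    filetype = 'ASC'                " plain text
  CHANGING
    data_tab = lt_lines ).
```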

However, a much easier method is to use the standard transaction codes:


CG3Y - copying from app server to frontend











CG3Z - copying from frontend to app server




Monday, June 2, 2014

Logical system name has been changed for this system

This is again a very common error, often seen in newly installed or copied BW systems, similar to the "You can only work in client 001" error message.

The issue here is that BW's "myself" system is not properly customized. The relevant table is RSBASIDOC, which carries the assignment of source systems to BW systems and the IDoc types used for the connection between the systems.

Here’s full error message:


Logical system name has been changed for this system Message no. R3206

Diagnosis
The logical system name of this system is EH6CLNT800. However, this system was originally created with the logical system name T90CLNT090. It is not permitted to change the logical system name, as connections to other systems will be damaged beyond repair.

System Response
The transaction is canceled.

Procedure
Change the name of the logical system (table T000) for client  back to T90CLNT090. This enables you to continue working with the system.

Note: If you really want to change the logical name, read the information in SAP Note 886102.


To solve this, it is necessary to adjust the data in table RSBASIDOC. Basically, the error is triggered by function module RSA_LOGICAL_SYSTEM_BIW_GET. While the FM checks the system type of the "myself" entry (field SRCTYPE = 'M' in the same table), it finds an inconsistency: the logical system name corresponding to "myself" in table RSBASIDOC is not the same as the BW logical system in table T000-LOGSYS.

How to correct this:

You can use FM RSAP_BIW_DISCONNECT, which removes the old/invalid "myself" BW system entry from the RSBASIDOC table. Afterwards you can use RSAP_BIW_CONNECT to get table RSBASIDOC populated with the proper logical name. If those FMs do not help, delete the 'M' entry directly from table RSBASIDOC, then run FM RSA_LOGICAL_SYSTEM_BIW_GET, which recreates a new/valid entry.

You can only work in client 001

You may encounter this error message in a freshly installed SAP system; newly copied systems also suffer from it. The message pops up when we attempt to run transaction RSA1. The only thing it tells us is that the current client is not the one intended to run BW.


The solution is simple: the proper client needs to be entered into one of the customizing tables. The table is RSADMINA and the field is BWMANDT. Just make sure that the client where BW is supposed to run is the same as the value of this field. Afterwards, transaction RSA1 works.
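A quick way to verify the setting, sketched here under the assumption that you just want to compare the customized BW client with the one you are logged on to:

```abap
" Read the BW client from the customizing table and compare it
" with the current logon client (sy-mandt).
DATA: lv_bw_client TYPE rsadmina-bwmandt.

SELECT SINGLE bwmandt FROM rsadmina INTO lv_bw_client.

IF lv_bw_client <> sy-mandt.
  WRITE: / 'BW client is', lv_bw_client,
           'but you are logged on to', sy-mandt.
ENDIF.
```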


Monday, April 14, 2014

Heartbleed – bug in OpenSSL, is SAP affected?

Last week, quite a blast about the Heartbleed bug swept over the internet and the major media. The bug is a serious vulnerability (CVE-2014-0160) in the OpenSSL cryptographic library. It allows a potential attacker accessing a (web) server that uses the OpenSSL library to read its memory and thereby obtain information that was never intended to be exposed. To see what the Heartbleed bug really is, refer here. There are thousands of servers on the internet using the library. The Heartbleed bug also has an impact on enterprise software, as the library is very popular within enterprises as well, SAP software included.

Most SAP solutions do not use the OpenSSL library; they use the SAP Cryptographic Library (called CommonCryptoLib in the most recent releases). As per SAP's statement on SMP's security page, there are no indications that major products like NetWeaver or HANA are affected; however, the investigation is still ongoing. For the BusinessObjects solutions there is even SAP Note 2003582 – "How does The Heartbleed Bug (OpenSSL vulnerability) affect SAP BusinessObjects XI3.1 and Business Intelligence products 4/4.1". The Note discusses several BusinessObjects solutions. According to it, BusinessObjects is not affected unless customers enable SSL using APR in the native Tomcat library.

I would suggest watching SAP updates on this topic, e.g. via Security Notes.

For full coverage of Heartbleed bug see following sites:

Line Item Dimension Flag

There are several possibilities for improving the performance of BW's InfoCubes. One of them is to flag a particular dimension as a Line Item Dimension. This can be done for a dimension with exactly one characteristic assigned to it, a so-called degenerated dimension. With the flag set, no dimension table is created; the SID table of that characteristic acts as the dimension table. Access to the data is then fast, as no real dimension table is present. And because the model is simplified, loading into that dimension is faster as well, since no DIM IDs for a dimension table have to be generated.











An issue may arise when you make changes, e.g. adding or deleting InfoObjects (IOs) from such a Line Item Dimension. It is clear that no other IO can be added to such a dimension. However, let's imagine that I want to deactivate the flag and add another IO; for some reason this may not be possible. Or I want to get rid of the whole dimension, but the system tells me that I have to remove the IO first, which again is not possible.

In such cases, removal of the flag can be done at the database level. The flag itself is stored in table RSDDIME (used in DB view RSDDIMEV), in the field LINITFL (Line Item Dimension). By removing the flag value 'X', the particular dimension can be deleted from the cube.
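To check which dimensions of a cube carry the flag before touching anything, here is a hedged sketch (the cube name ZSALES01 is made up):

```abap
" List the line item dimensions of a given InfoCube (active version).
DATA: lt_dime TYPE TABLE OF rsddime.

SELECT * FROM rsddime INTO TABLE lt_dime
  WHERE infocube = 'ZSALES01'
    AND objvers  = 'A'
    AND linitfl  = 'X'.
```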



Disclaimer: Notice that this blog post discusses activities done in the debugger while changing the values of table fields. Such activities are usually not supposed to be executed. Bear in mind that you may cause serious harm to your system. If you decide to proceed, do it only with a real understanding of all the consequences, and only on a development and/or test system.

Sunday, April 13, 2014

SAP BW Easy Query

The concept of Easy Queries in SAP BW was introduced in BW 7.3, particularly in SP05, although they were available but not fully functional in lower SPs like SP02. Another prerequisite for using Easy Queries is a BEx Query Designer with revision level 671 or higher.

What is an SAP BW Easy Query? It allows external access to BEx queries; by external access it is meant that the BEx query can be consumed by a web service. There are other interfaces that can be used to connect 3rd-party tools to BEx queries, such as OLE DB for OLAP (ODBO), OLAP BAPI (Business Application Programming Interface), XML for Analysis (XML/A), OData and, finally, Easy Query.

First of all, the query needs to be enabled for this access. This can be done in BEx Query Designer: go to the query properties and, on the Advanced tab page, set the checkbox "By Easy Query" in the section "Release for External Access".


Once the query is enabled as an Easy Query, the BW system generates the configuration needed to access it in the background; in particular, an SOA configuration profile is generated. To see the generated objects you can use the Easy Query Manager (transaction EQMANAGER) in the BW system. Note that the transaction is Web Dynpro ABAP based (wda_eq_manager), so it runs in your web browser via the HTTP server built into NetWeaver.






Afterwards, the query can be used as a SOAP service. There are certain limitations in this scenario; see the full documentation here.

One common error you can encounter after setting the Easy Query flag in BEx Query Designer is that such a query fails when you attempt to transport it. The following error appears during the import phase of the transport:

The easy query is wrongly or incompletely configured Message no. BW_BICS_EQ029


The easiest way to solve it is to uncheck the flag and recollect the query into a new transport request. However, if your intention was to use the query for Easy Query purposes, you need to check the query's generated objects in transaction EQMANAGER.

Thursday, April 10, 2014

Scheduling Process Chains with restrictions

Large SAP BW landscapes usually use external tools to schedule their process chains, which load the data and perform other activities needed to keep the BW system up and running. The reason these 3rd-party scheduling tools exist and are used is that they overcome the limits of SAP's standard job scheduling. The best-known tools are Batchman, Control-M, UC4, Redwood, etc. But sometimes it is not necessary to turn to an external tool, and it is more convenient to use the standard functionality.


While scheduling process chains, there are a few functions which are not very well known. In the Start Time dialog box we have the usual options, the same as for every other SAP job. Moreover, there is a Restrictions pop-up available.

This can be used in cases where we schedule a particular process chain to be executed on a periodic basis but with some exceptions. By utilizing SAP's factory calendar (transaction SCAL) we can set up, e.g., days on which the chain should not be kicked off. Using these calendar restrictions we can implement scenarios like "do not execute the chain on weekends/holidays", etc.



Monday, April 7, 2014

Automatic restart of failed PChain step and other PC improvements

Recently I ran into SAP Note 1915483 – Process chain enhancements. The Note deals with improvements to process chains which were delivered by SAP based on ideas submitted by SAP customers. The program that allows an SAP customer/partner/etc. to submit their ideas is described here: How to influence SAP.

The note addresses the following situations which may occur within a PC:

1. Automatic restart of a PC's step – Once a particular PC gets stuck in one of its processes, the only way to restart the process is manual interaction by an administrator. There are situations where the system could attempt to restart the step by itself, e.g. when the data target is locked by another load.

To implement this function you need to set the "automatic restart" properly on the particular process. The property is called "Automatic Repetition" and you can set it via the context menu of the process. It has the following two parameters: Seconds – the minimum time the process waits before a restart; No. of Repetitions – the maximum number of restart attempts.

2. Automatic reset of a previously failed PC run – When a PC is executed, its previous execution may have failed. This function removes the failed instances of the previous PC runs.

To use this function, go to your PC, switch to Change mode, and in the menu "Process chain" go to the item Attributes, choose the option "Reset Previous Run" and set the checkbox "Automatically reset failures in previous run".

3. Ability to stop a PC with a single click – This function unschedules all pending jobs and kills all loads (via InfoPackages or DTPs). This could already be done via custom programming using the PC API (see all the PC API function modules here). Now the function is available via the PC maintenance user interface (e.g. transactions RSPC or RSPC1).

To use this function while monitoring the PC, go to the menu "Execution" and choose the item "Stop current run immediately". You can do the same in the log view or from the planning view.

4. Ability to temporarily skip a PC's step – With this you can skip a particular step within the PC.

To use this function, go to the context menu of the process while in plan mode and choose the option "Skip Process".

How to install the Note?

Either by implementing an SP (e.g. in the case of SAP NetWeaver BW 7.31 it is SP10) or manually, following the instructions in the Note.

Enjoy these functions! I found them very useful.

Sunday, April 6, 2014

Performance optimization: how to check if index is used?

While developing complex BW transformations where data manipulation is done in ABAP, performance is crucial. The transformations must perform as fast as possible, because in the future there may be a lot of data, and even if a transformation runs smoothly now, that may not be the case later.

It is very common in BW to use look-ups of data (e.g. from DSO objects) while a data transformation happens. When we retrieve the data from a DSO into a look-up table, we should make the SELECT statement use an index of the DSO's table. The index makes data retrieval much faster than without it. Every transparent database table has a so-called primary index, which comprises all the key fields of the table.
To make a SELECT statement use the index, you need to ensure the following: the WHERE condition of the SELECT must contain as many fields of the index as possible, starting with its leading fields – in the case of the primary index, the key fields of the table.


E.g. my DSO (C*) below (its active table is /BIC/AC*00) has the following primary key fields: 0MATERIAL, 0PLANT, 0ORDERITEM, 0BUS_EVENT, 0PRODORDER, 0FISCVARNT



     

















Now let's have a look at two SELECT statements: one which utilizes the primary index on top of the C* DSO and one which doesn't.

SELECT plant prodorder gr_qty planordqty
  INTO TABLE lt_mat
  FROM /bic/acxxxx00
  FOR ALL ENTRIES IN result_package
  WHERE prodorder = result_package-prodorder
    AND plant     = result_package-plant.

SELECT plant prodorder gr_qty planordqty
  INTO TABLE lt_mat
  FROM /bic/acxxxx00
  FOR ALL ENTRIES IN result_package
  WHERE material  = result_package-material
    AND plant     = result_package-plant
    AND prodorder = result_package-prodorder
    AND fiscvarnt = result_package-fiscvarnt.


Can you guess which SELECT it is? It is the second one. In the first one, the leading index field 0MATERIAL is missing from the WHERE condition; therefore the database optimizer will not use the primary index for the search operation and a full table scan is performed. In the second SELECT, four out of the six index fields are used in the WHERE condition, so the optimizer will use the primary index.


Finally, we would like to know whether the index was really used during our BW load. What can be done is to start an SQL trace before kicking off the load. This is done in transaction ST05. You can set the trace with a filter on, e.g., your user name and the table from which the data is retrieved. Then just run the load. After the load is finished, deactivate the trace and go to Display Trace in ST05. Display the trace result and, for operations like OPEN or FETCH, click the Explain button in the toolbar. You will get a screen similar to the screenshot below. The picture on the left side shows the situation with my 2nd SELECT – the index was used – and the one on the right side my 1st SELECT – no index usage. We can clearly recognize by the keywords INDEX RANGE SCAN that the index was used. Also notice what the estimated costs (e.g. CPU) are for both SELECTs.


















One more point regarding how the system finds suitable indices for a particular SQL SELECT: if the WHERE condition is written in the same sequence as the key fields are specified, the index is found faster. Therefore, always try to align your WHERE condition with the sequence of the index's fields.

More information can be found:


Thursday, April 3, 2014

Benefits of #BWonHANA

This blog is originally posted on SCN:

There are many materials out there on SCN discussing the benefits of having BW run on HANA as its database. There is even a dedicated space: SAP NetWeaver BW Powered by SAP HANA. In this post I'd like to take a look at these benefits from a pure BW point of view. My motivation is to have arguments for my clients when they consider migrating their BW systems to the HANA database. The assumption here is that no other change/optimization is done while migrating from the current DB to the HANA DB.
I hope I captured the topics discussed below correctly. However, my knowledge of HANA / "BW on HANA" is only theoretical at this time, therefore I appreciate your comments and/or corrections to my findings.

1. New in-memory DB
Once you migrate your BW from the current DB to the HANA DB, you basically get a new in-memory DB and all its features right away. This means that without any reimplementation of your existing data flows you can use the power of the in-memory HANA engine.
As HANA is an in-memory DB, aggregates, indices and other materialized views on the data are in most cases no longer needed in the BW system, which means administration and maintenance of the whole BW system are easier.
HANA speeds up very resource-consuming DB operations like data loading and DSO activation thanks to its in-memory nature; I/O operations are faster. Similarly, no roll-ups on cubes after a cube is loaded are needed, and no Attribute Change Runs (ACRs) are necessary after master data changes.

2. Data Flows/Transformations
There is no need to migrate your BW 3.x-style data flows to the 7.x style to run the BW system on the HANA DB; notice that a 7.x data flow is mandatory for HANA-optimized InfoProviders only. Regarding existing transformations, certain parts of the standard data loading process in BW are accelerated by HANA. Especially in BW 7.4, a standard transformation runs differently than in older releases of BW: the system pushes the transformation processing down to the HANA database. However, this is only valid for transformations where no custom ABAP routines are used.

3. InfoProviders
By running BW on HANA you get the following InfoProvider types. These are not new types of InfoProviders, but they are optimized for use on HANA.

HANA-optimized DSO – notice that even though "HANA-optimized DSO" is a new term, it has already become obsolete. Earlier, a DSO could be converted into this type of DSO after the migration to HANA. This is not the case anymore: as of NetWeaver BW 7.30 Support Package 10, HANA-optimized activation is supported for all standard DataStore objects by default, so no conversion of standard DSOs is needed.

With respect to the Support Package level, there are the following DSO architectures:
  1. As of BW 7.30 SP05-09: the change log of the DSO is provided by a HANA calculation view. This means there is no persistence of the change log data, which speeds up data activation and SID creation.
  2. As of BW 7.30 SP10: there is a database table for the change log. By this we gain performance while loading the data from the DSO into further InfoProviders, as less resource and memory consumption is achieved.
More information can be found here: DataStore Objects in SAP NetWeaver BW on SAP HANA

HANA-optimized InfoCubes – a classic BW InfoCube has two fact tables (F – the normal one, and E – the compressed one) and several dimension tables as per the cube setup. HANA-optimized cubes are flat: there are no dimension tables and there is only one F table for facts. This means InfoCubes running on HANA gain faster data loads, their data model is simplified, remodeling is easier (e.g. adding/removing characteristics/key figures), and no changes to the cubes are required after the migration to HANA.
Within BW on HANA, cubes become even less relevant from a data storage perspective. If there is no business logic between the DSO and the cube layer, there is no need to have a cube layer at all; reports can run directly on top of the DSOs. Of course, this needs to be approached by checking the data flows one by one. Where it applies, the data model gets simplified. Be aware that there are still cases where cubes are needed, to name a few: usage of non-cumulative key figures in a cube, external write access to a cube, Integrated Planning.
More information can be found here: InfoProviders in SAP NetWeaver BW powered by SAP HANA

4. New InfoProviders as of BW 7.3
A bunch of new InfoProvider types were introduced in BW 7.3. Let's see how they are supported while BW runs on HANA.

Semantic Partitioned Object (SPO) – An SPO is used to store very large volumes of data, partitioned based on a business object. There are two cases, depending on whether the SPO is based on a DSO or on a cube. In the case of a cube, it gets HANA-optimized automatically. In the case of a DSO, you may want to convert the SPO to HANA-optimized; see note 1685280 – SPO: Data conversion for SAP HANA-optimized partitions.

CompositeProvider – Enables the combination of InfoProviders of type cube, DSO and analytic indexes (such as those from BWA and the Analysis Process Designer (APD)) via UNION, INNER and LEFT OUTER JOIN. Such a scenario runs faster in BW on HANA, as the UNION/JOIN operations are executed in HANA and not on the application server.

HybridProvider – Used for modeling near-real-time data access. It is a combination of two InfoProviders: one for historic data (e.g. a cube) and one for current real-time data (e.g. a DSO loaded via a Real-Time Data Acquisition (RDA) type of DataSource). The same rules as mentioned above apply here: the cube is automatically HANA-optimized and the DSO stays a standard one, as it was before the HANA migration.

VirtualProvider – Based on either a Data Transfer Process (DTP), BAPIs or a function module; used e.g. for reconciliation of the data loaded into BW via a normal staging data flow against the source system. Such a VirtualProvider runs in the BW on HANA environment as well.
Another case of a VirtualProvider is one with a reference to a HANA model; by this, a HANA model, e.g. an analytic or calculation view, is exposed as a BW InfoProvider.

TransientProvider – As it has no persistent BW metadata (nothing is visible in BW's Data Warehouse Workbench), there is nothing to be optimized by HANA. A TransientProvider is actually used to consume a HANA model (analytic or calculation view) which is published into BW (transaction RSDD_HM_PUBLISH). So if you have scenarios with TransientProviders, they should work in BW on HANA as well.

Analytic Index (AI) – A data container in BWA that stores the data in a simple star schema format as facts and characteristics (dimensions) with attributes. The data for an AI is prepared by the Analysis Process Designer (APD).
Moreover, an AI can be connected to a TransientProvider: a HANA model can be published in BW as an AI, and a TransientProvider is then generated on top of it. In scenarios where the data changes very frequently, when the HANA model is changed, the AI is adjusted automatically.

Snapshot Index (SI) – If a BEx query is marked as an InfoProvider in BWA, an index called a Query Snapshot Index (QSI) is created. Such an SI for a query as InfoProvider, as well as an SI for a VirtualProvider, is still supported in BW on HANA.

5. Process Chains
There are a few process types that are obsolete in a BW system running on HANA: Attribute Change Runs (ACRs), aggregate roll-ups, cube roll-ups, and a cube's index deletion/creation before/after the load. Existing chains containing these processes will run without errors; the processes will simply not be executed. However, a clean-up is advised after the migration to HANA.

6. Queries
BEx queries stay as they are and no change is needed. At query runtime, HANA leverages the column store and in-memory calculations as the engine for query acceleration. The data is not replicated (as it is in the case of an aggregate or BWA) – the query runs directly against the primary data persistence.
Therefore, queries should run at least as fast as before the HANA migration with BWA, and better runtimes are of course anticipated without any changes to the queries themselves.

7. Planning
When it comes to SAP BW planning applications, they traditionally run on BW's application server. With HANA in place, the planning functions run in-memory. Therefore, with no changes to planning models or planning processes, a performance boost is expected in BW on HANA in areas such as dis/aggregation, copy, delete, set value, re-post operations, FOX formulas, conversions, revaluation, etc.

8. Authorization
Authorization and all activities related to user access are managed by the BW application, therefore nothing changes here with the migration to the HANA DB. All authorization concepts used before remain valid and in use. Going forward, if you also use pure HANA objects (e.g. HANA models: attribute/analytic/calculation views), these are managed by HANA privileges. They are less detailed compared to BW authorizations, therefore if you need complex authorizations you need to consume the HANA models via BW InfoProviders such as a TransientProvider or VirtualProvider.
Notice that your authorizations must already use BW 7.x technology prior to the DB migration to HANA.


Other sources of information on BW on HANA: