Search This Blog

Wednesday, April 21, 2010

Schedule V3 Run In SAP R/3

Description:


V1 - Synchronous update

V2 - Asynchronous update

V3 - Batch asynchronous update



These are different work processes on the application server that take the update LUW (which may contain various database-manipulating SQL statements) from the running program and execute it. They are separated in order to optimize transaction-processing capability.
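As a minimal sketch of how a program hands its database changes over to the update task (the function module Z_UPDATE_PO and its parameters are hypothetical; whether a module runs as V1 or V2 is set in its attributes in SE37, not in the call itself):

DATA: ls_ekko TYPE ekko,                    " PO header work area
      lt_ekpo TYPE STANDARD TABLE OF ekpo.  " PO item table

* ... fill ls_ekko and lt_ekpo from the user's input ...

* register the (hypothetical) update module; nothing runs yet
CALL FUNCTION 'Z_UPDATE_PO' IN UPDATE TASK
  EXPORTING
    is_ekko = ls_ekko
    it_ekpo = lt_ekpo.

COMMIT WORK. " closes the LUW and hands it to an update work process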



Synchronous Updating (V1 Update)-->>

The statistics update is made synchronously with the document update.

If problems occur during updating that terminate the statistics update, the original documents are NOT saved. The cause of the termination should be investigated and the problem solved; the documents can then be entered again.



Asynchronous Updating (V2 Update)-->>

With this update type, the document update is made separately from the statistics update. A termination of the statistics update has NO influence on the document update (see V1 Update).



Asynchronous Updating (V3 Update) -->>

With this update type, the statistics update is likewise made separately from the document update. The difference from the V2 update lies in the time schedule: if the V3 update is active, the update can be executed at a later time.



If you create or change a purchase order (ME21N/ME22N), then when you press 'Save' and see a success message ('PO ... changed'), the update to the underlying tables EKKO/EKPO has already happened (before you saw the message). This update was executed in the V1 work process.



There are statistics-collecting tables in the system which capture data for reporting. For example, LIS table S012 stores purchasing data (the same data as EKKO/EKPO, stored redundantly but in a different structure to optimize reporting). These tables are updated with the transaction you just posted, in a V2 process. Depending on system load, this may happen a few seconds after you saw the success message. You can see the pending V1/V2/V3 update requests in SM13 (SM12 shows the related lock entries).



V3 is specifically for BW extraction. The update LUWs for these are sent to V3 but are not executed immediately; you have to schedule a job (e.g. via the job control in LBWE) to process them. This, again, is done to optimize performance.



V2 and V3 are separated from V1 because they are not as real-time critical (they update statistical data). If all these updates were put together in one LUW, system performance (concurrency, locking, etc.) would suffer.



The serialized V3 update is called after V2 has happened (this is how the code running these updates is written), so if a transaction produces both V2 and V3 updates and the V2 update fails or is still waiting, the V3 update will not happen yet.



By the way, 'serialized' V3 is discontinued now; in later releases of the PI (plug-in) you will have only unserialized V3.



In contrast to V1 and V2 updates, no single documents are updated; the V3 update is therefore also described as a collective update. Four sets of tables play a role here:



1. Application tables (R/3 tables)

2. Statistical tables (for reporting purpose)

3. Update tables

4. BW queue



Statistical tables are for reporting on R/3, while update tables are for BW extraction. Is data stored redundantly in these two (three, if you include the application tables) sets of tables?



We can say: yes, it is, because of the following.



The difference is that update tables are temporary: V3 jobs continually clear and refresh them (as I understand it), whereas statistics tables keep accumulating all the data. Update tables can be thought of as a staging place on R/3 from which data is consolidated into packages and sent to the delta queue (by the V3 job).



Update tables can be bypassed (if you use 'direct' or 'queued' delta instead of V3) to send the updates directly to the BW delta queue. V3 is, however, better for performance, so it is one option among others, and it is the one that uses update tables.



Statistical tables have existed since the pre-BW era (for analytical reporting), and they continue to be used when customers want their reporting on R/3.



The structure of the statistical tables might differ from that of the update tables/BW queue; so, even though they are based on the same data, they might be different subsets of the same superset.



V3 collective update means that the updates are processed only when the V3 job has run. I am not sure about 'synchronous V3'; do you mean serialized V3?



At the time of the OLTP transaction, the update entry is made in the update table. Once you have posted the transaction, the entry sits in the update table waiting for the V3 job to run. When the V3 job runs, it picks up these entries from the update table and pushes them into the delta queue, from where the BW extraction job extracts them.

BW Statistics

Description:


BW Statistics is nothing but the SAP-delivered MultiProvider and the InfoCubes beneath it, which collect statistics about the objects you have developed. We have to enable and activate BW Statistics for the particular objects whose statistics we want to see, so that the required data is gathered. This in itself will in no way improve performance; but we can analyze the statistics data and, based on it, decide on ways to improve performance, i.e. setting the read mode, compression, partitioning, creation of aggregates, etc.



BW Statistics is a tool

-for the analysis and optimization of Business Information Warehouse processes.

-to get an overview of the BW load and analysis processes



The following objects can be analyzed here:

Roles

SAP BW users

Aggregates

Queries

InfoCubes

InfoSources

ODS

DataSources

InfoObjects

The BW Statistics sub-area is the more important of the two:

1. BW Statistics

2. BW Data Slice

BW Statistics data is stored in the Business Information Warehouse.



This information is provided by a MultiProvider (0BWTC_C10), which is based on several BW BasisCubes.



OLAP (0BWTC_C02)

OLAP Detail Navigation (0BWTC_C03)

Aggregates (0BWTC_C04)

WHM (0BWTC_C05)

Metadata (0BWTC_C08)

Condensing InfoCubes (0BWTC_C09)

Deleting Data from an InfoCube (0BWTC_C11)

Use BW Data Slice to get an overview of the requested characteristic combinations for particular InfoCubes and of the number of records that were loaded. This information is based on the following BasisCubes:

-BW Data Slice

-Requests in the InfoCube

BW Data Slice

BW Data Slice contains information about which characteristic combinations of an InfoCube are to be loaded and with which request, that is, with which data request.



Requests in the InfoCube

The InfoCube 'Requests in the InfoCube' does not contain any characteristic combinations. You can, however, create queries on this InfoCube that return the number of data records for the corresponding InfoCube and for the individual requests. The data flow falls into the following areas:

- data load / data management

- data analysis

Migration of BI 3.5 Modeling to BI 7.x

Use Of Scenario : This document gives us a step-by-step guide to migrating 3.5 modeling to the new BI 7.x. We have to migrate all the modeling to BI 7.x because the Business Content delivers 3.x modeling with transfer rules, update rules, an InfoSource and the old (3.5) DataSource.




Description:

Prerequisites for the migration.



Copy the InfoProvider to a Z InfoProvider.



Copy all the routine code (transfer/update rules) to separate documents, to be on the safe side, because after migration all the ABAP code is shifted to OO (object-oriented) code.



Please make sure that the DataSource is migrated last.



Please find the step-by-step guide with screenshots below.



1. First you have to copy the InfoProvider and make another copy with a Z name (e.g. original name 0FIGL_O02 becomes ZFIGL_O02). Give it the Z name and also put it in the Z InfoArea.

2. Then first we have to migrate the Update rule with Z info source. Please make a separate copy off transformation routine will require later on. Like Interest Calculation Numerator Days 1 (Agreed) KF



PROGRAM UPDATE_ROUTINE.

*$*$ begin of global - insert your declaration only below this line *-*
* TABLES: ...
* DATA: ...
*$*$ end of global - insert your declaration only before this line *-*

FORM compute_data_field
  TABLES   MONITOR STRUCTURE RSMONITOR "user defined monitoring
  USING    COMM_STRUCTURE LIKE /BIC/CS80FIAR_O03
           RECORD_NO LIKE SY-TABIX
           RECORD_ALL LIKE SY-TABIX
           SOURCE_SYSTEM LIKE RSUPDSIMULH-LOGSYS
  CHANGING RESULT LIKE /BI0/V0FIAR_C03T-NETTAKEN
           RETURNCODE LIKE SY-SUBRC
           ABORT LIKE SY-SUBRC. "set ABORT <> 0 to cancel update

*$*$ begin of routine - insert your code only below this line *-*
* fill the internal table "MONITOR" to make monitor entries
* result value of the routine: days between clearing and net due date
  IF COMM_STRUCTURE-FI_DOCSTAT EQ 'C'.
    RESULT = COMM_STRUCTURE-CLEAR_DATE - COMM_STRUCTURE-NETDUEDATE.
  ELSE.
    RESULT = 0.
  ENDIF.
* if the return code is not equal to zero, the result will not be updated
  RETURNCODE = 0.
* if ABORT is not equal to zero, the update process will be cancelled
  ABORT = 0.
*$*$ end of routine - insert your code only before this line *-*

ENDFORM.
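For comparison, after the migration the same logic ends up as a method of a generated routine class in the 7.x transformation. The sketch below is only an approximation: the generated method and parameter names vary per transformation, so treat the names as illustrative, not as the exact generated code.

METHOD compute_nettaken.
* 7.x field routine (schematic): same logic as the 3.x FORM above,
* with COMM_STRUCTURE replaced by the generated SOURCE_FIELDS structure.
  IF source_fields-fi_docstat = 'C'.
    result = source_fields-clear_date - source_fields-netduedate.
  ELSE.
    result = 0.
  ENDIF.
ENDMETHOD.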



Then right-click the update rules, Additional Functions ---> Create Transformation.

Then use the 'Copy InfoSource 3.x to New InfoSource' option to make a new copy of the InfoSource.

Give a Z name for that InfoSource; this creates the new copy of the InfoSource.



3. Then map and activate (most of the fields are mapped automatically). Then right-click the transfer rules, Additional Functions --> Create Transformation. Assign the newly created InfoSource via the 'Use Available InfoSource' option. Then map and activate.



4. Then right-click the DataSource, click Migrate, and choose 'With Export'. Please select only 'With Export'.



5. Now the migration part is completed; have a look at the routine code that was provided.



Some Tips:

Do not use:

DATA:BEGIN OF itab OCCURS n,

fields...,

END OF itab.



Replace it with:

TYPES:BEGIN OF line_type,

fields...,

END OF line_type.

DATA itab TYPE TABLE OF line_type INITIAL SIZE n.



Internal tables with header lines are not allowed. (The header line of an internal table is a default work area that the system uses when looping through the table.)



Short forms of internal table line operations are not allowed. For example, you cannot use the syntax INSERT TABLE itab; however, you can use INSERT wa INTO TABLE itab.



Transformations do not permit READ itab statements in which the system reads values into the header line.

For example, the code READ TABLE itab. is now outdated; use READ TABLE itab WITH KEY ... INTO wa instead.



Calling external subroutines using the syntax PERFORM FORM(PROG) is not allowed.

In this example, FORM is a subroutine in the program PROG.
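Putting these tips together, here is a small self-contained sketch of the header-line-free style that transformations expect (the line type and values are made up for illustration):

TYPES: BEGIN OF line_type,
         matnr TYPE c LENGTH 18,
         menge TYPE i,
       END OF line_type.

DATA: itab TYPE STANDARD TABLE OF line_type,
      wa   TYPE line_type.

wa-matnr = 'MAT-001'.
wa-menge = 10.
INSERT wa INTO TABLE itab.       " allowed: explicit work area, no header line

READ TABLE itab INTO wa WITH KEY matnr = 'MAT-001'.  " not: READ TABLE itab.
IF sy-subrc = 0.
  wa-menge = wa-menge + 5.
  MODIFY TABLE itab FROM wa.     " update via the work area
ENDIF.

LOOP AT itab INTO wa.            " loop with an explicit work area
* ... process wa ...
ENDLOOP.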

BW Performance Tuning

Use Of Scenario : This is an important scenario. With a performance review or performance tuning project we may be able to avoid unnecessary investments in additional hardware through state-of-the-art BW system design, server and database tuning.




Description:



Fast and reliable access to your information is one of the key success factors for any Business Intelligence or Data Warehousing application. Unfortunately performance tuning is one of those aspects that is often overlooked during implementations.



With a performance review or performance tuning project you may be able to avoid unnecessary investments in additional hardware through state of the art BW system design, server and database tuning. We see that most vendors and customers solve their performance issues by extending their hardware capabilities. element61 believes that an integrated architecture and a well performing data model are at least as important to create the expected performance and to decrease the cost.



We have a very experienced team on this topic and we are specialized in:



•State of the art SAP BW applications architecture. Our methodology has incorporated generally accepted best practices from data warehousing and Business Intelligence into a framework that is SAP BW specific.



•The dimensional model of InfoCubes is the most underestimated key factor that influences the performance of reporting and data loading. element61 has developed the Dimensional Modeling Optimizer, a SAP BSP application for BW that significantly reduces the development time of the optimal dimensional model of InfoCubes and that turns the art of data modeling into a science.



•Performance tuning of SAP BW systems is one of our specialties. It requires a specific mix of competencies in areas like database tuning, SAP BW system configuration and application architecture.



Increased end-user satisfaction, a better acceptance of your BW application, lower hardware and maintenance costs are just a few benefits worth mentioning. A well performing Business Warehouse application will enable executives and managers to make sound business decisions on time.



Performance Management:

element61 are firm believers and early adopters of hosted Business Intelligence & Performance Management solutions. We believe that in current BI & CPM initiatives too much effort, time and money is lost in setting up (and maintaining) the environments, while focus of management should be on data definition, data modeling, user requirements, data quality and analysis of information. Software-as-a-Service will also dramatically change the face of the BI & CPM industry.



Hosted Business Intelligence & Performance Management solutions take away the hurdles of:

•Selecting the right hardware

•Investing in the right hardware

•Selecting & investing in the right Operating System

•Selecting & investing in the right Database Management System (RDBMS)

•Installation of OS & RDBMS software

•Installation of the BI or CPM software

•Configuration of the BI or CPM software

•Integration with OS & RDBMS

•Security setup and maintenance

•Performance monitoring & tuning

•Support for keeping the systems "running"

•Patching of software

•Upgrading & migration of software & content

We proactively invest in pioneering in this area, regardless of the BI & CPM technologies you want to use. These can be the leading Performance Management suites or pure-play hosted BI & CPM solutions ( like Birst , Pivotlink, ... ). Also hardware, operating system and RDBMS can be hosted by your organisation and managed by us, hosted by a dedicated hosting company, or based on components "in the cloud".



element61 uses its vast experience to innovate in ways to more quickly deliver on the promise of Performance Management.

Integrating BO-Xcelsius file with BO-Crystal Reports

Use Of Scenario : To display a formatted report (Crystal Report) with the Xcelsius file embedded within the Crystal report.




Step Wise process:



Introduction:

This document shows how to display a formatted report (Crystal Report) with an Xcelsius file embedded within it. Only Crystal Reports 2008 supports embedding a Flash file. The Xcelsius file (SWF) present within the Crystal report can be operated on the same page. Here an Excel sheet acts as the database for the Crystal report; the database for the Crystal report can be anything, e.g. SAP R/3 or BW data.



Process Steps:

1. Create an Excel database with some sample data and save it to your local disk.


2. Now the Xcelsius file has to be designed and the SWF file generated. Open Xcelsius and place a List Box, a Column Chart and a Gauge on the canvas. The List Box will display all the states; the Column Chart will display the population in 2009 and 2010 of the selected state; and the Gauge will point to the total population of the selected state.



3. Select the List Box and apply the user-defined properties.



4. Map the Label ranges by selecting column "A". Select the Insertion Type "Filtered Rows". Map the Source Data ranges by selecting columns "A", "B", "C" and "D". Map the Destination ranges by selecting columns "E", "F", "G" and "H".



5. Select the Column Chart and apply the properties accordingly.



6. Create two series and name them "2009" and "2010". Select the 2009 series and map cell "F1" to Values (Y); select the 2010 series and map cell "G1" to Values (Y). Map the Category Labels (X) to cell "E1".



7. Select the Gauge and set the properties we have to define.



8. Map the Data to cell "H1". Change the Maximum Limit to "500". In the Alerts tab, check "Enable Alerts" and change "As Percent of Target" to 500.



9. Open the Data Manager and add a "Crystal Report Data Consumer" connection, then map the cells as follows: map the Row Header ranges by selecting column "A"; map the Data ranges by selecting columns "B", "C" and "D". Save the Xcelsius file (.XLF) and export it as SWF.



10. Now open Crystal Reports 2008 and select the Report Wizard. A new connection has to be created, using the Excel file mentioned above as the database.



11. We have to select "Access/Excel" as the new connection and specify the path of the Excel file, setting the Database Type to "Excel 8.0". Clicking "Next" through the wizard, several properties can be assigned step by step: the fields to be displayed in the report, the template to be used, summary fields if needed, etc. Then click "Finish".



12. On clicking "Finish", the report is displayed in Preview mode.





13. Now the Xcelsius (SWF) file has to be integrated, and the formatted report designed according to the user's needs. Go to design mode, click INSERT -> FLASH, select the SWF file and place it in the Crystal report.



14. Now the values have to be mapped from the database which we are using for the Crystal report. Right-click the SWF file, select "Flash Data Expert" and map the values for the SWF file.



15. Drag and drop the Sheet1_.State field to the Insert Row Label area. Drag and drop the Sheet1_.Population-2009, Sheet1_.Population-2010 and Total fields to the Insert Data Values area.

16. Now click "OK" and save the Crystal report. The Crystal report can be exported to "PDF" or "HTML" so that the report can be visualized interactively. Specify the path and export the report in HTML 4.0; opening the HTML file from the export path shows the generated report.



17. When the database (Excel sheet) is updated with more records, clicking the "Refresh Data" icon in the Crystal report updates the data in the Crystal report. The Xcelsius file is refreshed as well and displays in the same manner as the Crystal report.



18. Finally, after refreshing the Crystal report, save it and export it again in HTML 4.0 format.

DB Connect Usage in SAP BI 7.0

Use Of Scenario : To understand the main usage of DB Connect in BI 7.0.




Step Wise process:



Introduction :



In SAP NetWeaver BI 7.0, we'll study how to implement DB Connect, rather than the common usage of flat files. Using DB Connect, BI offers flexible options for extracting data directly into BI from tables and views in database management systems that are connected to BI via connections other than the default connection.



The DB Connect enhancements to the database interface allow you to transfer data straight into BI from the database tables or views of external applications. You can use tables and views in database management systems that are supported by SAP. You use DataSources to make the data known to BI; the data is then processed in BI in the same way as data from all other sources.



It is to be noted that SAP DB Connect only supports certain database management systems (DBMS).

The following are the list of DBMS

MaxDB (previously SAP DB)

Informix

Microsoft SQL Server

Oracle

IBM DB2/390, IBM DB2/400, IBM DB2 UDB



Types:

There are two types of classification: the BI DBMS and the source DBMS.

Both of these DBMS are supported on their respective operating system versions only if SAP has released a DBSL for that combination. If not, the requirements are not met, and DB Connect cannot be performed.



In this process we use a DataSource to make the data available to BI and transfer it to the respective InfoProviders defined in the BI system. Further, using the usual data acquisition process, we transfer data from the DBs to the BI system.



With this, SAP provides options for extracting data from external systems: in addition to extracting data using the standard connection, you can extract data from tables/views in database management systems (DBMS).







Loading data from SAP Supporting DBMS into BI



Steps are as follows:-

1. Connecting a database to the source system -- direct access to the external DB.

2. Using a DataSource, the structure of the table/view must be made known to BI.

Process Description: Go to RSA1 -> Source Systems -> DB Connect -> Create



Now, create the source system using

1. Logical System Name -> MSSQL

2. Source System Name -> MS SQL DB Connect

3. Type & Release



Now, Under DB Connect, we can see the name of our Source System (MS SQL DB Connect)

The logical DB Connect name is MSSQL. In DataSources we need to create an application component area to continue.

Go to RSA1 -> DataSources -> Create Application Component



After creating an application component area called "ac_test_check", we now have to create a DataSource in the component area. So right-click the application component area -> Create DataSource (as in the figure below).



The DataSource name here is "ds_ac_tech".

The source system here is the defined "MSSQL".

The type of DataSource that we have here is "Master Data Attributes".



The screenshot below describes how to perform extraction or loading using a table/view. As the standard adapter is "Database Table" (by default), we can specify the table/view here.



Now, choose the data source from the DB Object Names.



Now, we have selected the “EMPLOYEES” as the Table/View.



Or we can choose the table/view -> "REGION".



We have two database fields:

Region ID

Region Description





Now, the DataSource has to be activated before it is loaded, so we "Activate" it once.



After activation, the data records (4) are displayed: Eastern, Western, Northern & Southern.



Right-click the DataSource DS_AC_TEST -> Create InfoPackage



We now create an Info package called “IP_DS_AC_TECH”, with Source system as MSSQL



Once done, we schedule the InfoPackage -> "Start".

Now, we need to create an InfoArea, in order to create an InfoProvider (like an InfoCube).

After creating the InfoCube, we check for the data in the PSA via "Manage the PSA".

This can also be done using the key combination Ctrl + Shift + F6.

The number of records displayed: 4.

Using the PSA maintenance, we can view the following:

1.Status

2.Data Packet

3.Data records

4.REGION ID

5.REGION Description



The table/view "CUSTOMERS" is now chosen for extraction. The next tab, "PROPOSAL", describes all the database fields; here we have to specify the DataSource fields, types & lengths.



Now, we create an InfoPackage -> IP_TEST_CUST



Now, go to RSA1 -> InfoObjects -> InfoObject (Test) -> Create InfoObject Catalog



Now, we can preview the Region ID & Region Description.

We now create 2 InfoObjects & pass the Region ID & Region Description to the 2 objects.

1. Region Description -> Region2 (Region)

2. Region Description -> reg_id (Region ids)

Now, these are the 2 objects created under the InfoObject catalog "test2":

- Region (REGION2)

- Region ids (REG_ID)



We insert the characteristic as InfoProvider for the master data loading: in the "InfoProvider" section -> Insert Characteristic as InfoProvider.



Now, we create a transformation using “Create Transformation” on the Region ids (Attributes)



We now choose the source system after this -> MSSQL -> MS SQL DB CONNECT



After checking the transformation mappings on the Region ID, we now create a DTP on the same Region ID (attribute).

We choose the (default) target as InfoObject -> Region ID -> REG_ID, & the source type as DataSource with the source system -> MSSQL.



After this step, we proceed with creating an InfoPackage -> IP_DS_TEDDY, which has MSSQL as its source system. Further, we start the scheduling of the InfoPackage. Once the InfoPackage has been triggered, we can go to "Maintain PSA" & monitor the status of the data in the PSA.

Further, we EXECUTE the DTP, and we can monitor the transfer of data from the PSA -> InfoCube.



Results

Thus, the DB Connect process has been successfully demonstrated in SAP BI 7.0

RFC Connection

Use Of Scenario : Used for connecting legacy systems and uploading data from SAP or non-SAP systems and vice versa.




Step Wise process:



Step1 :- On the BW side :-

1. Create a logical system: SPRO -> ALE -> Sending & Receiving Systems -> Logical System -> New Entries (e.g. 800 BWCLNT800)

2. Assign client to logical System.



Step 2 :- Follow the same procedure on the R/3 side to create a logical system for R/3.



Step3 :- BW side :- Create RFC Connection in SM59.

RFC destination name - should be the logical system name of the R/3 system.

Connection type :- 3

1st tab, Technical Settings:

Target host :- IP address of the R/3 server.

System number : 03

2nd tab, Logon/Security:

Language - EN

Client - R/3 client number

User - R/3 user

Password - R/3 password.



Step 4:- R/3 same procedure SM59

RFC destination name - should be the logical system name of the BW system.

Connection type :- 3

1st tab, Technical Settings:

Target host :- IP address of the BW server.

System number : 03

2nd tab, Logon/Security:

Language - EN

Client - BW client number

User - BW user

Password - BW password.



Step 5 :- SPRO -> SAP Reference IMG -> BIW -> Links to Other Systems -> Links between R/3 and BW

Create the ALE user in the source system -> select BWALEREMOTE -> back



Step 6 :- In bw



su01

username BWREMOTE

profiles S_BI_WHM_RFC

S_BI_WX_RFC

Save.



Step 7 :- In R/3

su01

username ALEREMOTE

profiles S_BI_WHM_RFC

S_BI_WX_RFC



Save



Step 8 :- In R/3

Create RFC user

su01

user RFCUser create

usertype system

pwd 1234

profiles SAP_ALL

SAP_NEW

S_BI_WX_RFC



Step9 :-



RSA1

SE16

In table RSADMINA, maintain the default BW client in the field BWMANDT (system profile parameters can be checked via RZ10).



Step10 :- In bw



su01

user RFCUser create

usertype system

pwd 1234

profiles SAP_ALL

SAP_NEW

S_BI_WHM_RFC



Step11 :- In bw

RSA1 - Source system -> create

RFC destination



Target system: host name of the R/3 server

SID:

System no.:

Source system user: ALEREMOTE

pwd:

Background user: BWREMOTE

PWD:

Uploading Sales Order data from SAP R/3 to BI using generic DataSources

PROCESS:


Steps to be followed in R/3



Step 1: Go to T-code RSO2. Give the DataSource name under Transactional Data (as we are uploading transactional data) and click on Create.

Step 2:

1. Specify the application component name.

2. Fill in the text column with a meaningful description.

3. In "Extraction from DB View", give the table/view name from which you want to extract data.

4. SAVE

Note: Press F4 to choose the application component name.

Step 3: Clicking on SAVE will lead to the screen below.

Note: We can set "Selection", "Hide" etc. for the fields, as shown in the screenshot below.

Step 4: Go to T-code RSA6 to check whether the DataSource was successfully activated.

Note: Only DataSources which are activated will be displayed in transaction RSA6.

Steps to be followed in BI

Note: The RFC connection should be configured before you replicate data to BI system

Step1:

1.Goto RSA1 => DataSources

2. Right-click the Sales and Distribution tree => Replicate Metadata. 3. Activate.

Step 2: Once the DataSource is replicated, you get the popup below; we need to choose "as DataSource", as we are using BI 7.0.

Step 3: Create InfoPackage on the DataSource

InfoPackage: It acts as a link to extract data from the source system and deliver it to the PSA.

Note: We can restrict data in the "Data Selection" tab once the InfoPackage is created, as shown in the screenshot below.

Step 4: Goto “Schedule” and click on “Start”

Step 5: Click on icon “Monitor” on the toolbar to view if the data is successfully loaded to PSA

PSA: Persistent Staging Area is a staging area where you can load the data temporarily before loading to target

There you can see the status "Request successfully loaded to PSA"; the data is then loaded into the PSA.


Steps to build the Target system



Step 6:

1. Right-click the DataSource and click on "Create InfoCube".

Step 7: Create the dimension table. The dimension table will have all the "primary keys"; in our example VBELN (Sales Doc No) is the primary key.

Note: We need to define at least one KeyFigure

Step 8: Create DTP

DTP: A DTP (Data Transfer Process) is used to transfer data from the PSA to a data target; in our case the target is the InfoCube.

There we can see the source and target of the DTP.

Note: In our case the source is DataSource and Target is InfoCube

Select Extraction Mode as “Full” as we are loading the data for the first time

Step 9:

1.Save and Activate

2.Go to “Execute” tab and click on Execute

Below screen shot shows the DTP monitor. Green indicates that the data is successfully loaded to the target system

Step 10: To see if the data is successfully loaded to target system,

Right-click the InfoCube => Manage

Note: When you create DTP a Request Id will be generated as shown in the below screen shot

Step 11: Go to “Contents” => “InfoCube Content” to see the output

Using the Java BI SDK





Why use the BI Java SDK? What benefits does it bring you, and what sorts of business cases can be solved using the SDK? On this page, we illustrate the answers to these questions with a simple real-world business scenario using the BI Java SDK.





The Objective

You are a Java developer with data modeling experience, and your IT team has given you a business question to address. You'd like to integrate your solution into SAP's NetWeaver landscape, deploying a Java application seamlessly onto the Web Application Server, for ease of viewing from any Web browser for all authorized Enterprise Portal users. All you need to accomplish this objective is included with SAP's NetWeaver '04.



Business Question

Your business case deals with a classic inventory problem: calculation of economic order quantity and optimal reorder point. For a given product, the purchase manager needs answers to the following questions:



How many units should I order in one single batch (economic order quantity)?

When is the right time to order (optimal reorder point)?

What is my cost structure, and how does this vary with changes in input costs?
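For reference, the first two figures come from the classical inventory formulas (a sketch; the symbols are standard textbook notation and are not spelled out in the scenario itself):

$$Q^* = \sqrt{\frac{2DS}{H}} \qquad \text{(economic order quantity)}$$

$$R = d \cdot L \qquad \text{(reorder point, ignoring safety stock)}$$

where $D$ is the annual demand, $S$ the fixed cost per order, $H$ the annual holding cost per unit, $d$ the average demand per day and $L$ the lead time in days.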

Why Use the BI Java SDK?

You decide to implement the scenario using the BI Java SDK primarily for the following reasons:



The information stored in SAP's Business Information Warehouse (BW) is available only in disparate objects, and the team does not have the time or budget to quickly develop an additional InfoProvider in this particular case.



BW does not offer the inherent simulation capabilities you need. While it offers variable usage in formulas and the deployment of Business Planning and Simulation (BPS), these approaches seem like overkill for a rather simple business problem.

You are a Java developer, and you wish to leverage this ability and the flexibility of custom application design together with the ease of integration and deployment of SAP's NetWeaver.

You'd like to integrate data from both relational and multidimensional (OLAP) data sources into one application.

What You Will Need

You'll use the following components, delivered with NetWeaver:



BI Java SDK

BI XMLA Connector - to access a BW InfoCube

BI SAP Query Connector - to access a BW InfoSet





Process Flow

The process flow between the various components for this business scenario is illustrated in the "swim-lane" diagram below:











As diagrammed, the process flows between the components as follows:



Input Information

The user enters some information into an iView in the Enterprise Portal, and requests results.





Request Information from BW

An application created by the BI Java SDK receives the request and initializes the communication to BW.

The application then requests some basic information from BW, as described in the next steps.





Average Demand

An OLAP query is executed against an InfoCube in BW to request average demand.





Lead Time

An InfoSet Query is executed against a BW InfoObject to request the lead time.





Collect Information

The BI Java SDK application collects the information returned as results from BW.





Perform Calculation

The BI Java SDK application calculates economic order quantity and optimal reorder point based on formulas you defined.





Output Results

The iView reads the results of the calculations made by the SDK, and presents the numbers and a graph of the cost structure.

The Result

As a result of the user's input in the iView, he or she receives figures for economic order quantity and optimal reorder point. The user now knows, for example, that one batch of orders should contain 316 units, and a new order should be placed as soon as the inventory falls below 243 units.



In addition to the figures, the iView also displays a graph which shows the changes in the cost structure (annual cost), as a function of the order quantity.







The iView might look something like this:

Implementation

Your basic steps to implement this scenario are as follows:



Define your formulas for the calculation of economic order quantity and optimal reorder point.

Activate the necessary metadata objects from the BW Business Content.

Load the necessary data into your BW system.



Uses:

Using the BI Java SDK, the application:

- connects via the BI XMLA Connector to a query in a BW InfoCube

- connects via the BI SAP Query Connector to an InfoSet query based on a BW InfoObject

- executes queries against the InfoProviders

- receives data from the InfoProviders

- performs calculations on the results

- formulates the results and sends the data to the Portal for display in an iView





The figure below shows what it might look like to work with the BI Java SDK within the integrated development environment Eclipse:







Once you're done with your application, all you have to do is deploy it:

Deploy the BI Java Connectors into NetWeaver's Web Application Server.

Deploy the Java application as an iView into NetWeaver's Enterprise Portal.

Process Chains for DSO

Use Of Scenario : If you want to automatically load and schedule a hierarchy, you can include this as an application process in the procedure for a process chain.




Scenario:



A process chain is a sequence of processes that are scheduled to wait in the background for an event. Some of these processes trigger a separate event that can, in turn, start other processes. It looks similar to a flow chart. You define the list of InfoPackages/DTPs that is needed to load the data, say delta or full load. Then you schedule the process chain hourly, daily, monthly, etc., depending on the requirement.



Note: Before you define process chains, you need to have the objects ready, i.e. DSOs, cubes, etc.



Steps to Create Process Chains



Step 1: Go to RSPC transaction. Click on Create. Give Process chain name and Description.



Step 2: Give the Start process name and description and click on Enter



Note: Each process chain should start with a Start Process



Step 3: The next screen will show the Scheduling options.



There are two options:

1. Direct Scheduling
2. Start Using Meta Chain or API

In my example I have chosen Direct Scheduling, as I am processing only one chain, i.e. for the DSO. Click on "Change Selections".

In the screenshot below, you can give the scheduling details, i.e. Immediate, Date & Time, etc., and click on SAVE.

The Screen shot below indicates that we have started the process.

Step 4:

1. Click on the Process Types icon as shown in the figure below; you will get a list of options.
2. In my example I am scheduling for a DSO. To process it we need the InfoPackage and the DTPs for the corresponding DSO.
3. Open the tree Load Process and Post Processing; we need to drag and drop "Execute InfoPackage".

Step 5: Once you drag and drop "Execute InfoPackage", we get the popup below. We need to key in the InfoPackage name: press F4, choose your InfoPackage and click on ENTER.



Step 5 (contd.): Once you drag & drop the InfoPackage, the corresponding DTPs and the corresponding Active Data table are automatically called.



Step 6: Save + Activate and Execute the Process.



Step 7: Once the process is completed, we can see the whole chain converted to Green, Which indicates the process is successfully completed



Note: In case of errors in any step, the particular step will be displayed in red.



Step 8: We can see whether the data was successfully updated by right-clicking on the DataStore object.



Step 9: Selecting "Administer Data Target" will lead you to the InfoProvider administration.

Step 10: Click on “Active Data” tab to see the data successfully uploaded.



Note:

1. Similarly, the process can be done for cubes, master data and transactional data.

2. When you create process chains, by default they are stored in "Not Assigned". If you want to create your own folders:

a) Click on the icon "Display Components" on your toolbar => choose F4 => Create

b) Give an appropriate name for the folder => Enter

c) SAVE + ACTIVATE

Translation of BI objects

Use : With this, a user can view the objects or reports in the required language, which makes implementing and operating the Data Warehouse user-friendly.




Target projects:

Development and support based on SAP-BW 7.0



Description:

This document can be used for a clear understanding of the functionality of language translation in SAP BW 7.0. In order to support BI system users who utilize the system in their first language, SAP NetWeaver offers translation tools that allow us to translate the descriptions of BI objects from their source language to a target language using the Translation Environment of the SAP Web Application Server (ABAP).



As we know, SAP NetWeaver BI systems serve as enterprise data warehouses, used globally across different time zones and languages.

The technique applies to the translation of BW objects as they are visible in reports to a high number of end users, and also to BI data-flow objects visible in the Data Warehousing Workbench. It is useful when implementing and operating an enterprise data warehouse globally.



System settings for language translation and steps to be followed:



Language environment setup: call transaction SLWA. On the Languages tab page, choose Languages.

The Translation Languages dialog box is displayed. Choose Add Languages and select the required translation language. Choose a priority for the language. Specify a server for the language.



Now, save your settings. To add a target language to a client, call transaction SLWA again.

On the Languages tab page, choose Clients. Select the client to be defined.

In the Clients dialog box, enter the target languages for translation in the system.



If you want to specify the client as the translation client for all of the target languages defined in the system, choose Insert All Languages. Save your entries. If you want to create more translation clients, repeat these steps. Repeat these steps for all of the clients in which texts need translating.



Add clients to a target language Call transaction SLWA.

On the Languages tab page, choose Add Client. A dialog box appears containing all the available target languages in the system.



Select the target language. In the Clients dialog box, enter the client or clients in which texts need translating into the selected target language. Save. If you want to add more target languages to translation clients, repeat these steps.





Required authorizations (transactions LXE_MASTER / SLWA / SLWB):

Object  Value  Purpose
OBJL    1      Create object lists
LAOB    1      Define object types
GRAP    2      Create translation graphs
WRKL    1      Create a global work list for each target language
LANG    1      Create target languages / set priorities
LACL    1      Add translation clients
USER    1      Add translation clients
COGA    1      Assign collections to a translation graph
COSE    1      Assign collections to a translation graph
USCO    1      Assign collections to a translator
PREL    1      Release object lists/transport requests for translation
A6      1      Set object types for automatic distribution
A1      1      Set up the target language
A5      1      Assign packages





Step by step solution for Language Translation:

Create a piece list in the Data warehouse Workbench.

The piece list is basically a collection of objects to be translated.

Go to transaction RSA1 -> Translation tab of the Data Warehouse Workbench. Now create a piece list with the BW objects to be translated.

Collect the relevant objects in the Data warehouse workbench

RSA1 -> Translation

E.g.: in this example, four InfoObjects have been collected with the grouping option "Only necessary objects".



Create a piece list for translation: in the dialog box, choose Create to create the piece list.

At the "Select Request Type" dialog, select "Piece List".



Create a piece list for Translation

Give the piece list a technical name (in customer namespace, starting with “Y” or “Z”) and a description. Save your piece list.



Create a piece list for Translation Assign your piece list to a package.



Create a piece list for Translation Choose continue in the Piece List Translation Dialog Box.



Create an object list: go to transaction LXE_MASTER. On the Evaluations tab, choose Object List.



Create an object list: as shown below, in the Object List dialog box, choose the Create option. Give a description for your object list and save. Assign the piece list:

To include the piece list in the object list, press the "No Entries" button next to Transports (under ABAP System).



Enter the name of the piece list you have created. If you want to evaluate several piece lists, create an entry for each one of them. Enter "*" as the transport type and "*" as the transport status.

Save your entries. Save the Object List Parameters.



Generate the object list: choose "Activate", then generate the object list.



Schedule the background job which will generate the object list (the name of the job will be OBJLIST_xxxxx, where xxxxx = number of the object list).



Editing a Work list in the Translation Environment

Once the job for generating the object list has finished, go to the Translation Environment (transaction SE63).

In the next steps, all objects contained in the object List will be translated using a Work list.



Check your Default settings.

Go to Transaction SE63; from the Utilities Menu choose settings.

Here, it is necessary to ensure that the Source Language and Target Language settings correspond to your translation objective.



Check the specific format in which languages must be entered (for example, "US English" has the language code "enUS", not simply EN). Save your settings.







Access your personal work list.

In the Translation Environment, from the Worklist menu, choose Standard. Enter a 4-digit work list number. If you want to reset an existing personal work list and/or a reservation number for accessing a personal work list, select the Reset Worklist / Reservation checkbox. Choose "Continue".



A) Access your personal work list using an object list. (Recommended approach).



This screen appears after you have entered your work list number in the previous screen (SE63 -> Worklist -> Standard) and chosen "Continue". Select your object list using the value help. Deactivate (all other) selections using the "Deactivate Selection" flag. Load the work list using the "Load Objects" button.



B) Access your personal work list using a piece list (alternative approach, for a small number of objects only).



Enter your piece list using the value help (transport object or transport request). Deactivate (all other) selections using the "Deactivate Selection" flag. Load the work list using the "Load Objects" button. Enter target-language texts for the BI objects in the work list.



In order to translate a single object, double-click on it. In this simple scenario, every single object is translated individually and sequentially. If you want to translate a large number of objects more efficiently, you may use proposal pools and automatic distribution. Please refer to the standard documentation on the translation tools for more information on these features.



Enter the equivalent text in the target language and save. Set the standard quality status to "S" by clicking on the toolbox icon and choosing the Status "S" button. Save the completed work list.



Steps to be followed after changing the text for the required objects.

Transfer texts to the M version of the BI objects using program RSO_AFTER_TRANSLATION. Texts are only translated for objects in the A or D versions; texts of objects in the modified version (M) are not translated, so a specific step is required. Otherwise, when you call the maintenance transactions in the target language, the system displays the old texts (if available) for these objects first. This program transfers the newly translated texts to the modified version (M) of the BI objects.



Run the program RSO_AFTER_TRANSLATION.

Go to transaction SE38 and execute the program RSO_AFTER_TRANSLATION. Execute the program with the following parameters.

Request/Task: Name of the piece list.

Language Key: Source Language.

Language key: Target Language.



Verify the translation in the respective object maintenance screen.

Log on to the BI system in the target language (Spanish (ES) here). Go to RSA1 -> InfoObject maintenance in the Data Warehousing Workbench. Find the InfoObject for which the translation was done:



Result:

Log in to the BI system in the source language, here English (EN).

Enhancing standard BI content master data extractors

Use Of Scenario : SAP provides standard BI Content master data extractors to extract data from R/3 to the BI system. When we require additional data to be extracted, beyond what the standard BI extractor extracts from R/3, we need to enhance the R/3 DataSource extractor.


Description:



Here we are enhancing the standard extractor 0CANDIDATE_ATTR with the candidate name. We have the candidate name in the Z-table ZCANDIDATE. We need to enhance 0CANDIDATE_ATTR to extract the field CAND_NAME.



Step-Wise Process:

Step 1: Enhance the extraction structure first.



Go to transaction RSA6, select the standard extractor to be enhanced and click on "Enhance Extraction Structure". We can also give our own name for this structure. Click on the OK button.

Add the fields to the new structure and activate it.



Step 2: Enhance the DataSource extractor and write code to fill the field CAND_NAME. Select the extractor and click on "Function Enhancement", or go to transaction CMOD.

Create a project, click on "Enhancement Assignments", search for enhancement "RSAP0001" and click on "Components".

The list of function exits will be displayed.



1. EXIT_SAPLRSAP_001 - allows filling user-defined fields in the extraction structure for transaction data.

2. EXIT_SAPLRSAP_002 - allows filling user-defined fields in the extraction structure for master data or texts.

3. EXIT_SAPLRSAP_003 - allows changing the contents of the transfer table generated for a text request.

4. EXIT_SAPLRSAP_004 - allows changing the contents of the transfer table created for a hierarchy request.



Double-click on "EXIT_SAPLRSAP_002", then double-click on the include where we can write the enhancement code. Write the code below to fill the new fields, and activate the project.













Code:

CASE i_datasource.
  WHEN '0CANDIDATE_ATTR'.
    DATA: loc_cand_data TYPE rcf_s_bw_0candidate_attr,
          loc_name      TYPE rcf_s_bw_0candidate_attr-cand_name.

    LOOP AT i_t_data INTO loc_cand_data.
*     select the cand_name from ZCANDIDATE for the corresponding candidate
      SELECT SINGLE cand_name FROM zcandidate INTO loc_name
        WHERE objid = loc_cand_data-objid.
*     assign it to the work area
      loc_cand_data-cand_name = loc_name.
*     modify the record
      MODIFY i_t_data FROM loc_cand_data.
      CLEAR loc_cand_data.
    ENDLOOP.
ENDCASE.

Again go to RSA6 to change and generate the DataSource.
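As a side note, the SELECT SINGLE inside the loop costs one database round trip per record. A hedged variant of the same exit logic (same names as in the example above) that reads all candidate names in a single access:

* buffer all candidate names, then fill the records from the buffer
DATA: lt_cand TYPE STANDARD TABLE OF zcandidate,
      ls_cand TYPE zcandidate,
      ls_data TYPE rcf_s_bw_0candidate_attr.

IF i_t_data[] IS NOT INITIAL.
  SELECT objid cand_name FROM zcandidate
    INTO CORRESPONDING FIELDS OF TABLE lt_cand
    FOR ALL ENTRIES IN i_t_data
    WHERE objid = i_t_data-objid.
  SORT lt_cand BY objid.
ENDIF.

LOOP AT i_t_data INTO ls_data.
  READ TABLE lt_cand INTO ls_cand
       WITH KEY objid = ls_data-objid BINARY SEARCH.
  IF sy-subrc = 0.
    ls_data-cand_name = ls_cand-cand_name.
    MODIFY i_t_data FROM ls_data.
  ENDIF.
ENDLOOP.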

Uncheck the "Hide field" and "Field only" options, and select the "Selection" option if any field needs to be available as a selection option for data extraction. Save and generate the DataSource.

Now go to RSA3 and execute the extractor 0CANDIDATE_ATTR to check the data.

Enhance the InfoObject and replicate the DataSource in the BI system.

SAP BW Production Support Issues and Solutions at All Levels

Production Support Errors :


1) Invalid characters while loading: When you are loading data, you may get special characters like @#$% etc., and BW will throw an 'invalid characters' error. In that case, go to transaction RSKC, enter all the invalid characters and execute; they are stored in the RSALLOWEDCHAR table. Then reload the data. You won't get the error any more, because these characters have now been made permissible via RSKC.
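Besides maintaining RSKC, a common defensive measure is to cleanse the field in a transfer or field routine before it reaches the target. A minimal sketch, assuming a 60-character text field and a default set of permitted characters (the source field name txtmd and the routine context are hypothetical):

* replace every character that is not in the permitted set with a space,
* so the load does not abort with "invalid characters"
DATA lv_text TYPE c LENGTH 60.
CONSTANTS lc_allowed TYPE string VALUE
  ` !"%&'()*+,-./:;<=>?_0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ`.

lv_text = source_fields-txtmd.   " hypothetical source field
WHILE lv_text CN lc_allowed.     " CN: contains a character NOT in the set
  lv_text+sy-fdpos(1) = ' '.     " SY-FDPOS = offset of the offending char
ENDWHILE.
result = lv_text.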
2) IDoc or tRFC error: We can see the following error on the "Status" screen: "Sending packages from OLTP to BW lead to errors."
Diagnosis: No IDocs could be sent to SAP BW using RFC.
System response: There are IDocs in the source system ALE outbox that did not arrive in the ALE inbox of SAP BW.
Further analysis: Check the tRFC log. You can get to this log using the wizard or the menu path "Environment -> Transact. RFC -> In source system".
Removing errors: If the tRFC is incorrect, check whether the source system is completely connected to SAP BW. Check especially the authorizations of the background user in the source system.
Action to be taken: If the source system connection is OK, reload the data.



3) Processing is overdue for processed IDocs.
Diagnosis: IDocs were found in the ALE inbox for the source system that have not been updated; processing is overdue.
Error correction: Attempt to process the IDocs manually. You can process them using the wizard, or by selecting the IDocs with incorrect status and processing them manually.
Analysis: Looking at the above error messages, we find that the IDocs are sitting in the ALE inbox for the source system without being updated.
Action to be taken: Process the IDocs manually via RSMO -> Header tab -> "Process manually".



4) Lock not set for loading master data (text / attribute / hierarchy).
Diagnosis: User ALEREMOTE is preventing you from loading texts to characteristic 0COSTCENTER. The lock was set by a master data loading process with the request number.
System response: For reasons of consistency, the system cannot allow the update to continue, and it has terminated the process.
Procedure: Wait until the process that is causing the lock is complete. You can call transaction SM12 to display a list of the locks. If a process terminates, the locks that were set by it are reset automatically.
Analysis: Looking at the above error messages, we find that the object is locked by a parallel master data load.
Action to be taken: Wait for some time & try reloading the master data manually from the InfoPackage in RSA1.



5) Flat file loading error.
Detail error message / Diagnosis: Data records were marked as incorrect in the PSA.
System response: The data package was not updated.
Procedure: Correct the incorrect data records in the data package (for example by manually editing them in PSA maintenance). You can find the error message for each record in the PSA by double-clicking on the record status.
Analysis: The PSA contains incorrect records.
Action to be taken: There are two methods to resolve this: (i) rectify the data in the source system & then load the data again; (ii) correct the incorrect records in the PSA & then upload the data into the data target from there.



6) Object requested is currently locked by user ALEREMOTE.
Detail error message / Diagnosis: An error occurred in BI while processing the data. The error is documented in an error message: "Object requested is currently locked by user ALEREMOTE".
Procedure: Look in the lock table to establish which user or transaction is holding the requested lock (Tools -> Administration -> Monitor -> Lock entries).
Analysis: The object is locked; there must be some other background process running.
Action to be taken: Delete the error request, wait for some time, and repeat the chain.



Idocs between R3 and BW while extraction

1) When BW executes an InfoPackage for data extraction, the BW system sends a request IDoc (RSRQST) to the ALE inbox of the source system. The information bundled in the request IDoc (RSRQST) is:

Request Id ( REQUEST )

Request Date ( REQDATE )

Request Time (REQTIME)

Info-source (ISOURCE)

Update mode (UPDMODE )

2) The source system acknowledges the receipt of this IDoc by sending an info IDoc (RSINFO) back to the BW system. The status is 0 if everything is OK, or 5 for a failure.

3) Once the source system receives the request IDoc successfully, it processes it according to the information in the request. This request starts the extraction process in the source system (typically a batch job with a naming convention that begins with BI_REQ). The request IDoc status now becomes 53 (application document posted); this status means the system does not process the IDoc any further.

4) The source system confirms the start of the extraction job to BW by sending another info IDoc (RSINFO) with status = 1.

5) Transactional Remote Function Calls (tRFCs) extract and transfer the data to BW in data packages. Another info IDoc (RSINFO) with status = 2 informs BW of the data package number and the number of records transferred.

6) At the conclusion of the data extraction process (i.e., when all the data records have been extracted and transferred to BW), an info IDoc (RSINFO) with status = 9 is sent to BW, which confirms the extraction process.
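To cross-check this flow from either system, you can look at the IDoc control records directly. A small sketch (table EDIDC and its fields are standard; the selection itself is just an example):

* list today's BW request IDocs and their statuses from the IDoc
* control table EDIDC (message type RSRQST = BW data request)
DATA lt_idocs TYPE STANDARD TABLE OF edidc.
FIELD-SYMBOLS <ls_idoc> TYPE edidc.

SELECT * FROM edidc INTO TABLE lt_idocs
  WHERE mestyp = 'RSRQST'
    AND credat = sy-datum.

LOOP AT lt_idocs ASSIGNING <ls_idoc>.
  WRITE: / <ls_idoc>-docnum, <ls_idoc>-status, <ls_idoc>-cretim.
ENDLOOP.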



When is reconstruction allowed?



1. When a request is deleted in an ODS/cube, will it be available under reconstruction?

Ans: Yes, it will be available under the Reconstruction tab, but only if the processing went through the PSA. Note: This function is particularly useful if you are loading deltas, that is, data that you cannot request again from the source system.

2. Should the request be turned red before it is deleted from the target, so as to enable reconstruction?

Ans: To enable reconstruction you do not need to make the request red, but to enable a repeat of the last delta you have to make the request red before you delete it.

3. If the request is deleted with its status green, does the request get deleted from the Reconstruction tab too?

Ans: No, it won't get deleted from the Reconstruction tab.

4. Does the behaviour of reconstruction and deletion differ when the target is different (ODS vs. cube)?

Ans: Yes.





How to Debug Update and Transfer Rules

1. Go to the Monitor.

2. Select the 'Details' tab.

3. Click 'Processing'.

4. Right-click any data package.

5. Select 'Simulate update'.

6. Tick the checkboxes 'Activate debugging in transfer rules' and 'Activate debugging in update rules'.

7. Click 'Perform simulation'.





Error loading master data - Data record 1 ('AB031005823') : Version 'AB031005823' is not valid

Problem: Created a flat-file DataSource for uploading master data. The data loaded fine up to the PSA. Once the DTP which runs the transformation is scheduled, it ends up in the error below:





Solution: After referring to many links on SDN, I found that since the data comes from an external file, it will not match the SAP-internal format. So we should mark the "External" format option in the DataSource (in this case for Material) and apply the conversion routine MATN1, as shown in the picture below.



Once the above changes were done, the load was successful.

Knowledge from SDN forums: Conversion takes place when converting the contents of a screen field from display format to SAP-internal format and vice versa, and when outputting with the ABAP statement WRITE, depending on the data type of the field.



Check the info:
http://help.sap.com/saphelp_nw04/helpdata/en/2b/e9a20d3347b340946c32331c96a64e/content.htm
http://help.sap.com/saphelp_nw04/helpdata/en/07/6de91f463a9b47b1fedb5be18699e7/content.htm

This FM (MATN1) adds leading zeros to the material number, because when you query MAKT with MATNR as just 123 you will not get any values; you should use this conversion exit to add the leading zeros.



Function module to make yellow request to RED

Use SE37 to execute the function module RSBM_GUI_CHANGE_USTATE. On the next screen, enter the request ID for I_REQUID and execute. On the following screen, select the 'Status Erroneous' radio button and continue. This function module changes the status of the request from green/yellow to RED.



What will happen if a request in green is deleted?

Deleting a green request does no harm if you are loading via PSA: you can go to the 'Reconstruction' tab, select the request and choose 'Insert/Reconstruct' to get it back. Delta loads are the exception. If you need to repeat a delta load from the source system and you delete the green request, you will not get those delta records from the source system again. Explanation: when the request is green, the source system gets the message that the data was loaded successfully, so the next time a delta load is triggered, only new records are sent. If for some reason you need to repeat the same delta load from the source, making the request red (before deleting it) tells the source system that the load was not successful, so it does not discard these delta records. The delta queue in R/3 keeps them until the next upload is performed successfully in BW; the same records are then extracted into BW in the next requested delta load.



Appearance of values in the characteristic input help screen

Which settings can I make for the input help and where can I maintain these settings? In general, the following settings are relevant and can be made for the input help for characteristics:

Display: Determines the display of the characteristic values, with the options "Key", "Text", "Key and text" and "Text and key".

Text type: If there are different text types (short, medium and long text), this determines which text type is used to display the text.

Attributes: You can determine which attributes of the characteristic are displayed initially in the input help. When a characteristic has a large number of attributes, it makes sense to display only a selected subset. You can also determine the display sequence of the attributes.

F4 read mode: Determines the mode in which the input help obtains its characteristic values. The modes are "Values from the master data table (M)", "Values from the InfoProvider (D)" and "Values from the query navigation (Q)".



Note that you can set a read mode, on the one hand, for the input help for query execution (for example, in the BEx Analyzer or in the BEx Web) and, on the other hand, for the input help for the query definition (in the BEx Query Designer). You can make these settings in InfoObject maintenance using transaction RSD1 in the context of the characteristic itself, in the InfoProvider-specific characteristic settings using transaction RSDCUBE in the context of the characteristic within an InfoProvider, or in the BEx Query Designer in the context of the characteristic within a query. Note that not all the settings can be maintained in all the contexts. The following table shows where certain settings can be made:



Setting                        RSD1   RSDCUBE   BEx Query Designer

Display                         X       X         X

Text type                       X       X         X

Attributes                      X       -         -

Read mode (query execution)     X       X         X

Read mode (query definition)    X       -         -

Note that the respective input helps in the BEx Web as well as in the BEx Tools enable you to make these settings again after executing the input help.





When do I use the settings from InfoObject maintenance (transaction RSD1) for the characteristic for the input help?



The settings made in InfoObject maintenance are active in the context of the characteristic and may be overwritten at higher levels if required. At present, the InfoProvider-specific settings and the BEx Query Designer belong to the higher levels. If the characteristic settings are not explicitly overwritten at the higher levels, the characteristic settings from InfoObject maintenance are active.

When do I use the settings from the InfoProvider-specific characteristic settings (transaction RSDCUBE) for the input help?

You can make InfoProvider-specific characteristic settings in transaction RSDCUBE -> context menu for a characteristic -> InfoProvider-specific properties. These settings are active in the context of the characteristic within an InfoProvider and may be overwritten at higher levels if required. At present, only the BEx Query Designer belongs to the higher levels. If the settings are not explicitly overwritten at the higher levels and settings are made in the InfoProvider-specific settings, then these are active. Note that they in turn overwrite the settings from InfoObject maintenance.

When do I use the settings in the BEx Query Designer for characteristics for the input help?

In the BEx Query Designer, you can make the input-help-relevant settings on the tab pages "Display" and "Advanced" in the "Properties" area when the characteristic is selected. These settings are active in the context of the characteristic within a query and cannot be overwritten at higher levels at present. If the settings are not made explicitly, the settings made at the lower levels take effect.



How to suppress messages generated by BW queries

Standard Solution :

You might be aware of a standard solution. In transaction RSRT, select your query and click on the "message" button. Now you can determine which messages for the chosen query are not to be shown to the user in the front-end.



Custom Solution:

Only selected messages can be suppressed using the standard solution. However, there's a clever way to implement your own solution, and you don't need to modify the system for it. All messages are collected using the function module RRMS_MESSAGE_HANDLING, so all you have to do is implement an enhancement at the start of this function module. Now it's easy: code your own logic to check the input parameters, like the message class and number, and skip the remainder of the processing logic if you don't want this message to show up in the front-end.



FUNCTION rrms_message_handling.
*   (the IMPORTING parameters and EXCEPTIONS of the standard
*   function module remain unchanged)

* Enhancement implemented at the start of the function module:
ENHANCEMENT 1 Z_CHECK_BIA.
* Filter BIA message RSD_TREX 136 - just testing it
  IF i_class = 'RSD_TREX' AND i_type = 'W' AND i_number = '136'.
    EXIT.
  ENDIF.
ENDENHANCEMENT.



How can I display attributes for the characteristic in the input help?

Attributes for the characteristic can be displayed in the respective filter dialogs in the BEx Java Web or in the BEx Tools using the settings dialogs for the characteristic. Refer to the related application documentation for more details.In addition, you can determine the initial visibility and the display sequence of the attributes in InfoObject maintenance on the tab page "Attributes" -> "Detail" -> column "Sequence F4". Attributes marked with "0" are not displayed initially in the input help.



Why do the settings for the input help from the BEx Query Designer and from the InfoProvider-specific characteristic settings not take effect on the variable screen?

On the variable screen, you use input helps for selecting characteristic values for variables that are based on characteristics. Since variables from different queries and from potentially different InfoProviders can be merged on the variable screen, you cannot clearly determine which settings should be used from the different queries or InfoProviders. For this reason, you can use only the settings on the variable screen that were made in InfoObject maintenance.



Why do the read mode settings for the characteristic and the provider-specific read mode settings not take effect during the execution of a query in the BEx Analyzer?



The query read mode settings always take effect in the BEx Analyzer during the execution of a query. If no setting was made in the BEx Query Designer, then default read mode Q (query) is used.



How can I change settings for the input help on the variable screen in the BEx Java Web?



In the BEx Java Web, at present, you can make settings for the input help only using InfoObject maintenance. You can no longer change these settings subsequently on the variable screen.



Selective Deletion in Process Chain

The standard procedure :

Use Program RSDRD_DELETE_FACTS

1. Create a variant which is stored in the table RSDRBATCHPARA for the selection to be deleted from a data target.

2. Execute the generated program.

Observations:

When executed, the generated program deletes the data from the data target based on the given selections. It also removes the variant created for this selective deletion from the RSDRBATCHPARA table, so the generated program won't delete anything on a second execution.



If we want to schedule this program in a process chain, we can comment out the step where the program deletes the generated variant.



Eg:

REPORT ZSEL_DELETE_QM_C10.

TYPE-POOLS: RSDRD, RSDQ, RSSG.

DATA:
  L_UID     TYPE RSSG_UNI_IDC25,
  L_T_MSG   TYPE RS_T_MSG,
  L_THX_SEL TYPE RSDRD_THX_SEL.

L_UID = 'D2OP7A6385IJRCKQCQP6W4CCW'.

IMPORT I_THX_SEL TO L_THX_SEL
  FROM DATABASE RSDRBATCHPARA(DE) ID L_UID.

* Deletion of the variant is commented out so the program can be
* executed repeatedly (e.g. from a process chain):
* DELETE FROM DATABASE RSDRBATCHPARA(DE) ID L_UID.

CALL FUNCTION 'RSDRD_SEL_DELETION'
  EXPORTING
    I_DATATARGET            = '0QM_C10'
    I_THX_SEL               = L_THX_SEL
    I_AUTHORITY_CHECK       = 'X'
    I_THRESHOLD             = '1.0000E-01'
    I_MODE                  = 'C'
    I_NO_LOGGING            = ''
    I_PARALLEL_DEGREE       = 1
    I_NO_COMMIT             = ''
    I_WORK_ON_PARTITIONS    = ''
    I_REBUILD_BIA           = ''
    I_WRITE_APPLICATION_LOG = 'X'
  CHANGING
    C_T_MSG                 = L_T_MSG.

EXPORT L_T_MSG TO MEMORY ID SY-REPID.

UPDATE RSDRBATCHREP
  SET DELETEABLE = 'X'
  WHERE REPID = 'ZSEL_DELETE_QM_C10'.





ABAP program to find prev request in cube and delete

There will be cases when we cannot use the SAP built-in settings to delete a previous request, because the logic to determine the previous request is highly customised to a requirement. In such cases you can write an ABAP program which calculates the previous request based on your own logic. The following tables are used: RSICCONT (list of all requests in any particular cube) and RSSELDONE (request number, source, target, selection infoobject, selections, etc.). One example is sketched below; the logic is to select the request based on the selection conditions used in the infopackage:
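The example code itself did not survive in this post, so here is a minimal reconstruction sketch rather than the original program. It assumes RSICCONT carries the fields RNR, ICUBE and TIMESTAMP, uses a hypothetical cube ZCUBE, and calls RSSM_DELETE_REQUEST, a function module widely cited on SDN for this purpose; verify its interface on your release before use:

REPORT z_delete_prev_request.

* Find the second-newest request in a cube and delete it.
DATA: lt_cont TYPE TABLE OF rsiccont,
      ls_cont TYPE rsiccont.

* ZCUBE is a hypothetical InfoCube name.
SELECT * FROM rsiccont
  INTO TABLE lt_cont
  WHERE icube = 'ZCUBE'
  ORDER BY timestamp DESCENDING.

* Index 1 = newest request, index 2 = the previous one.
READ TABLE lt_cont INTO ls_cont INDEX 2.
IF sy-subrc = 0.
  CALL FUNCTION 'RSSM_DELETE_REQUEST'
    EXPORTING
      request  = ls_cont-rnr
      infocube = 'ZCUBE'
    EXCEPTIONS
      OTHERS   = 1.
  IF sy-subrc = 0.
    WRITE: / 'Deleted previous request', ls_cont-rnr.
  ENDIF.
ENDIF.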





TCURF, TCURR and TCURX

TCURF is always used in reference to exchange rates (in the case of currency translation). For example, say we want to convert figures from a FROM currency to a TO currency at the daily average rate (M), and we have an exchange rate of 2,642.34. The factors for this currency combination for M in TCURF are, say, 100,000:1. The effective exchange rate then becomes 0.02642.

Question (taken from SDN): Can't we have an exchange rate of 0.02642 and not use the factors from TCURF at all? I suppose we still have to maintain factors of 1:1 in TCURF if we are using an exchange rate of 0.02642, am I right? But why is this so? Can't I get rid of TCURF? What is the use of TCURF co-existing with TCURR? Answer: Normally it's used to allow greater precision in calculations, i.e. 0.00011 with no factors gives a different result to 0.00111 with a factor of 10:1. So, based on the above answer, TCURF allows greater precision in calculations; its factor should be considered before considering the exchange rate.
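As a worked step: 2,642.34 divided by the factor 100,000 gives the effective rate 0.0264234. In ABAP, the classic R/3 conversion that reads TCURR and TCURF internally is CONVERT_TO_LOCAL_CURRENCY; the sketch below assumes its parameter names as shown, and the currencies, amount and date are illustrative:

DATA: lv_amount TYPE p DECIMALS 2 VALUE '1000.00',
      lv_local  TYPE p DECIMALS 2.

* Convert 1,000.00 USD to EUR at daily average rate 'M';
* TCURR supplies the stored rate, TCURF the from/to factors.
CALL FUNCTION 'CONVERT_TO_LOCAL_CURRENCY'
  EXPORTING
    date             = sy-datum
    foreign_amount   = lv_amount
    foreign_currency = 'USD'
    local_currency   = 'EUR'
    type_of_rate     = 'M'
  IMPORTING
    local_amount     = lv_local
  EXCEPTIONS
    no_rate_found    = 1
    OTHERS           = 2.

IF sy-subrc = 0.
  WRITE: / lv_local.
ENDIF.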



-------------------------------------------------------------------------------------

TCURR

The TCURR table is generally used when we create currency conversion types. The conversion types refer to the entries in TCURR defined for each currency pair (with time reference) and get the exchange rate from the source currency to the target currency.



-------------------------------------------------------------------------------------

TCURX

This table defines the correct number of decimal places for any currency; the effect shows up in the BEx report output.

-------------------------------------------------------------------------------------



How to define F4 Order Help for infoobject for reporting

Open the Attributes tab of the infoobject definition. There you will see an 'F4 order help' column against each attribute of that infoobject:

This field defines whether and where the attribute appears in the value help. Valid values:

00: The attribute does not appear in the value help.

01: The attribute appears at the first position (to the left) in the value help.

02: The attribute appears at the second position in the value help.

03: ...

Altogether, only 40 fields are permitted in the input help. In addition to the attributes, the characteristic itself, its texts, and the compounded characteristics are also generated in the input help. The total number of these fields cannot exceed 40.

The infoobjects are changed accordingly. For example, for infoobject 0VENDOR: if 0COUNTRY (an attribute of 0VENDOR) should not be shown in the F4 help of 0VENDOR, then mark 00 against the attribute 0COUNTRY in the infoobject definition of 0VENDOR.



Dimension Size Vs Fact Size

The current size of all dimensions can be monitored in relation to the fact table by running report SAP_INFOCUBE_DESIGNS in transaction SE38. We can also test the InfoCube design with RSRV tests, which report the dimension-to-fact ratio.



The ratio of a dimension to the fact table should be less than 10%. In the report, dimension tables look like /BI[C|0]/D[xxx] and fact tables look like /BI[C|0]/[E|F][xxx].

Use T-CODE LISTSCHEMA to show the different tables associated with a cube.



When a dimension grows very large in relation to the fact table, the DB optimizer can't choose an efficient access path to the data, because the guideline that each dimension should hold less than 10 percent of the fact table's records has been violated.



The condition of large data growth in a dimension is called a degenerated dimension. To fix it, move the characteristics to different dimensions; this can only be done when there is no data in the InfoCube.



Note: if you have a requirement to include item-level details in the cube, the dimension-to-fact ratio will obviously be higher and you can't help that. But in that case you can put the item characteristic into a line item dimension. A line item dimension contains only one characteristic; since there is only one characteristic in the dimension, the fact table entry can link directly with the SID of the characteristic, without using any DIM ID (the DIM ID in the dimension table usually connects the SIDs of the characteristics with the fact table). Since the link bypasses the dimension table, queries perform faster.
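To spot-check a single cube's ratio without the standard report, the row counts can be compared directly. A minimal sketch; the dimension and fact table names (/BIC/DZSALES1, /BIC/FZSALES) are hypothetical:

DATA: lv_dim_tab  TYPE tabname VALUE '/BIC/DZSALES1',  " hypothetical dimension table
      lv_fact_tab TYPE tabname VALUE '/BIC/FZSALES',   " hypothetical F fact table
      lv_dim_cnt  TYPE i,
      lv_fact_cnt TYPE i,
      lv_ratio    TYPE p DECIMALS 2.

* Count rows in both tables via dynamic table names.
SELECT COUNT(*) FROM (lv_dim_tab)  INTO lv_dim_cnt.
SELECT COUNT(*) FROM (lv_fact_tab) INTO lv_fact_cnt.

IF lv_fact_cnt > 0.
  lv_ratio = lv_dim_cnt * 100 / lv_fact_cnt.
  WRITE: / 'Dimension/fact ratio (%):', lv_ratio.
  IF lv_ratio > 10.
    WRITE: / 'Guideline violated: consider remodelling or a line item dimension.'.
  ENDIF.
ENDIF.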



BW Main tables

Extractor-related tables:

ROOSOURCE - on the source system (R/3), filter by OBJVERS = 'A'. Contains the data source, DS type, delta type, extraction method (table or function module), etc.

RODELTAM - Delta type lookup table.

ROIDOCPRMS - Control parameters for data transfer from the source system; the result of "SBIW -> General settings -> Maintain Control Parameters for Data Transfer" on the OLTP system.

MAXSIZE: maximum size of a data packet in kilobytes

STATFRQU: frequency with which status IDocs are sent

MAXPROCS: maximum number of parallel processes for data transfer

MAXLINES: maximum number of lines in a data packet

MAXDPAKS: maximum number of data packages in a delta request

SLOGSYS: source system



Query related tables:

RSZELTDIR: filter by OBJVERS = 'A'; DEFTP: REP = query, CKF = calculated key figure. Reporting component elements: query, variable, structure, formula, etc.

RSZELTTXT: similar to RSZELTDIR; texts of reporting component elements.

To get a list of query elements built on a cube: RSZELTXREF, filter by OBJVERS = 'A', INFOCUBE = [cubename].

To get all queries of a cube: RSRREPDIR, filter by OBJVERS = 'A', INFOCUBE = [cubename].

To get the query change status (version, last changed by, owner) of a cube: RSZCOMPDIR, OBJVERS = 'A'.
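For example, all active queries on a cube could be listed like this (a sketch; ZCUBE is a hypothetical InfoCube name, and COMPID is assumed to hold the query's technical name):

DATA: lt_rep TYPE TABLE OF rsrrepdir,
      ls_rep TYPE rsrrepdir.

* All active queries defined on the cube.
SELECT * FROM rsrrepdir INTO TABLE lt_rep
  WHERE objvers  = 'A'
    AND infocube = 'ZCUBE'.  " hypothetical cube name

LOOP AT lt_rep INTO ls_rep.
  WRITE: / ls_rep-compid.  " technical name of the query
ENDLOOP.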



Workbooks related tables:

RSRWBINDEX List of binary large objects (Excel workbooks)

RSRWBINDEXT Titles of binary objects (Excel workbooks)

RSRWBSTORE Storage for binary large objects (Excel workbooks)

RSRWBTEMPLATE Assignment of Excel workbooks as personal templates

RSRWORKBOOK 'Where-used list' for reports in workbooks



Web templates tables:

RSZWOBJ Storage of the Web Objects

RSZWOBJTXT Texts for Templates/Items/Views

RSZWOBJXREF Structure of the BW objects in a template

RSZWTEMPLATE Header table for BW HTML templates



Data target loading/status tables:

rsreqdone, " Request-Data

rsseldone, " Selection for current Request

rsiccont, " Request posted to which InfoCube

rsdcube, " Directory of InfoCubes / InfoProvider

rsdcubet, " Texts for the InfoCubes

rsmonfact, " Fact table monitor

rsdodso, " Directory of all ODS Objects

rsdodsot, " Texts of ODS Objects

sscrfields. " Fields on selection screens



Tables holding characteristics:

RSDCHABAS: fields

OBJVERS -> A = active; M=modified; D=delivered

(business content characteristics that have only a D version and no A version are not activated yet)

TXTTABFL -> = X -> has texts

ATTRIBFL -> = X -> has attributes

RODCHABAS: with fields TXTSHFL,TXTMDFL,TXTLGFL,ATTRIBFL

RSREQICODS: requests in ODS

RSMONICTAB: all requests

Transfer structures live in PSAPODSD:

/BIC/B0000174000 Transfer structure

Master Data lives in PSAPSTABD

/BIC/HXXXXXXX Hierarchy:XXXXXXXX

/BIC/IXXXXXXX SID Structure of hierarchies:

/BIC/JXXXXXXX Hierarchy intervals

/BIC/KXXXXXXX Conversion of hierarchy nodes - SID:

/BIC/PXXXXXXX Master data (time-independent):

/BIC/SXXXXXXX Master data IDs:

/BIC/TXXXXXXX Texts: Char.

/BIC/XXXXXXXX Attribute SID table:



Master Data views

/BIC/MXXXXXXX master data tables:

/BIC/RXXXXXXX View SIDs and values:

/BIC/ZXXXXXXX View hierarchy SIDs and nodes:

InfoCube names in PSAPDIMD:

/BIC/Dcube_name1 Dimension 1

...

/BIC/Dcube_nameA Dimension 10

/BIC/Dcube_nameB Dimension 11

/BIC/Dcube_nameC Dimension 12

/BIC/Dcube_nameD Dimension 13

/BIC/Dcube_nameP Data Packet

/BIC/Dcube_nameT Time

/BIC/Dcube_nameU Unit

PSAPFACTD

/BIC/Ecube_name Fact table (inactive)

/BIC/Fcube_name Fact table (active)



ODS Table names (PSAPODSD)

BW 3.5:

/BIC/AXXXXXXX00 ODS object XXXXXXX : Active records

/BIC/AXXXXXXX40 ODS object XXXXXXX : New records

/BIC/AXXXXXXX50 ODS object XXXXXXX : Change log



Previously:

/BIC/AXXXXXXX00 ODS object XXXXXXX : Active records

/BIC/AXXXXXXX10 ODS object XXXXXXX : New records



T-code tables:

tstc -- table of transaction codes, texts and program names

tstct -- t-code texts
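For instance, a transaction's text can be looked up like this (a sketch; TSTCT is keyed by language and transaction code):

DATA lv_ttext TYPE tstct-ttext.

* Read the description of transaction RSA1 in the logon language.
SELECT SINGLE ttext FROM tstct INTO lv_ttext
  WHERE sprsl = sy-langu
    AND tcode = 'RSA1'.

IF sy-subrc = 0.
  WRITE: / 'RSA1:', lv_ttext.
ENDIF.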



1. What are tickets? Give an example.

The typical tickets in production support work could be:

1. Loading any of the missing master data attributes/texts.

2. Create ADHOC hierarchies.

3. Validating the data in Cubes/ODS.

4. If any of the loads runs into errors then resolve it.

5. Add/remove fields in any of the master data/ODS/Cube.

6. Data source Enhancement.

7. Create ADHOC reports.

1. Loading any of the missing master data attributes/texts - This would be done by scheduling the info packages for the attributes/texts mentioned by the client.

2. Create ADHOC hierarchies. - Create hierarchies in RSA1 for the info-object.

3. Validating the data in Cubes/ODS. - By using the Validation reports or by comparing BW data with R/3.

4. If any of the loads runs into errors then resolve it. - Analyze the error and take suitable action.

5. Add/remove fields in any of the master data/ODS/Cube. - Depends upon the requirement

6. Data source Enhancement.

7. Create ADHOC reports. - Create some new reports based on the requirement of client.

Tickets are the tracking tool by which users track the work we do. A ticket can be a change request, a data load issue, or whatever else. Tickets are typically classified as critical or moderate; critical can mean 'needs to be solved within a day or half a day', depending on the client. After solving the issue, the ticket is closed by informing the client. Tickets are raised during a support project for issues, problems, etc. If a support person faces an issue, he requests the operator to raise a ticket; the operator raises it and assigns it to the respective person. 'Critical' means the most complicated issues; how you measure this depends on the client. The concept of a ticket varies from contract to contract between companies. Generally, tickets raised by the client carry a priority, like high priority, low priority and so on. A high-priority ticket has to be resolved ASAP; a low-priority ticket is considered only after attending to the high-priority tickets.

Checklist for a support project of BPS - to start the checklist:

1) InfoCubes / ODS / data targets
2) planning areas
3) planning levels
4) planning packages
5) planning functions
6) planning layouts
7) global planning sequences
8) profiles
9) list of reports
10) process chains
11) enhancements in update routines
12) any ABAP programs to be run and their logic
13) major BPS development issues
14) major BPS production support issues and resolutions



2. What are the tools to download tickets from the client? Are there any standard tools, or does it depend upon the company or client?

Yes, there are some tools for that; we use HP OpenView, but it depends on the client what they use. There are many tools available, and some clients develop their own tools using Java, ASP and other software. Some clients use just Lotus Notes. Generally 'Vantive' is used for tracking user requests and tickets.

It has a Vantive ticket ID, a field for the description of the problem, severity for the business, priority for the user, the group assigned, etc.

Different technical groups will have different group IDs.

The user talks to the Level 1 helpdesk and they raise a ticket.

If they can solve the issue, fine; otherwise the helpdesk assigns the ticket to the Level 2 technical group.

The ticket status keeps changing: open, working, resolved, on hold, back from hold, closed, etc. The way we handle tickets varies depending on the client. Some companies use SAP CS to handle tickets; we have been using Vantive. The ticket is handled with a change request; when you get the ticket, it comes with the priority level with which it is to be handled, a ticket ID and so on. It's totally a client-specific tool. The common features are: a ticket ID, priority, consultant ID/name, user ID/name, date of posting, resolution time, etc.

Ideally there is also a knowledge repository to search for a similar problem and the solutions given if it had occurred earlier. You can also have training manuals (with screenshots) for simple transactions like viewing a query, saving a workbook etc., so that such queries can be addressed by using them.

When the problem is logged on to you as a consultant, you need to analyze it, check whether a similar problem occurred earlier and reuse the ready solutions, find out the exact server on which it occurred, etc.

You have to solve the problem (assuming you have access to the dev system), post the solution and ask the user to test it after preliminary testing from your side. Get it transported to production once tested, and post the ticket as closed.



3. What are user authorizations in SAP BW?

Authorizations are very important; for example, you don't want an important financial report to be available to all users. You can have authorization at the object level: for this you mark the object as authorization-relevant in the RSD1 and RSSM t-codes. Similarly, you set up authorizations for certain users by assigning them the required authorizations in the PFCG t-code: you create a role, include the t-codes, BEx reports etc. into the role, and assign this role to the user ID.
 
RFC connection lost.


A) We can check it in the SM59 t-code:

RFC Destinations

+ R/3 connections

CRD client (our R/3 client)

Double-click -> Test Connection (in the menu)

Invalid characters while loading.

A) Change them in the PSA and load again.

ALEREMOTE user is locked.

1) Ask your Basis team to release the user; it is mostly ALEREMOTE.

2) The password was changed.

3) Too many incorrect attempts to log in as ALEREMOTE.

4) Use the SM12 t-code to find out whether there are any locks.

Lower case letters not allowed.

A) Uncheck the 'Lowercase letters' checkbox under the 'General' tab in the infoobject.

Object locked.

A) It might be locked by some other process or user. Also check for authorizations.

"Non-updated Idocs found in Source System".

A) Check whether any tRFCs are stuck in the source system; you can check this in SM58. If no tRFCs are there, then better trigger the load again, i.e. change the status to red, delete the bad request and run the load. Check whether the load is delta or full. If it is full, just go ahead with the above step. If it is delta, check whether the failure is on the source system side or the BW side. If it is the source system, go for a repeat delta. If it is BW, then you need to reset the data mart status.

Extraction job aborted in R/3.

A) It might have been cancelled because it ran longer than expected, or it may have been cancelled by R/3 users if it was hampering performance.

Repeat of last delta not possible.

A) Repeat of last delta is not an option but a mandate in case the delta run failed; in such a case we can't run a simple delta again. The system runs a repeat of the last delta so as to collect the failed delta's data again, as well as any data collected since the failure. For a repeat of the last delta to run, the previous delta must have failed. Let's assume that, in your case, the delta either failed or was deleted. If it was deleted, we need to catch hold of the request and set its status to red; this tells the system that the delta failed (although it ran successfully, you are forcing this message to the system). Now, if you run the delta infopackage, it will fetch the data related to the 22nd plus all the changes from then on till today. An essential point: you should not have run any deltas after the 22nd till now; only then will the repeat of the last delta work. Otherwise the only option is to run a repair full request with data selections, if we know the selection parameters.

Datasource not replicated

A) Replicate the datasource from R/3 through the source system in the AWB, assign it to the infosource and activate it again.

Datasource/transfer structure not active.

A) Use the function module RS_TRANSTRU_ACTIVATE_ALL to activate it.

ODS activation error.

A) ODS activation errors can occur mainly due to the following reasons:

1. Invalid characters (#-like characters)

2. Invalid data values for units/currencies etc.

3. Invalid values for data types of characteristics and key figures

4. Errors in generating SID values for some data

SAP BW Important Transaction codes

1 RSA1 Administrator Workbench


2 RSA11 Calling up AWB with the IC tree

3 RSA12 Calling up AWB with the IS tree

4 RSA13 Calling up AWB with the LG tree

5 RSA14 Calling up AWB with the IO tree

6 RSA15 Calling up AWB with the ODS tree

7 RSA2 OLTP Metadata Repository

8 RSA3 Extractor Checker

9 RSA5 Install Business Content

10 RSA6 Maintain DataSources

11 RSA7 BW Delta Queue Monitor

12 RSA8 DataSource Repository

13 RSA9 Transfer Application Components

14 RSD1 Characteristic maintenance

15 RSD2 Maintenance of key figures

16 RSD3 Maintenance of units

17 RSD4 Maintenance of time characteristics

18 RSBBS Maintain Query Jumps (RRI Interface)

19 RSDCUBE Start: InfoCube editing

20 RSDCUBED Start: InfoCube editing

21 RSDCUBEM Start: InfoCube editing

22 RSDDV Maintaining Aggregates

23 RSDIOBC Start: InfoObject catalog editing

24 RSDIOBCD Start: InfoObject catalog editing

25 RSDIOBCM Start: InfoObject catalog editing

26 RSDL DB Connect - Test Program

27 RSDMD Master Data Maintenance w.Prev. Sel.

28 RSDMD_TEST Master Data Test

29 RSDMPRO Initial Screen: MultiProvider Proc.

30 RSDMPROD Initial Screen: MultiProvider Proc.

31 RSDMPROM Initial Screen: MultiProvider Proc.

32 RSDMWB Customer Behavior Modeling

33 RSDODS Initial Screen: ODS Object Processing

34 RSIMPCUR Load Exchange Rates from File

35 RSINPUT Manual Data Entry

36 RSIS1 Create InfoSource

37 RSIS2 Change InfoSource

38 RSIS3 Display InfoSource

39 RSISET Maintain InfoSets

40 RSKC Maintaining the Permitted Extra Chars

41 RSLGMP Maintain RSLOGSYSMAP

42 RSMO Data Load Monitor Start

43 RSMON BW Administrator Workbench

44 RSOR BW Metadata Repository

45 RSORBCT BI Business Content Transfer

46 RSORMDR BW Metadata Repository

47 RSPC Process Chain Maintenance

48 RSPC1 Process Chain Display

49 RSPCM Monitor daily process chains

50 RSRCACHE OLAP: Cache Monitor

51 RSRT Start of the report monitor

52 RSRT1 Start of the Report Monitor

53 RSRT2 Start of the Report Monitor

54 RSRTRACE Set trace configuration

55 RSRTRACETEST Trace tool configuration

56 RSRV Analysis and Repair of BW Objects

57 SE03 Transport Organizer Tools

58 SE06 Set Up Transport Organizer

59 SE07 CTS Status Display

60 SE09 Transport Organizer

61 SE10 Transport Organizer

62 SE11 ABAP Dictionary

63 SE18 Business Add-Ins: Definitions

64 RSDS Data Source Repository

65 SE19 Business Add-Ins: Implementations

66 SE19_OLD Business Add-Ins: Implementations

67 SE21 Package Builder

68 SE24 Class Builder

69 SE80 Object Navigator

70 RSCUSTA Maintain BW Settings

71 RSCUSTA2 ODS Settings

72 RSCUSTV*

73 RSSM Authorizations for Reporting

74 SM04 User List

75 SM12 Display and Delete Locks

76 SM21 Online System Log Analysis

77 SM37 Overview of job selection

78 SM50 Work Process Overview

79 SM51 List of SAP Systems

80 SM58 Asynchronous RFC Error Log

81 SM59 RFC Destinations (Display/Maintain)

82 LISTCUBE List viewer for InfoCubes

83 LISTSCHEMA Show InfoCube schema

84 WE02 Display IDoc

85 WE05 IDoc Lists

86 WE06 Active IDoc monitoring

87 WE07 IDoc statistics

88 WE08 Status File Interface

89 WE09 Search for IDoc in Database

90 WE10 Search for IDoc in Archive

91 WE11 Delete IDocs

92 WE12 Test Modified Inbound File

93 WE14 Test Outbound Processing

94 WE15 Test Outbound Processing from MC

95 WE16 Test Inbound File

96 WE17 Test Status File

97 WE18 Generate Status File

98 WE19 Test tool

99 WE20 Partner Profiles

100 WE21 Port definition

101 WE23 Verification of IDoc processing

102 DB02 Tables and Indexes Monitor

103 DB14 Display DBA Operation Logs

104 DB16 Display DB Check Results

105 DB20 Update DB Statistics

106 KEB2 Display detailed info on CO-PA DataSource (R/3)

107 RSD5 Edit InfoObjects

108 SM66 Global work process Monitor

109 SM12 Display and delete locks

110 OS06 Local Operating System Activity

111 RSKC Maintaining the Permitted Extra Chars

112 SMQ1 qRFC Monitor (Outbound Queue)