
Wednesday, April 21, 2010

Schedule V3 Run In SAP R/3

Description:


V1 - Synchronous update

V2 - Asynchronous update

V3 - Batch asynchronous update



These are different work processes on the application server that take the update LUW (which may contain several database-changing SQL statements) from the running program and execute it. They are separated to optimize transaction processing capabilities.
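To make this concrete, here is a minimal ABAP sketch of how a program hands an update LUW to these work processes; the function module Z_UPDATE_DOCUMENT and the structure ZSDOC_HEADER are hypothetical, and whether the module runs as V1 or V2 is set in the function module's attributes:

DATA ls_header TYPE zsdoc_header.  " hypothetical structure

* registered for the update LUW, not executed immediately
CALL FUNCTION 'Z_UPDATE_DOCUMENT' IN UPDATE TASK
  EXPORTING
    is_header = ls_header.

* COMMIT WORK hands the LUW to an update work process for execution
COMMIT WORK.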



Synchronous Updating (V1 Update)-->>

The statistics update is made synchronously with the document update.

If problems occur during updating that terminate the statistics update, the original documents are NOT saved. The cause of the termination should be investigated and the problem solved; the documents can then be entered again.



Asynchronous Updating (V2 Update)-->>

With this update type, the document update is made separately from the statistics update. A termination of the statistics update has NO influence on the document update (see V1 Update).



Asynchronous Updating (V3 Update) -->>

With this update type, the statistics update is also decoupled from the document update. The difference from the V2 update lies in the time schedule: if the V3 update is active, the update can be executed at a later time.



If you create or change a purchase order (ME21N/ME22N), then when you press 'SAVE' and see a success message (PO.... changed..), the update to the underlying tables EKKO/EKPO has already happened (before you saw the message). This update was executed in the V1 work process.



There are some statistics-collecting tables in the system that capture data for reporting. For example, LIS table S012 stores purchasing data (the same data as EKKO/EKPO, stored redundantly but in a different structure to optimize reporting). These tables are updated with the transaction you just posted, in a V2 process. Depending on system load, this may happen a few seconds later (after you saw the success message). You can see the pending V1/V2 update requests in SM13 (SM12 shows the related lock entries).



V3 is specifically for BW extraction. The update LUWs for these are sent to V3 but are not executed immediately; you have to schedule a job (e.g. in the LBWE definitions) to process them. This, again, is to optimize performance.
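Normally you schedule this through the LBWE job control itself, but the generic background-job pattern below shows what is happening underneath. A minimal sketch, assuming the collective-run report for your application is RMBWV302 (application 02, purchasing) - check the actual report name in your system:

DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'LIS_BW_V3_APPL_02',
      lv_jobcount TYPE tbtcjob-jobcount.

* open a background job
CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = lv_jobname
  IMPORTING
    jobcount = lv_jobcount.

* add the collective run report as a job step
SUBMIT rmbwv302 VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.

* close the job and start it immediately
CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobname   = lv_jobname
    jobcount  = lv_jobcount
    strtimmed = 'X'.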



V2 and V3 are separated from V1 because they are not as real-time critical (they update statistical data). If all these updates were put together in one LUW, system performance (concurrency, locking, etc.) would suffer.



Serialized V3 update runs after V2 has happened (that is how the code running these updates is written), so if a transaction produces both V2 and V3 updates and the V2 update fails or is still waiting, the V3 update will not happen yet.



BTW, 'serialized' V3 has since been discontinued; in later releases of the PI (plug-in) you will only have unserialized V3.



In contrast to the V1 and V2 updates, no individual documents are updated. The V3 update is therefore also described as a collective update.



A transaction's data can thus end up in four sets of tables:

1. Application tables (R/3 tables)

2. Statistical tables (for reporting purposes)

3. Update tables

4. BW queue (delta queue)



Statistical tables are for reporting on R/3, while update tables are for BW extraction. Is data stored redundantly in these two (three, if you include the application tables) sets of tables?



We can say: yes, it is, because of the following.

The difference is that update tables are temporary: V3 jobs continually empty and refill them (as I understand it). This is different from the statistics tables, which keep accumulating all the data. Update tables can be thought of as a staging area on R/3 from which data is consolidated into packages and sent to the delta queue (by the V3 job).



Update tables can be bypassed (if you use 'direct' or 'queued' delta instead of V3), sending the updates (data) directly to the BW delta queue. V3 is, however, better for performance, so it is one option along with the others, and it is the one that uses update tables.



Statistical tables have existed since the pre-BW era (for analytical reporting) and continue to be used when customers want their reporting on R/3.



The structure of a statistical table might differ from that of the update table/BW queue; so, even though they are based on the same data, they might hold different subsets of the same superset.



V3 collective update means that the updates are processed only when the V3 job has run. I am not sure what 'synchronous V3' refers to; do you mean serialized V3?



At the time of the OLTP transaction, the update entry is made in the update table. Once you have posted the transaction, it sits in the update table waiting for the V3 job to run. When the V3 job runs, it picks these entries up from the update table and pushes them into the delta queue, from where the BW extraction job extracts them.

BW Statistics

Description:


BW Statistics is nothing but the SAP-delivered MultiProvider (0BWTC_C10) and the BasisCubes beneath it, which collect statistics about the objects you have developed. We have to enable and activate BW Statistics for the particular objects whose statistics we want to see, so that the required data is gathered. This by itself will in no way improve performance; but we can analyze the statistics data and, based on it, decide on ways to improve performance, i.e. setting the read mode, compression, partitioning, creation of aggregates, etc.



BW Statistics is a tool

- for the analysis and optimization of Business Information Warehouse processes;

- to get an overview of the BW load and analysis processes.



The following objects can be analyzed here:

Roles

SAPBWusers

Aggregates

Queries

InfoCubes

InfoSources

ODS

DataSources

InfoObjects

There are two sub-areas:

1. BW Statistics

2. BW Data Slice

The BW Statistics sub-area is the more important of the two.

BW Statistics data is stored in the Business Information Warehouse.



This information is provided by a MultiProvider (0BWTC_C10), which is based on several BW BasisCubes.



OLAP (0BWTC_C02)

OLAP Detail Navigation (0BWTC_C03)

Aggregates (0BWTC_C04)

WHM (0BWTC_C05)

Metadata (0BWTC_C08)

Condensing InfoCubes (0BWTC_C09)

Deleting Data from an InfoCube (0BWTC_C11)

Use BW Data Slice to get an overview of the requested characteristic combinations for particular InfoCubes and of the number of records that were loaded. This information is based on the following BasisCubes:

-BW Data Slice

-Requests in the InfoCube

BW Data Slice

BW Data Slice contains information about which characteristic combinations of an InfoCube are to be loaded and with which request, that is, with which data request.



Requests in the InfoCube

The InfoCube "Requests in the InfoCube" does not contain any characteristic combinations. You can create queries on this InfoCube that return the number of data records for the corresponding InfoCube and for the individual requests. The data flow falls into the areas below:

- data load and data management

- data analysis

Migration of BI 3.5 Modeling to BI 7.x

Use Of Scenario : This document gives a step-by-step guide for migrating 3.5 modeling to the new BI 7.x. We have to migrate all the modeling to BI 7.x because the Business Content delivers 3.x modeling with transfer rules, update rules, InfoSources and the old (3.5) DataSources.




Description:

Prerequisites for the migration.



Copy the InfoProvider to a Z InfoProvider.



Copy all the transformation routine code to other documents, to be on the safe side, because after migration all the ABAP code is shifted to OO (class-based) code.



Please make sure that the DataSource is migrated last.



Please find the step-by-step guide with screenshots below.



1. First, copy the InfoProvider and make another copy with a Z name (e.g. original name 0FIGL_O02 becomes ZFIGL_O02). Give it the Z name and also put it in the Z InfoArea.

2. Then migrate the update rules with a Z InfoSource. Please make a separate copy of any transformation routine you will require later on, like "Interest Calculation Numerator Days 1 (Agreed) KF":



PROGRAM UPDATE_ROUTINE.

*$*$ begin of global - insert your declaration only below this line *-*
* TABLES: ...
* DATA: ...
*$*$ end of global - insert your declaration only before this line *-*

FORM compute_data_field
  TABLES   MONITOR STRUCTURE RSMONITOR "user defined monitoring
  USING    COMM_STRUCTURE LIKE /BIC/CS80FIAR_O03
           RECORD_NO LIKE SY-TABIX
           RECORD_ALL LIKE SY-TABIX
           SOURCE_SYSTEM LIKE RSUPDSIMULH-LOGSYS
  CHANGING RESULT LIKE /BI0/V0FIAR_C03T-NETTAKEN
           RETURNCODE LIKE SY-SUBRC
           ABORT LIKE SY-SUBRC. "set ABORT <> 0 to cancel update
*
*$*$ begin of routine - insert your code only below this line *-*
* fill the internal table "MONITOR" to make monitor entries
* result value of the routine
  IF COMM_STRUCTURE-FI_DOCSTAT EQ 'C'.
    RESULT = COMM_STRUCTURE-CLEAR_DATE - COMM_STRUCTURE-NETDUEDATE.
  ELSE.
    RESULT = 0.
  ENDIF.
* if the returncode is not equal zero, the result will not be updated
  RETURNCODE = 0.
* if abort is not equal zero, the update process will be canceled
  ABORT = 0.
*$*$ end of routine - insert your code only before this line *-*
*
ENDFORM.
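After migration, this FORM-based logic is regenerated as a method of a transformation class. As a rough sketch (the SOURCE_FIELDS/RESULT parameter names and the exception class follow the 7.x routine template as far as I recall; treat the method name as generated, not fixed), the same calculation might look like this:

METHOD compute_result.  " generated name depends on the target field
*   SOURCE_FIELDS replaces COMM_STRUCTURE; RESULT is the exporting parameter
    IF source_fields-fi_docstat = 'C'.
      result = source_fields-clear_date - source_fields-netduedate.
    ELSE.
      result = 0.
    ENDIF.
*   instead of setting ABORT, errors are raised as exceptions, e.g.
*   RAISE EXCEPTION TYPE cx_rsrout_abort.
ENDMETHOD.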



Then right-click the update rules, Additional Functions ---> Create Transformations.

Then use the "Copy InfoSource 3.x to New InfoSource" option to make a new copy of the InfoSource.

Give that InfoSource a Z name; this will be the new copy of the InfoSource.



3. Then map and activate (most of the fields are mapped automatically). Then right-click the transfer rules, Additional Functions --> Create Transformations, and assign the newly created InfoSource using the "Use Available InfoSource" option. Then map and activate.



4. Then right-click the DataSource, click Migrate, and choose "With Export". Please select only "With Export".



5. Now the migration part is completed; have a look at the routine code that was provided.



Some Tips:

Do not use:

DATA: BEGIN OF itab OCCURS n,
        fields...,
      END OF itab.



Replace it with:

TYPES: BEGIN OF line_type,
         fields...,
       END OF line_type.
DATA itab TYPE TABLE OF line_type INITIAL SIZE n.



Internal tables with header lines are not allowed. (The header line of an internal table is a default work area that the system uses when looping over the internal table.)



Short forms of internal table line operations are not allowed. For example, you cannot use the syntax INSERT TABLE itab; however, you can use INSERT wa INTO TABLE itab.



Transformations do not permit the READ itab statement in which the system reads values into the header line.

For example, the code READ TABLE itab. is now outdated, but you can use READ TABLE itab WITH KEY . . . INTO wa.



Calling external subroutines using the syntax PERFORM FORM(PROG) is not allowed (in this syntax, FORM is a subroutine in the program PROG); in the OO context of transformations you call a method instead, as sketched below.
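Putting these tips together, here is a minimal sketch of the work-area idioms that transformations expect; the line type and field names are illustrative only:

* declare the table type explicitly - no header line
TYPES: BEGIN OF ty_line,
         matnr TYPE c LENGTH 18,  " hypothetical field
       END OF ty_line.

DATA: itab TYPE STANDARD TABLE OF ty_line,
      wa   TYPE ty_line.

* allowed: insert via an explicit work area
wa-matnr = 'MAT-001'.
INSERT wa INTO TABLE itab.

* allowed: read into an explicit work area
READ TABLE itab WITH KEY matnr = 'MAT-001' INTO wa.
IF sy-subrc = 0.
* record found - process wa here
ENDIF.

* instead of an external PERFORM form(prog), call a method,
* e.g. zcl_helper=>do_something( wa ).  " hypothetical class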

BW Performance Tuning

Use Of Scenario : This is a very important scenario. With a performance review or performance tuning project we may be able to avoid unnecessary investments in additional hardware through state-of-the-art BW system design, server and database tuning.




Description:



Fast and reliable access to your information is one of the key success factors for any Business Intelligence or Data Warehousing application. Unfortunately, performance tuning is one of those aspects that are often overlooked during implementations.



With a performance review or performance tuning project you may be able to avoid unnecessary investments in additional hardware through state-of-the-art BW system design, server and database tuning. We see that most vendors and customers solve their performance issues by extending their hardware capabilities. element61 believes that an integrated architecture and a well-performing data model are at least as important to achieve the expected performance and to decrease the cost.



We have a very experienced team on this topic and we are specialized in:



•State of the art SAP BW applications architecture. Our methodology has incorporated generally accepted best practices from data warehousing and Business Intelligence into a framework that is SAP BW specific.



•The dimensional model of InfoCubes is the most underestimated key factor that influences the performance of reporting and data loading. element61 has developed the Dimensional Modeling Optimizer, a SAP BSP application for BW that significantly reduces the development time of the optimal dimensional model of InfoCubes and that turns the art of data modeling into a science.



•Performance tuning of SAP BW systems is one of our specialties. It requires a specific mix of competencies in areas like database tuning, SAP BW system configuration and application architecture.



Increased end-user satisfaction, a better acceptance of your BW application, lower hardware and maintenance costs are just a few benefits worth mentioning. A well performing Business Warehouse application will enable executives and managers to make sound business decisions on time.



Performance Management:

element61 are firm believers and early adopters of hosted Business Intelligence & Performance Management solutions. We believe that in current BI & CPM initiatives too much effort, time and money is lost in setting up (and maintaining) the environments, while focus of management should be on data definition, data modeling, user requirements, data quality and analysis of information. Software-as-a-Service will also dramatically change the face of the BI & CPM industry.



Hosted Business Intelligence & Performance Management solutions take away the hurdle of :

•Selecting the right hardware

•Investing in the right hardware

•Selecting & investing in the right Operating System

•Selecting & investing in the right Database Management System (RDBMS)

•Installation of OS & RDBMS software

•Installation of the BI or CPM software

•Configuration of the BI or CPM software

•Integration with OS & RDBMS

•Security setup and maintenance

•Performance monitoring & tuning

•Support for keeping the systems "running"

•Patching of software

•Upgrading & migration of software & content

We proactively invest in pioneering in this area, regardless of the BI & CPM technologies you want to use. These can be the leading Performance Management suites or pure-play hosted BI & CPM solutions (like Birst, Pivotlink, ...). Also, hardware, operating system and RDBMS can be hosted by your organisation and managed by us, hosted by a dedicated hosting company, or based on components "in the cloud".



element61 uses its vast experience to innovate in ways to more quickly deliver on the promise of Performance Management.

Integrating BO-Xcelsius file with BO-Crystal Reports

Use Of Scenario : To display a formatted report (Crystal Report) with the Xcelsius file embedded within the Crystal report.




Step Wise process:



Introduction:

This document shows how to build a formatted report (Crystal Report) with an Xcelsius file embedded within it. Only Crystal Reports 2008 supports embedding a Flash file. Furthermore, the Xcelsius (SWF) file inside the Crystal report can be operated on the same page. Here an Excel sheet will act as the database for the Crystal report; the database for the Crystal report can, however, be anything, like SAP R/3 or BW data, etc.



Process Steps:

1. Create an Excel database with some sample data and save it to a local disk.


2. Now the Xcelsius file has to be designed and the SWF file generated. Open Xcelsius and place a List Box, a Column Chart and a Gauge on the canvas. The List Box will display all the states; the Column Chart will display the population in 2009 and 2010 of the selected state; and the Gauge will point to the total population of the selected state.



3. Select the List Box and apply the user-defined properties.



4. Map the Label ranges by selecting column "A". Select "Filtered Rows" as the insertion type. Map the Source Data ranges by selecting columns "A", "B", "C" and "D". Map the Destination ranges by selecting columns "E", "F", "G" and "H".



5. Select the Column Chart and apply the properties accordingly.



6. Create 2 series and name them "2009" and "2010". Select the 2009 series and map cell "F1" to Values (Y); select the 2010 series and map cell "G1" to Values (Y). Map the Category Labels (X) to cell "E1".



7. Select the Gauge and apply the properties that we have to define.



8. Map the data to cell "H1". Change the maximum limit to "500". In the Alerts tab, check "Enable Alerts" and change "As Percent of Target" to 500.



9. Open the Data Manager and add a "Crystal Report Data Consumer" connection, then map the cells as below: map the Row Header ranges by selecting column "A"; map the Data ranges by selecting columns "B", "C" and "D". Save the Xcelsius file (.XLF) and export it as SWF.



10. Now open Crystal Reports 2008 and select the Report Wizard. A new connection has to be opened, with the Excel file mentioned above acting as the database.



11. Select "Access/Excel" as the new connection and enter the path of the Excel file, setting the database type to "Excel 8.0". By clicking "Next", several properties can be assigned step by step, such as which fields to display in the report, the template to be used, and the fields to select if a summary is needed; then click "Finish".



12. On clicking "Finish", the report is displayed in preview mode.





13. Now the Xcelsius (SWF) file has to be integrated, and the formatted report designed according to the users' needs. Go to design mode, click INSERT -> FLASH, select the SWF file, and place it in the Crystal report.



14. Now the values have to be mapped from the database that we are using for the Crystal report. Right-click the SWF file, select "Flash Data Expert", and map the values for the SWF file.



15. Drag and drop the Sheet1_.State field to Insert Row Label. Drag and drop the Sheet1_.Population-2009, Sheet1_.Population-2010 and Total fields to Insert Data Value fields.

16. Now click "OK" and save the Crystal report. The Crystal report can be exported to "PDF" or "HTML" so that it can be visualized interactively. From the export screen, enter the path and export the report as HTML 4.0. Open the HTML file from the export path, and the report is generated as below.



17. When the database (Excel sheet) is updated with more records, clicking the "Refresh Data" icon in the Crystal report updates the data in the report. The Xcelsius file is refreshed as well and displays in the same manner as the Crystal report.



18. Finally, after refreshing the Crystal report, save it and export it again in HTML 4.0 format.

DB Connect Usage in SAP BI 7.0

Use Of Scenario : Here we learn the main usage of DB Connect in BI 7.0.




Step Wise process:



Introduction :



In SAP NetWeaver BI 7.0, we'll study how to implement DB Connect, rather than the common usage of flat files. Using DB Connect, BI offers flexible options for extracting data directly into BI from tables and views in database management systems that are connected to BI via connections other than the default connection.



The DB Connect enhancements to the database interface allow you to transfer data straight into BI from the database tables or views of external applications. You can use tables and views in database management systems that are supported by SAP. You use DataSources to make the data known to BI; the data is then processed in BI in the same way as data from all other sources.



It’s to be noted that SAP DB Connect only supports certain database management systems (DBMS). The following is the list of supported DBMS:

MaxDB [previously SAP DB]

Informix

Microsoft SQL Server

Oracle

IBM DB2/390, IBM DB2/400, IBM DB2 UDB



Types:

There are 2 types of classification: one is the BI DBMS and the other is the source DBMS.

The main point is that both of these DBMS are supported on their respective operating system versions only if SAP has released a DBSL for them; if not, the requirements are not met and DB Connect cannot be performed.



In this process we use a DataSource to make the data available to BI and transfer it to the respective InfoProviders defined in the BI system. Further, using the usual data acquisition process, we transfer the data from the DBs to the BI system.



In this way SAP provides options for extracting data from external systems; in addition to extracting data using the standard connection, you can extract data from tables/views in other database management systems (DBMS).







Loading data from an SAP-supported DBMS into BI



Steps are as follows:-

1. Connect a database to the source system -- direct access to the external DB.

2. Using a DataSource, make the structure of the table/view known to BI.

Process Description: Go to RSA1 -> Source Systems -> DB Connect -> Create



Now, create the source system using:

1. Logical System Name -> MSSQL

2. Source System Name -> MS SQL DB Connect

3. Type & Release



Now, under DB Connect, we can see the name of our source system (MS SQL DB Connect).

The logical DB Connect name is MSSQL. In DataSources we need to create an application component area to continue.

Go to RSA1 -> Data Sources -> Create Application Component



After creating an application component area called "ac_test_check", we now have to create a DataSource in the component area. So right-click the application component area -> Create DataSource (as in the figure below).



The DataSource name here is "ds_ac_tech".

The source system here is the defined "MSSQL".

The type of DataSource that we have here is "Master Data Attributes".



The screenshot below describes how to perform extraction or loading using a table/view. As the standard adapter is "Database Table" (by default), we can specify the table/view here.



Now, choose the data source from the DB Object Names.



Now, we have selected the “EMPLOYEES” as the Table/View.



Or we can choose the Table/View -> "REGION"



We have 2 database fields

Region ID

Region Description





Since the DataSource has to be activated before it can be loaded, we "ACTIVATE" it once.



After activation, the data records (4) are displayed: Eastern, Western, Northern & Southern.
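Incidentally, the same records can be read directly over the secondary connection with native SQL, which is a handy sanity check. A minimal sketch, assuming the DBCON entry is also named 'MSSQL' and the external table is the Northwind-style Region table (both assumptions):

DATA: lv_id    TYPE i,
      lv_descr TYPE c LENGTH 50.

* switch native SQL to the secondary connection maintained in DBCON
EXEC SQL.
  SET CONNECTION 'MSSQL'
ENDEXEC.

EXEC SQL.
  OPEN dbcur FOR
    SELECT RegionID, RegionDescription FROM Region
ENDEXEC.

DO.
  EXEC SQL.
    FETCH NEXT dbcur INTO :lv_id, :lv_descr
  ENDEXEC.
  IF sy-subrc <> 0.
    EXIT.  " no more rows
  ENDIF.
  WRITE: / lv_id, lv_descr.
ENDDO.

EXEC SQL.
  CLOSE dbcur
ENDEXEC.

* switch back to the default connection
EXEC SQL.
  SET CONNECTION DEFAULT
ENDEXEC.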



Right-click the DataSource DS_AC_TEST -> Create InfoPackage.

We now create an InfoPackage called "IP_DS_AC_TECH", with MSSQL as the source system.



Once done, we schedule the InfoPackage -> "Start".

Now, we need to create an InfoArea in order to create an InfoProvider (like an InfoCube).

After creating the InfoCube, we check the data in the PSA via "Manage the PSA".

This can also be done using the keyboard shortcut (Ctrl + Shift + F6).

The number of records displayed: 4.

Using the PSA maintenance, we can view the following:

1. Status

2. Data Packet

3. Data Records

4. REGION ID

5. REGION Description



The table/view "CUSTOMERS" is now chosen for extraction. The next tab is "PROPOSAL", which lists all the database fields; here we have to specify the DataSource field names, types & lengths.



Now, we create an InfoPackage -> IP_TEST_CUST



Now, go to RSA1 -> InfoObjects -> InfoObject (Test) -> Create InfoObject Catalog



Now, we can preview the Region ID & Region Description.

We now create 2 InfoObjects and pass the Region ID & Region Description to them:

1. Region Description -> REGION2 (Region)

2. Region ID -> REG_ID (Region ids)

Now, these are the 2 variables created under the InfoObject catalog "test2":

- Region (REGION2)

- Region ids (REG_ID)



We create the characteristic as an InfoProvider for the master data loading: in the "InfoProvider" section -> Insert Characteristic as InfoProvider.



Now, we create a transformation using “Create Transformation” on the Region ids (Attributes)



We now choose the source system after this -> MSSQL -> MS SQL DB CONNECT



After checking the transformation mappings on the Region ID, we now create a DTP on the same Region ID (attribute).

We choose the target (default) as InfoObject -> Region ID -> REG_ID, and the source type as DataSource, with the source system -> MSSQL.



After this step, we proceed with creating an InfoPackage -> IP_DS_TEDDY, which has MSSQL as the source system. Further, we start the scheduling of the InfoPackage. Once the InfoPackage has been triggered, we can go to "Maintain PSA" and monitor the status of the data in the PSA.

Further, we EXECUTE the DTP, and we can monitor the transfer of data from the PSA -> InfoCube.



Results

Thus, the DB Connect process has been successfully demonstrated in SAP BI 7.0

RFC Connection

Use Of Scenario : Used for connecting legacy systems and uploading data between SAP and non-SAP systems in either direction.




Step Wise process:



Step 1 :- On the BW side:

1. Create a logical system: SPRO -> ALE -> Sending & Receiving Systems -> Logical System -> New Entries (e.g. 800 BWCLNT800)

2. Assign the client to the logical system.



Step 2 :- Follow the same procedure on the R/3 side to create a logical system.



Step 3 :- On the BW side, create the RFC connection in SM59.

RFC destination name - should be the logical system name of the R/3 system

Connection type :- 3

1st tab, Technical Settings:

Target host :- IP address of the R/3 server

System number :- 03

2nd tab, Logon/Security:

Language - EN

Client - R/3 client no.

User - R/3 user

Password - R/3 password



Step 4 :- On the R/3 side, the same procedure in SM59.

RFC destination name - should be the logical system name of the BW system

Connection type :- 3

1st tab, Technical Settings:

Target host :- IP address of the BW server

System number :- 03

2nd tab, Logon/Security:

Language - EN

Client - BW client no.

User - BW user

Password - BW password
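With both destinations in place, you can verify them from ABAP using the standard function module RFC_PING; a minimal sketch (replace 'BWCLNT800' with the destination name you created):

DATA lv_msg TYPE c LENGTH 255.

CALL FUNCTION 'RFC_PING'
  DESTINATION 'BWCLNT800'
  EXCEPTIONS
    communication_failure = 1 MESSAGE lv_msg
    system_failure        = 2 MESSAGE lv_msg.

IF sy-subrc = 0.
  WRITE / 'Connection OK.'.
ELSE.
  WRITE: / 'Connection failed:', lv_msg.
ENDIF.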



Step 5 :- SPRO -> select IMG -> BIW -> Links to other systems -> Links between R/3 and BW.

Create the ALE user in the source system -> select BWALEREMOTE -> back.



Step 6 :- In BW



su01

username BWREMOTE

profiles S_BI_WHM_RFC

S_BI_WX_RFC

Save.



Step 7 :- In R/3

su01

username ALEREMOTE

profiles S_BI_WHM_RFC

S_BI_WX_RFC



Save



Step 8 :- In R/3

Create RFC user

su01

user RFCUser create

usertype system

pwd 1234

profiles SAP_ALL

SAP_NEW

S_BI_WX_RFC



Step 9 :- In BW:

RSA1

SE16: in table RSADMINA, enter the default client in the field BWMANDT (profile parameters can be checked in RZ10).



Step 10 :- In BW



su01

user RFCUser create

usertype system

pwd 1234

profiles SAP_ALL

SAP_NEW

S_BI_WHM_RFC



Step 11 :- In BW:

RSA1 -> Source system -> Create

RFC destination:

Target system: host name of R/3

SID:

System no.:

Source system user: ALEREMOTE

pwd:

Background user: BWREMOTE

PWD: