1. What are the extractor types?
2. What are the steps involved in LO Extraction?
4. What is the difference between ODS and Info Cube and Multi Provider?
5. What are Start routines, Transfer routines and Update routines?
6. What is the difference between start routine and update routine, when, how and why are they called?
7. What is the table that is used in start routines?
8. Explain how you used Start routines in your project?
9. What are Return Tables?
10. How do start routine and return table synchronize with each other?
11. What is the difference between V1, V2 and V3 updates?
12. What is compression?
13. What is Rollup?
14. What is table partitioning and what are the benefits of partitioning in an Info Cube?
15. How many extra partitions are created and why?
16. What are the options available in transfer rule?
18. What are Conversion Routines for units and currencies in the update rule?
19. Can an Info Object be an Info Provider, how and why?
20. What is Open Hub Service?
1. What are the extractor types?
Application Specific
BW Content FI, HR, CO, SAP CRM, LO Cockpit
Customer-Generated Extractors
LIS, FI-SL, CO-PA
Cross Application (Generic Extractors)
DB View, InfoSet, Function Module

2. What are the steps involved in LO Extraction?
The steps are:
- RSA5 Select the Data Sources
- LBWE Maintain DataSources and Activate Extract Structures
- LBWG Delete Setup Tables
- OLI*BW Fill the Setup Tables
- RSA3 Check extraction and the data in Setup tables
- LBWQ Check the extraction queue
- LBWF Log for LO Extract Structures
- RSA7 BW Delta Queue Monitor
- LBW0 Connecting LIS Info Structures to BW

4. What is the difference between ODS and Info Cube and Multi Provider?
ODS: Provides granular data, allows overwrite and data is in transparent tables, ideal for drill down and RRI.
CUBE: Follows the star schema, we can only append data, ideal for primary reporting.
Multi Provider: Does not have physical data. It allows access to data from different Info Providers (Cube, ODS, Info Object). It is also preferred for reporting.
5. What are Start routines, Transfer routines and Update routines?
Start Routines: The start routine is run for each Data Package after the data has been written to the PSA and before the transfer rules have been executed. It allows complex computations for a key figure or a characteristic. It has no return value. Its purpose is to execute preliminary calculations and to store them in global Data Structures, which can then be accessed in the other routines. The entire Data Package in the transfer structure format is used as a parameter for the routine.
Transfer / Update Routines: They are defined at the Info Object level, similar to the Start Routine, and are independent of the Data Source. We can use them to define Global Data and Global Checks.
6. What is the difference between start routine and update routine, when, how and why are they called?
The start routine can be used to access the entire Info Package, while update routines are executed while updating the Data Targets.
7. What is the table that is used in start routines?
The table structure will always be the structure of an ODS or Info Cube. For example, if the target is an ODS, the structure of its active table is used.
8. Explain how you used Start routines in your project?
Start routines are used for mass processing of records: all the records of the Data Package are available in the start routine, so they can be processed together. In one scenario we wanted to apply a size percentage to forecast data. For example, if material M1 is forecast at 100 units in May, then after applying the size percentages (Small 20%, Medium 40%, Large 20%, Extra Large 20%) we wanted four records in place of the single record coming in the Info Package. This was achieved in the start routine.
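The one-record-in, four-records-out behaviour of such a start routine can be sketched outside ABAP. This is a hypothetical Python illustration, not the actual routine; the field names are invented and the percentages are taken from the example above:

```python
# Hypothetical sketch of the start-routine split described above
# (the real routine would be ABAP; field names are assumptions).

SIZE_SPLIT = {"S": 0.20, "M": 0.40, "L": 0.20, "XL": 0.20}

def start_routine(data_package):
    """Expand every incoming forecast record into one record per size."""
    result = []
    for rec in data_package:
        for size, pct in SIZE_SPLIT.items():
            new_rec = dict(rec)          # copy the original record
            new_rec["size"] = size
            new_rec["forecast_qty"] = rec["forecast_qty"] * pct
            result.append(new_rec)
    return result

# One record for material M1 in May becomes four size-level records.
expanded = start_routine([{"material": "M1", "month": "2024-05",
                           "forecast_qty": 100.0}])
```

The key point the sketch makes is that the routine sees the whole data package at once, so it can change the record count, something a per-record field routine cannot do.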
9. What are Return Tables?
When we want to return multiple records instead of a single value, we use the return table in the Update Routine. Example: if we have the total telephone expense for a Cost Center, using a return table we can get the expense per employee.
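The telephone-expense example can be sketched as follows. This is a hypothetical Python stand-in for a return-table update routine (the even split, the cost-center lookup, and all names are assumptions for illustration):

```python
# Hypothetical sketch of a return-table style update routine: one incoming
# record yields several outgoing records (expense per employee).

EMPLOYEES = {"CC100": ["E001", "E002", "E003"]}  # assumed cost center -> employees

def update_routine(record):
    """Return one record per employee instead of a single value."""
    emps = EMPLOYEES[record["cost_center"]]
    share = record["phone_expense"] / len(emps)
    return [{"cost_center": record["cost_center"],
             "employee": emp,
             "phone_expense": share} for emp in emps]

rows = update_routine({"cost_center": "CC100", "phone_expense": 300.0})
```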
10. How do start routine and return table synchronize with each other?
The return table is used to return values following the execution of the start routine.

11. What is the difference between V1, V2 and V3 updates?
V1 Update: It is a Synchronous update. Here the Statistics update is carried out at the same time as the document update (in the application tables).
V2 Update: It is an Asynchronous update. Statistics update and the Document update take place as different tasks.
V1 & V2 don't need scheduling.
Serialized V3 Update: The V3 collective update must be scheduled as a job (via LBWE). Document data is collected in the order it was created and transferred into BW as a batch job. The transfer sequence may not match the order in which the data was created in all scenarios. The V3 update only processes update data that was successfully processed by the V2 update.
12. What is compression?
It is a process used to delete the Request IDs from an Info Cube, which saves space.

13. What is Rollup?
This is used to load new Data Packages (requests) into the Info Cube aggregates. If we have not performed a rollup then the new Info Cube data will not be available while reporting on the aggregate.
14. What is table partitioning and what are the benefits of partitioning in an Info Cube?
It is the method of dividing a table to enable quick access. SAP uses fact-table partitioning to improve performance. We can partition only on 0CALMONTH or 0FISCPER. Table partitioning helps reports run faster because data is read only from the relevant partitions, and table maintenance becomes easier. Oracle, Informix and IBM DB2/390 support table partitioning, while SAP DB, Microsoft SQL Server and IBM DB2/400 do not.
15. How many extra partitions are created and why?
Two extra partitions are created: one for dates before the begin date and one for dates after the end date.

16. What are the options available in transfer rule?
- InfoObject
- Constant
- Routine
- Formula
We should define as many dimensions as possible, taking care that no single dimension exceeds 20% of the fact table size.
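The 20% rule of thumb above can be expressed as a quick check. This is an illustrative Python sketch, not a BW tool; the dimension names and row counts are invented:

```python
# Flag any dimension whose cardinality exceeds 20% of the fact table's rows.

def oversized_dimensions(fact_rows, dim_rows, threshold=0.20):
    """Return names of dimensions exceeding threshold * fact table rows."""
    return [name for name, rows in dim_rows.items()
            if rows > threshold * fact_rows]

dims = {"customer": 900_000, "time": 120, "plant": 40}
flagged = oversized_dimensions(1_000_000, dims)
# "customer" holds ~90% of the fact table's cardinality, so it would be a
# candidate for special handling such as a line-item dimension.
```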
18. What are Conversion Routines for units and currencies in the update rule?
Using this option we can write ABAP code for unit / currency conversion. If we enable this flag, the unit of the Key Figure appears in the ABAP code as an additional parameter. For example, we can convert units in Pounds to Kilos.
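The shape of such a routine, with the unit arriving as an extra parameter, can be sketched like this. It is a hypothetical Python illustration, not the ABAP routine; the factor table and function name are assumptions:

```python
# Hypothetical sketch of a unit-conversion routine: the key figure's unit
# arrives as an additional parameter and the routine returns the converted
# value plus the target unit.

FACTORS = {("LB", "KG"): 0.45359237}  # pounds -> kilograms

def convert_unit(value, unit, target="KG"):
    """Convert a key-figure value from `unit` to `target`."""
    if unit == target:
        return value, unit
    return value * FACTORS[(unit, target)], target

qty, unit = convert_unit(100.0, "LB")  # pounds in, kilograms out
```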
19. Can an Info Object be an Info Provider, how and why?
Yes, when we want to report on Characteristics or Master Data. We have to right-click on the Info Area and select "Insert characteristic as data target". For example, we can make 0CUSTOMER an Info Provider and report on it.
20. What is Open Hub Service?
The Open Hub Service enables us to distribute data from an SAP BW system into external Data Marts, analytical applications, and other applications. We can ensure controlled distribution across several systems. The central object for exporting data is the Info Spoke, in which we define the source and the target object for the data. With Open Hub, BW becomes a hub of an enterprise data warehouse.