5. Deleting Duplicate Rows for FLAT FILE Sources
Ans: Now suppose the source system is a flat file. Here in the Source Qualifier we will not be able to select the Distinct option, as it is disabled for flat file sources. Hence the next approach is to use a Sorter Transformation and check the Distinct option. When we select the Distinct option, all the ports are selected as keys, in ascending order by default.
Sorter Transformation DISTINCT clause
6. Deleting Duplicate Records Using Informatica Aggregator
Ans: Another way to handle duplicate records in a source batch run is to use an Aggregator Transformation, checking the Group By option on the ports that carry the duplicated data. Here we have the flexibility to select the last or the first of the duplicate records. Apart from that, using a Dynamic Lookup Cache on the target table, associating the input ports with the lookup ports, and checking the Insert Else Update option will also eliminate duplicate source records, so that only unique records are loaded into the target.
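As a sketch of the equivalent set logic in SQL (table and column names are illustrative only), grouping on the key port while aggregating the remaining ports reproduces what the Aggregator does:
SELECT CUST_ID, MAX(CUST_NAME) AS CUST_NAME, MAX(LOAD_DATE) AS LOAD_DATE
FROM SRC_CUSTOMERS
GROUP BY CUST_ID;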
7. Loading Multiple Target Tables Based on Conditions
Suppose we have some serial numbers in a flat file source. We want to load the serial numbers into two target files, one containing the EVEN serial numbers and the other the ODD ones.
Ans. After the Source Qualifier place a Router Transformation. Create two groups named EVEN and ODD, with filter conditions MOD(SERIAL_NO,2)=0 and MOD(SERIAL_NO,2)=1 respectively. Then connect the two groups to the two flat file targets.
Router Transformation Groups Tab
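As a cross-check, the two Router groups reproduce the following SQL splits (SERIAL_SRC is an illustrative table name):
SELECT * FROM SERIAL_SRC WHERE MOD(SERIAL_NO, 2) = 0; -- EVEN target
SELECT * FROM SERIAL_SRC WHERE MOD(SERIAL_NO, 2) = 1; -- ODD target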
Normalizer Related Questions
8. Suppose in our Source Table we have data as given below:
Student Name | Maths | Life Science | Physical Science |
Sam | 100 | 70 | 80 |
John | 75 | 100 | 85 |
Tom | 80 | 100 | 85 |
We want to load our Target Table as:
Student Name | Subject Name | Marks |
Sam | Maths | 100 |
Sam | Life Science | 70 |
Sam | Physical Science | 80 |
John | Maths | 75 |
John | Life Science | 100 |
John | Physical Science | 85 |
Tom | Maths | 80 |
Tom | Life Science | 100 |
Tom | Physical Science | 85 |
Describe your approach.
Ans. Here, to convert the columns into rows, we have to use the Normalizer Transformation, followed by an Expression Transformation to decode the generated column ID into the corresponding subject name.
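For reference, the Normalizer reproduces the set logic of a SQL unpivot, sketched below with illustrative table and column names:
SELECT STUDENT_NAME, 'Maths' AS SUBJECT_NAME, MATHS AS MARKS FROM SRC_MARKS
UNION ALL
SELECT STUDENT_NAME, 'Life Science', LIFE_SCIENCE FROM SRC_MARKS
UNION ALL
SELECT STUDENT_NAME, 'Physical Science', PHYSICAL_SCIENCE FROM SRC_MARKS;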
9. Name the transformations that convert one row into many rows, i.e. increase the output row count relative to the input. Also, what is the name of the transformation that performs the reverse?
Ans. Normalizer and Router are Active transformations that can increase the number of output rows relative to the input rows.
The Aggregator is the Active transformation that performs the reverse action, collapsing many rows into one.
10. Suppose we have a source table and we want to load three target tables based on source rows, such that the first row moves to the first target table, the second row to the second target table, the third row to the third target table, the fourth row again to the first target table, and so on. Describe your approach.
Ans. We can clearly see that we need a Router transformation to route the source rows to the three target tables. Now the question is what the filter conditions will be. First of all we need an Expression Transformation carrying all the source table columns, along with another i/o port, say SEQ_NUM, which gets a sequence number for each source row from the NEXTVAL port of a Sequence Generator (Start Value 0, Increment By 1). The filter conditions for the three Router groups will then be:
MOD(SEQ_NUM,3)=1 connected to the 1st target table
MOD(SEQ_NUM,3)=2 connected to the 2nd target table
MOD(SEQ_NUM,3)=0 connected to the 3rd target table
Router Transformation Groups Tab
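The same distribution can be sketched in SQL, with a row number standing in for the Sequence Generator output (names are illustrative; the mapping itself uses NEXTVAL):
SELECT S.*, MOD(ROW_NUMBER() OVER (ORDER BY S.SOME_KEY), 3) AS BUCKET
FROM SRC_TABLE S;
-- BUCKET = 1 -> 1st target, BUCKET = 2 -> 2nd target, BUCKET = 0 -> 3rd target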
Loading Multiple Flat Files using one mapping
11. Suppose we have ten source flat files of the same structure. How can we load all the files into the target database in a single batch run using a single mapping?
Ans. After we create the mapping to load data into the target database from a flat file, we move on to the session properties of the Source Qualifier. To load a set of source files we need to create a file, say final.txt, containing the source flat file names (ten files in our case), and set the Source filetype option to Indirect. Next, we point to this file final.txt with its fully qualified path through the Source file directory and Source filename properties.
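The indirect file final.txt simply lists one source file name per line, for example (file names are illustrative):
emp_file_01.txt
emp_file_02.txt
emp_file_03.txt
and so on, one name per line up to the tenth file.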
12. How can we implement an aggregation operation without using an Aggregator Transformation in Informatica?
Ans. We will use a very basic property of the Expression Transformation: through variable ports, we can access the previous row's data while processing the current row. A simple Sorter, Expression and Filter transformation is all we need to achieve aggregation at the Informatica level.
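A minimal sketch of the variable-port idea (port names are illustrative, and the data is assumed sorted on STUDENT_NAME): variable ports evaluate top to bottom, so V_PREV_NAME still holds the previous row's value when V_MARKS_TOTAL is computed.
V_MARKS_TOTAL (variable) = IIF(STUDENT_NAME = V_PREV_NAME, V_MARKS_TOTAL + MARKS, MARKS)
V_PREV_NAME (variable) = STUDENT_NAME
O_MARKS_TOTAL (output) = V_MARKS_TOTAL
A downstream Filter then passes only the row that carries the complete group total.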
13. Suppose in our Source Table we have data as given below:
Student Name | Subject Name | Marks |
Sam | Maths | 100 |
Tom | Maths | 80 |
Sam | Physical Science | 80 |
John | Maths | 75 |
Sam | Life Science | 70 |
John | Life Science | 100 |
John | Physical Science | 85 |
Tom | Life Science | 100 |
Tom | Physical Science | 85 |
We want to load our Target Table as:
Student Name | Maths | Life Science | Physical Science |
Sam | 100 | 70 | 80 |
John | 75 | 100 | 85 |
Tom | 80 | 100 | 85 |
Describe your approach.
Ans. Here our scenario is to convert many rows into one row, and the transformation that helps us achieve this is the Aggregator. Our mapping will look like this:
Mapping using sorter and Aggregator
We will sort the source data based on STUDENT_NAME ascending followed by SUBJECT ascending.
Sorter Transformation
Now, with STUDENT_NAME as the GROUP BY port, the following output subject columns are populated as:
MATHS: MAX(MARKS, SUBJECT='Maths')
LIFE_SC: MAX(MARKS, SUBJECT='Life Science')
PHY_SC: MAX(MARKS, SUBJECT='Physical Science')
Aggregator Transformation
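The equivalent pivoting logic, sketched in SQL for cross-checking (table name is illustrative):
SELECT STUDENT_NAME,
MAX(CASE WHEN SUBJECT = 'Maths' THEN MARKS END) AS MATHS,
MAX(CASE WHEN SUBJECT = 'Life Science' THEN MARKS END) AS LIFE_SC,
MAX(CASE WHEN SUBJECT = 'Physical Science' THEN MARKS END) AS PHY_SC
FROM SRC_MARKS
GROUP BY STUDENT_NAME;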
Revisiting Source Qualifier Transformation
14. What is a Source Qualifier? What tasks can we perform using an SQ, and why is it an ACTIVE transformation?
Ans. A Source Qualifier is an Active and Connected Informatica transformation that reads the rows from a relational database or flat file source.
We can configure the SQ to join [Both INNER as well as OUTER JOIN] data originating from the same source database.
We can use a source filter to reduce the number of rows the Integration Service queries.
We can specify a number for sorted ports and the Integration Service adds an ORDER BY clause to the default SQL query.
We can choose Select Distinct option for relational databases and the Integration Service adds a SELECT DISTINCT clause to the default SQL query.
We can also write a Custom/User-Defined SQL query which will override the default query in the SQ by changing the default settings of the transformation properties.
Also, we have the option to write Pre- and Post-SQL statements to be executed before and after the SQ query in the source database.
Since the transformation provides the Select Distinct property, the Integration Service can add a SELECT DISTINCT clause to the default SQL query, which in turn affects the number of rows returned by the database to the Integration Service; hence it is an Active transformation.
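For example, with Select Distinct checked, a Source Filter of CUSTOMERS.CUSTOMER_ID > 1000 and one sorted port, the generated default query would take roughly this shape (a sketch with illustrative table and column names):
SELECT DISTINCT CUSTOMERS.CUSTOMER_ID, CUSTOMERS.CUSTOMER_NAME
FROM CUSTOMERS
WHERE CUSTOMERS.CUSTOMER_ID > 1000
ORDER BY CUSTOMERS.CUSTOMER_ID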
15. What happens to a mapping if we alter the datatypes between Source and its corresponding Source Qualifier?
Ans.The Source Qualifier transformation displays the transformation datatypes. The transformation datatypes determine how the source database binds data when the Integration Service reads it.
Now if we alter the datatypes in the Source Qualifier transformation, or if the datatypes in the source definition and Source Qualifier transformation do not match, the Designer marks the mapping as invalid when we save it.
16. Suppose we have used the Select Distinct and the Number Of Sorted Ports properties in the SQ and then we add a Custom SQL Query. Explain what will happen.
Ans. Whenever we add a Custom SQL or SQL override query, it overrides the User-Defined Join, Source Filter, Number of Sorted Ports, and Select Distinct settings in the Source Qualifier transformation. Hence only the user-defined SQL query will be fired against the database, and all the other options will be ignored.
17. Describe the situations where we would use the Source Filter, Select Distinct and Number Of Sorted Ports properties of the Source Qualifier transformation.
Ans. The Source Filter option is used primarily to reduce the number of rows the Integration Service queries, so as to improve performance.
The Select Distinct option is used when we want the Integration Service to select unique values from a source, filtering out unnecessary data early in the data flow, which can improve performance.
The Number Of Sorted Ports option is used when we want the source data to arrive sorted, so that it can feed downstream transformations such as Aggregator or Joiner, which improve performance when configured for sorted input.
18. What will happen if the SELECT-list columns in the custom override SQL query and the output ports order in the SQ transformation do not match?
Ans. A mismatch in, or a change to, the order of the SELECT-list columns relative to the connected transformation output ports may result in session failure.
19. What happens if, in the Source Filter property of the SQ transformation, we include the keyword WHERE, say WHERE CUSTOMERS.CUSTOMER_ID > 1000?
Ans. We use the source filter to reduce the number of source records. If we include the string WHERE in the source filter, the Integration Service fails the session.
20. Describe the scenarios where we go for the Joiner transformation instead of the Source Qualifier transformation.
Ans. We will use the Joiner transformation when joining source data from heterogeneous sources, as well as when joining flat files.
Use the Joiner transformation when we need to join the following types of sources:
Join data from different Relational Databases.
Join data from different Flat Files.
Join relational sources and flat files.
21. What is the maximum number we can use in Number Of Sorted Ports for a Sybase source system?
Ans.
Sybase supports a maximum of 16 columns in an ORDER BY clause. So if the source is Sybase, do not sort more than 16 columns.
22. Suppose we have two Source Qualifier transformations SQ1 and SQ2 connected to Target tables TGT1 and TGT2 respectively. How do you ensure TGT2 is loaded after TGT1?
Ans.
If we have multiple Source Qualifier transformations connected to multiple targets, we can designate the order in which the Integration Service loads data into the targets.
In the Mapping Designer, we need to configure the Target Load Plan based on the Source Qualifier transformations in a mapping to specify the required loading order.
Target Load Plan Ordering
23. Suppose we have a Source Qualifier transformation that populates two target tables. How do you ensure TGT2 is loaded after TGT1?
Ans.
In the Workflow Manager, we can configure Constraint-Based Load Ordering for a session. The Integration Service orders the target load on a row-by-row basis: for every row generated by an active source, the Integration Service loads the corresponding transformed row first into the primary key table, then into the foreign key table.
Hence, if we have one Source Qualifier transformation that provides data for multiple target tables having primary and foreign key relationships, we go for constraint-based load ordering.
Revisiting Filter Transformation
24. What is a Filter Transformation and why is it an Active one?
Ans.
A Filter transformation is an Active and Connected transformation that can filter rows in a mapping.
Only the rows that meet the Filter Condition pass through the Filter transformation to the next transformation in the pipeline. TRUE and FALSE are the implicit return values from any filter condition we set. If the filter condition evaluates to NULL, the row is assumed to be FALSE.
The numeric equivalent of FALSE is zero (0) and any non-zero value is the equivalent of TRUE.
As an ACTIVE transformation, the Filter transformation may change the number of rows passed through it. A filter condition returns TRUE or FALSE for each row that passes through the transformation, depending on whether a row meets the specified condition. Only rows that return TRUE pass through this transformation. Discarded rows do not appear in the session log or reject files.
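For example, a condition such as SAL > 1000 evaluates to NULL (treated as FALSE) for rows where SAL is NULL; to pass such rows explicitly, one might write a sketch like:
IIF(ISNULL(SAL), TRUE, SAL > 1000)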
Revisiting Joiner Transformation
25. What is a Joiner Transformation and why is it an Active one?
Ans.
A Joiner is an Active and Connected transformation used to join source data from the same source system or from two related heterogeneous sources residing in different locations or file systems.
The Joiner transformation joins sources with at least one matching column. The Joiner transformation uses a condition that matches one or more pairs of columns between the two sources.
The two input pipelines include a master pipeline and a detail pipeline or a master and a detail branch. The master pipeline ends at the Joiner transformation, while the detail pipeline continues to the target.
In the Joiner transformation, we must configure the transformation properties namely Join Condition, Join Type and Sorted Input option to improve Integration Service performance.
The join condition contains ports from both input sources that must match for the Integration Service to join two rows. Depending on the type of join selected, the Integration Service either adds the row to the result set or discards the row.
The Joiner transformation produces result sets based on the join type, condition, and input data sources. Hence it is an Active transformation.
26. State the limitations where we cannot use Joiner in the mapping pipeline.
Ans.
The Joiner transformation accepts input from most transformations. However, the following limitations apply:
The Joiner transformation cannot be used when either input pipeline contains an Update Strategy transformation.
The Joiner transformation cannot be used if a Sequence Generator transformation is connected directly before the Joiner transformation.
27. Out of the two input pipelines of a joiner, which one will you set as the master pipeline?
Ans.
During a session run, the Integration Service compares each row of the master source against the detail source.
The master and detail sources need to be configured for optimal performance.
To improve performance for an Unsorted Joiner transformation, use the source with fewer rows as the master source. The fewer unique rows in the master, the fewer iterations of the join comparison occur, which speeds the join process.
When the Integration Service processes an unsorted Joiner transformation, it reads all master rows before it reads the detail rows. The Integration Service blocks the detail source while it caches rows from the master source. Once the Integration Service reads and caches all master rows, it unblocks the detail source and reads the detail rows.
To improve performance for a Sorted Joiner transformation, use the source with fewer duplicate key values as the master source.
When the Integration Service processes a sorted Joiner transformation, it blocks data based on the mapping configuration and stores fewer rows in the cache, increasing performance. Blocking logic is possible if master and detail input to the Joiner transformation originate from different sources. Otherwise, it does not use blocking logic; instead, it stores more rows in the cache.
28. What are the different types of Joins available in Joiner Transformation?
Ans.
In SQL, a join is a relational operator that combines data from multiple tables into a single result set. The Joiner transformation is similar to an SQL join except that data can originate from different types of sources.
The Joiner transformation supports the following types of joins:
Normal
Master Outer
Detail Outer
Full Outer
Join Type property of Joiner Transformation
Note: A normal or master outer join performs faster than a full outer or detail outer join.
29. Define the various Join Types of Joiner Transformation.
Ans.
In a normal join, the Integration Service discards all rows of data from the master and detail source that do not match, based on the join condition.
A master outer join keeps all rows of data from the detail source and the matching rows from the master source. It discards the unmatched rows from the master source.
A detail outer join keeps all rows of data from the master source and the matching rows from the detail source. It discards the unmatched rows from the detail source.
A full outer join keeps all rows of data from both the master and detail sources.
30. Describe the impact of the number of join conditions and the join order in a Joiner Transformation.
Ans.
We can define one or more conditions based on equality between the specified master and detail sources.
Both ports in a condition must have the same datatype. If we need to use two ports in the join condition with non-matching datatypes, we must convert the datatypes so that they match. The Designer validates datatypes in a join condition.
Additional ports in the join condition increase the time necessary to join two sources.
The order of the ports in the join condition can impact the performance of the Joiner transformation. If we use multiple ports in the join condition, the Integration Service compares the ports in the order we specified.
NOTE: Only the equality operator is available in the Joiner join condition.
31. How does the Joiner transformation treat NULL value matching?
Ans.
The Joiner transformation does not match null values.
For example, if both EMP_ID1 and EMP_ID2 contain a row with a null value, the Integration Service does not consider them a match and does not join the two rows.
To join rows with null values, replace null input with default values in the Ports tab of the joiner, and then join on the default values.
Note: If a result set includes fields that do not contain data in either of the sources, the Joiner transformation populates the empty fields with null values. If we know that a field will return a NULL and we do not want to insert NULLs in the target, set a default value on the Ports tab for the corresponding port.
32. Suppose we configure Sorter transformations in the master and detail pipelines with the following sorted ports in order: ITEM_NO, ITEM_NAME, PRICE.
When we configure the join condition, what are the guidelines we need to follow to maintain the sort order?
Ans.
If we have sorted both the master and detail pipelines in the order of the ports, say ITEM_NO, ITEM_NAME and PRICE, we must ensure that:
Use ITEM_NO in the First Join Condition.
If we add a Second Join Condition, we must use ITEM_NAME.
If we want to use PRICE as a Join Condition apart from ITEM_NO, we must also use ITEM_NAME in the Second Join Condition.
If we skip ITEM_NAME and join on ITEM_NO and PRICE, we will lose the input sort order and the Integration Service fails the session.
33. What transformations cannot be placed between the sort origin and the Joiner transformation, so that we do not lose the input sort order?
Ans.
The best option is to place the Joiner transformation directly after the sort origin to maintain sorted data.
However, do not place any of the following transformations between the sort origin and the Joiner transformation:
Custom
Unsorted Aggregator
Normalizer
Rank
Union transformation
XML Parser transformation
XML Generator transformation
Mapplet [if it contains any one of the above mentioned transformations]
34. Suppose we have the EMP table as our source. In the target we want to view those employees whose salary is greater than or equal to the average salary for their departments.
Describe your mapping approach.
Ans.
To start with the mapping we need the following transformations:
After the Source Qualifier of the EMP table, place a Sorter Transformation and sort based on the DEPTNO port.
Sorter Ports Tab
Next we need a sorted Aggregator Transformation. Here we will find out the AVERAGE SALARY for each DEPTNO (GROUP BY DEPTNO).
When we perform this aggregation, we lose the data for individual employees. To maintain employee data, we must pass one branch of the pipeline to the Aggregator transformation and pass the same sorted source data along another branch to the Joiner transformation. When we join both branches of the pipeline, we join the aggregated data with the original data.
Aggregator Ports Tab
Aggregator Properties Tab
So next we need a sorted Joiner Transformation to join the sorted aggregated data with the original data, based on DEPTNO.
Here we take the aggregated pipeline as the Master and the original data flow as the Detail pipeline.
Joiner Condition Tab
Joiner Properties Tab
After that we need a Filter Transformation to filter out the employees having salary less than average salary for their department.
Filter Condition: SAL>=AVG_SAL
Filter Properties Tab
Lastly we have the Target table instance.
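The whole mapping can be cross-checked against the equivalent SQL, sketched here with the standard EMP columns:
SELECT E.*
FROM EMP E
JOIN (SELECT DEPTNO, AVG(SAL) AS AVG_SAL FROM EMP GROUP BY DEPTNO) D
ON E.DEPTNO = D.DEPTNO
WHERE E.SAL >= D.AVG_SAL;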
Revisiting Sequence Generator Transformation
35. What is a Sequence Generator Transformation?
Ans.
A Sequence Generator transformation is a Passive and Connected transformation that generates numeric values.
It is used to create unique primary key values, replace missing primary keys, or cycle through a sequential range of numbers.
This transformation by default contains ONLY two OUTPUT ports, namely CURRVAL and NEXTVAL. We cannot edit or delete these ports, nor can we add ports to this unique transformation.
We can create approximately two billion unique numeric values with the widest range from 1 to 2147483647.
36. Define the Properties available in Sequence Generator transformation in brief.
Ans.
Sequence Generator Properties | Description |
Start Value | Start value of the generated sequence that we want the Integration Service to use if we use the Cycle option. If we select Cycle, the Integration Service cycles back to this value when it reaches the end value. Default is 0. |
Increment By | Difference between two consecutive values from the NEXTVAL port. Default is 1. |
End Value | Maximum value generated by SeqGen. After reaching this value the session will fail if the sequence generator is not configured to cycle. Default is 2147483647. |
Current Value | Current value of the sequence. Enter the value we want the Integration Service to use as the first value in the sequence. Default is 1. |
Cycle | If selected, when the Integration Service reaches the configured end value for the sequence, it wraps around and starts the cycle again, beginning with the configured Start Value. |
Number of Cached Values | Number of sequential values the Integration Service caches at a time. Default value for a standard Sequence Generator is 0. Default value for a reusable Sequence Generator is 1,000. |
Reset | Restarts the sequence at the current value each time a session runs. This option is disabled for reusable Sequence Generator transformations. |
37. Suppose we have a source table populating two target tables. We connect the NEXTVAL port of the Sequence Generator to the surrogate keys of both the target tables.
38. Will the surrogate keys in both the target tables be the same? If not, how can we flow the same sequence values into both of them?
Ans.
When we connect the NEXTVAL output port of the Sequence Generator directly to the surrogate key columns of the target tables, the sequence numbers will not be the same.
A block of sequence numbers is sent to one target table's surrogate key column; the second target receives its block of sequence numbers from the Sequence Generator transformation only after the first target table has received its block.
Suppose we have 5 rows coming from the source; the targets will then have the sequence values TGT1 (1,2,3,4,5) and TGT2 (6,7,8,9,10) [assuming Start Value 0, Current Value 1 and Increment By 1].
Now suppose the requirement is that we need the same surrogate keys in both the targets.
Then the easiest way to handle the situation is to put an Expression Transformation between the Sequence Generator and the target tables. The SeqGen passes unique values to the Expression transformation, and the rows are then routed from the Expression transformation to both targets.
Sequence Generator
39. Suppose we have 100 records coming from the source. Now for a target column population we used a Sequence Generator.
Suppose the Current Value is 0 and End Value of Sequence generator is set to 80. What will happen?
Ans.
End Value is the maximum value the Sequence Generator will generate. After it reaches the End value the session fails with the following error message:
TT_11009 Sequence Generator Transformation: Overflow error.
The session failure can be avoided if the Sequence Generator is configured to Cycle through the sequence, i.e. whenever the Integration Service reaches the configured End Value for the sequence, it wraps around and starts the cycle again, beginning with the configured Start Value.
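As a worked example under these settings (Current Value 0, End Value 80, Cycle enabled, and assuming Start Value 0 and Increment By 1): the first 81 rows receive the values 0 through 80, after which the sequence wraps to the Start Value and the remaining 19 rows receive 0 through 18.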
40. What changes do we observe when we promote a non-reusable Sequence Generator to a reusable one?
And what happens if we set the Number of Cached Values to 0 for a reusable transformation?
Ans.
When we convert a non-reusable Sequence Generator to a reusable one, we observe that the Number of Cached Values is set to 1000 by default, and the Reset property is disabled.
When we try to set the Number of Cached Values property of a Reusable Sequence Generator to 0 in the Transformation Developer we encounter the following error message:
The number of cached values must be greater than zero for reusable sequence transformation.