The current use case is to collect data from CRM service providers such as Salesforce and
Microsoft Dynamics 365, transform it into meaningful data by logically joining the
different entities received, and then persist it into a Data Warehouse.
Business Intelligence dashboards will be developed and integrated with the Data
Warehouse to perform analysis using analytical queries (joining, grouping, sorting, etc.).
Along with this, an Advanced Analytics Platform will be developed, on which the Data Science
team will first perform basic analysis and then build and train Machine Learning
models on top of it for Predictive Analytics and Recommendation Engines.
Create the following modules:
1. Data Sources Management
Using this module, the user will be able to configure the Data Sources from which data
needs to be collected.
Once the user configures a Data Source, a Data Ingestion Job will be submitted to Apache
Gobblin, and Gobblin will start collecting data from that source.
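As a sketch, a configured Data Source could be rendered into a Gobblin job properties file before submission. The property keys below follow the general shape of Gobblin's job-config (`.pull`) format, but the source class and endpoint key are illustrative placeholders, not a verified Gobblin configuration:

```python
def build_gobblin_job_config(source_name: str, endpoint: str, entities: list) -> str:
    """Render an illustrative Gobblin job properties file for a CRM source.

    Key names mimic Gobblin's job-config conventions; the source class
    and endpoint property here are placeholders, not tested values.
    """
    props = {
        "job.name": f"ingest_{source_name}",
        "job.group": "crm_ingestion",
        "source.class": "org.apache.gobblin.example.CrmRestSource",  # placeholder class
        "source.entities": ",".join(entities),
        "source.endpoint": endpoint,  # placeholder property name
        "writer.output.format": "PARQUET",
        "data.publisher.final.dir": f"s3a://datalake/raw/{source_name}",  # example path
    }
    return "\n".join(f"{key}={value}" for key, value in props.items())

config = build_gobblin_job_config(
    "salesforce", "https://example.my.salesforce.com", ["Account", "Opportunity"]
)
print(config)
```

The rendered text would then be handed to Gobblin as the job definition for that source.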
2. Real-Time Analytics
Once ingested data is available on Kafka topics, Spark Structured Streaming will be
used to process and transform it in a distributed way and write the results to MariaDB
ColumnStore.
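The per-micro-batch transformation could look like the following pure-Python sketch; the real job would express the same logic with Spark Structured Streaming's DataFrame API, and the field names are made-up examples:

```python
import json

def transform_batch(raw_messages):
    """Parse a micro-batch of JSON events from Kafka and shape them into
    flat rows ready for insertion into MariaDB ColumnStore.

    In the actual pipeline this logic would run inside Spark Structured
    Streaming; the event fields below are illustrative only.
    """
    rows = []
    for msg in raw_messages:
        event = json.loads(msg)
        rows.append({
            "entity": event.get("entity"),
            "id": event.get("id"),
            "amount": float(event.get("amount", 0.0)),  # normalize numeric field
        })
    return rows

batch = ['{"entity": "Opportunity", "id": "006A", "amount": "1200.50"}']
print(transform_batch(batch))
```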
3. Data Lake
A Data Lake will be required because a lot of data will be ingested, while the Data
Warehouse will hold only the cleaned and transformed version of that data.
All data collected from the CRM Data Sources will arrive in JSON format, so Apache
Gobblin will convert the JSON data into Parquet format before loading it into the Data
Lake, for better I/O and low-latency reads.
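The read benefit comes from Parquet's columnar layout: an analytical query that touches one field reads only that column instead of scanning whole records. A minimal sketch of the row-to-columnar pivot (conceptually what happens when row-oriented JSON is rewritten as Parquet, assuming a uniform schema across records):

```python
def to_columnar(records):
    """Pivot row-oriented records into column-oriented lists, the layout
    Parquet stores on disk so a query can read one column in isolation.
    Assumes every record carries the same set of keys."""
    columns = {}
    for record in records:
        for key, value in record.items():
            columns.setdefault(key, []).append(value)
    return columns

rows = [{"id": "001", "stage": "Won"}, {"id": "002", "stage": "Lost"}]
print(to_columnar(rows))
```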
4. Data Processing
For Data Processing, Apache Spark, a distributed data processing engine, will be used.
Data Processing Jobs will be scheduled using Apache Airflow; each job will read the
latest data from the Data Lake, apply the required transformations, and then persist the
data to the Data Warehouse.
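The "logically joining different entities" step could be, for example, attaching each Opportunity to its parent Account. A pure-Python hash-join sketch of what the Spark batch job would do with a DataFrame join (entity and field names are illustrative):

```python
def join_opportunities_to_accounts(accounts, opportunities):
    """Hash-join opportunities to their parent account, mirroring the
    DataFrame join the Spark batch job would perform.
    Entity shapes and field names here are examples, not the real schema."""
    accounts_by_id = {acc["account_id"]: acc for acc in accounts}
    joined = []
    for opp in opportunities:
        acc = accounts_by_id.get(opp["account_id"])
        if acc:  # inner-join semantics: drop opportunities with no account
            joined.append({**opp, "account_name": acc["name"]})
    return joined

accounts = [{"account_id": "A1", "name": "Acme Corp"}]
opps = [{"id": "006A", "account_id": "A1", "amount": 1200.5}]
print(join_opportunities_to_accounts(accounts, opps))
```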
5. Data Warehousing
For the Data Warehouse, Hive on MinIO will be used, and the file format will be Parquet.
Hive will act as the metastore: schemas for the various tables will be defined in it, and
each table will point to its corresponding MinIO storage location.
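A table definition pointing at its MinIO (S3-compatible) location might look like the Hive DDL rendered below; the bucket path, table name, and column list are placeholders:

```python
def external_table_ddl(table: str, columns: dict, location: str) -> str:
    """Render an illustrative Hive CREATE EXTERNAL TABLE statement for a
    Parquet table whose files live in MinIO, addressed via s3a://."""
    cols = ",\n  ".join(f"{name} {dtype}" for name, dtype in columns.items())
    return (
        f"CREATE EXTERNAL TABLE IF NOT EXISTS {table} (\n  {cols}\n)\n"
        f"STORED AS PARQUET\n"
        f"LOCATION '{location}'"
    )

ddl = external_table_ddl(
    "dwh.opportunity",
    {"id": "STRING", "account_id": "STRING", "amount": "DOUBLE"},
    "s3a://warehouse/opportunity",  # placeholder bucket/path
)
print(ddl)
```

Because the table is EXTERNAL, dropping it in Hive leaves the Parquet files in MinIO intact.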
Both querying engines, i.e. Spark SQL on Hive and MariaDB ColumnStore, support JDBC.
So any BI tool can connect to them using standard JDBC connections, execute
analytical queries, and create various charts/graphs.
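A BI tool would reach the two engines over JDBC URLs like the ones built below. Port 10000 is the usual HiveServer2/Spark Thrift Server default and 3306 the MariaDB default; the hostnames are placeholders for illustration only:

```python
def jdbc_urls(hive_host: str, mariadb_host: str, database: str) -> dict:
    """Build illustrative JDBC URLs for the two query engines.
    10000 = common HiveServer2/Spark Thrift Server port, 3306 = MariaDB
    default; hostnames are placeholder values."""
    return {
        "spark_sql_on_hive": f"jdbc:hive2://{hive_host}:10000/{database}",
        "mariadb_columnstore": f"jdbc:mariadb://{mariadb_host}:3306/{database}",
    }

print(jdbc_urls("hive.example.internal", "mcs.example.internal", "dwh"))
```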