In which contexts is low-code relevant?
Low-code (and even no-code) platforms promise faster time-to-market for deploying new applications.
This promise is especially valuable because a company's IT ecosystem must cover an ever-growing number of domains as business functions are digitized.
However, this model sometimes irritates IT experts who see it as a false promise.
Digitization is increasingly implemented through cross-functional processes that require building a data pathway, better known as a ‘data pipeline.’
Let’s distinguish between two scenarios…
1. Deploying high-performance business services
Priorities should be:
- Provide a wealth of natively accessible functions
- Enable business users to govern their domain: define rules, manage permissions, create workflows, etc.
- Allow these users to transform data: import files, correct values, export in a corporate format, etc. (see the sketch at the end of this section)
- Offer ergonomic, user-friendly screens that can be customized
- Include an easy-to-customize reporting module for analysis and dashboards
- Involve as many users as possible in a common solution, with simplified training for newcomers
✅ As you can see, low-code is a perfect fit for these goals: it allows for rich, shared, and scalable solutions.
❌ Caution: The native functional richness of many modules often requires an initial implementation by solution experts with genuine business insight.
Handing over to autonomous in-house teams usually comes in a second phase, after the initial project-mode integration.
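To make the ‘transform data’ point concrete, here is a minimal sketch of the kind of step a low-code platform typically wraps in point-and-click configuration: import a file, correct values, export in a corporate layout. The file paths, column names, and ‘corporate format’ (semicolon-separated) are all hypothetical.

```python
import csv

def import_correct_export(src_path: str, dst_path: str) -> None:
    """Import a CSV, correct obvious issues, export in a corporate layout."""
    with open(src_path, newline="", encoding="utf-8") as src:
        rows = list(csv.DictReader(src))

    corrected = []
    for row in rows:
        # Correction step: trim whitespace, normalize the decimal separator.
        amount = (row.get("amount_eur") or "0").strip().replace(",", ".")
        corrected.append({
            "CustomerID": (row.get("client_id") or "").strip(),  # renamed column
            "Amount": f"{float(amount or 0):.2f}",               # normalized number
        })

    # Export in the (hypothetical) corporate format: semicolon-separated CSV.
    with open(dst_path, "w", newline="", encoding="utf-8") as dst:
        writer = csv.DictWriter(dst, fieldnames=["CustomerID", "Amount"], delimiter=";")
        writer.writeheader()
        writer.writerows(corrected)

# Usage (hypothetical file names):
# import_correct_export("supplier_extract.csv", "corporate_export.csv")
```

In a real low-code tool, this same mapping is usually declared through configuration screens rather than written by hand, which is precisely what lets business users own and evolve it.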
2. Building a foundation for the company
This foundation is relatively agnostic to business use cases: it addresses the challenge of making reliable data available to the right people at the right time, in line with operational needs.
Increasingly, companies are deploying data platforms whose primary goal is to replace the Data Warehouse (DWH), but which also facilitate data sharing across business departments.
These platforms integrate parallel data pipelines, each often dedicated to a specific business area.
Here, the priorities are first and foremost technological (a minimal pipeline sketch follows this list):
- The first is data availability (quality and accessibility): data must be queryable and easy to surface.
- Performance comes second, balancing data volumes against latency requirements that are often driven by event streams.
- Processing then adds value to the data: computing indicators, enriching it through service providers or via Machine Learning.
- Governance, covering access rights and traceability, must also be managed.
- Finally, monitoring and its corollary, ‘observability,’ are the hallmarks of a well-controlled infrastructure.
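To ground these five concerns, here is a rough, hypothetical sketch of one event-driven pipeline stage in Python. The event shape, the enrichment rule, and the access-rights table are invented for illustration; real platforms delegate most of this to dedicated tooling (orchestrators, catalogs, monitoring stacks).

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")  # observability: every stage emits logs

# Governance (hypothetical): which roles may read which fields.
FIELD_ACCESS = {"revenue": {"finance"}, "customer_id": {"finance", "sales"}}

def quality_check(event: dict) -> bool:
    """Availability/quality: reject events that downstream queries cannot use."""
    return event.get("customer_id") is not None and event.get("revenue", -1) >= 0

def enrich(event: dict) -> dict:
    """Processing: add a derived indicator (a made-up revenue band)."""
    event["revenue_band"] = "high" if event["revenue"] > 10_000 else "standard"
    return event

def authorize(event: dict, role: str) -> dict:
    """Governance: filter fields by role (derived fields pass by default)."""
    return {k: v for k, v in event.items() if role in FIELD_ACCESS.get(k, {role})}

def process(stream, role: str = "sales"):
    """Performance: handle events one by one as they arrive (event-driven)."""
    for event in stream:
        start = time.perf_counter()
        if not quality_check(event):
            log.warning("dropped event %s", event)  # traceability
            continue
        out = authorize(enrich(event), role)
        log.info("processed in %.1f ms: %s", (time.perf_counter() - start) * 1e3, out)
        yield out

# Usage with a toy in-memory stream:
events = [{"customer_id": 1, "revenue": 25_000}, {"customer_id": None}]
list(process(events))
```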
Conclusion
In contrast to the first scenario, implementing such pipelines within enterprise platforms is the natural domain of Data Engineers and their mastery of increasingly sophisticated languages.
And let’s not forget the emergence of generative AI, which simplifies coding for increasingly complex scenarios.