Digibee Platform Overview
The Digibee hybrid integration platform drastically reduces the complexity of your integration environments and enables you to transform your legacy enterprise systems 10 times faster than other platforms.
Today’s integration teams need access to a host of tools and business cases that allow them to mix traditional and modern integration styles. Digibee has the most important features you need to increase production and reduce time to market.
Interface and resources to design your integration
Digibee’s user-friendly, powerful low-code UI lets you create pipelines* by dragging and dropping components onto the canvas and configuring them with forms, drastically reducing implementation time.
*Pipeline is the name used by Digibee for each integration flow.
Native connectivity and transformation capabilities
Components are the fundamental building blocks of a pipeline. Our platform provides a wealth of native components including:
- Connecting to REST and SOAP Web Services endpoints
- Advanced data transformation
- Data structure conversions
- XML to JSON
- JSON to XML
- JSON String to JSON
- CSV to JSON
- Data streaming
- Database connectivity (SQL statements and procedures)
- Message broker
- File operations
- Cloud-based storage services:
- Google Storage, AWS S3, Google Drive, OneDrive and Dropbox
- ERP integration:
- Oracle E-Business Suite
- MS Excel integration
- File exchange:
- FTP and SFTP
- Email Sending
- User Management and Authentication
- Basic Authentication
- OAuth 2.0
- Credentials Vault
- Symmetric and asymmetric cryptography
- Digital signatures
- Event Publisher
- Relationships – mapping identifiers between different systems
- Object Store – ability to temporarily store data in the Platform for multiple integration use cases
- Robotic Process Automation scripts
- Structure and data message validators
- Conditional processing
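To illustrate what a data-conversion component such as “CSV to JSON” does under the hood, here is a minimal sketch in plain Python. This is not Digibee’s implementation – on the platform, the conversion is a ready-made, no-code component:

```python
import csv
import io
import json

def csv_to_json(csv_text: str) -> str:
    """Convert CSV text (with a header row) into a JSON array of objects."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows)

result = csv_to_json("id,name\n1,widget\n2,gadget")
# result is a JSON array: one object per CSV row, keyed by the header fields
```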
Capsules are reusable components that any user of the low-code platform can create by applying the same visual development model used for pipeline creation. They let you define integration flows that are published to the components palette for later use.
With capsules, you can offer pre-packaged business logic that can be used by internal teams, customers and partners.
Whenever a business process is implemented, keeping data consistency across systems is a challenge. For instance, a product is represented in different ways by an ecommerce system and a warehouse management system (different IDs, different attribute names); still, they represent the same physical product that needs to be delivered to a customer. The Digibee Relationship Management feature allows you to create mappings between different systems, providing both data and process consistency, while greatly simplifying pipeline creation.
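Conceptually, Relationship Management maintains a mapping table between each system’s local identifiers and a shared key. The sketch below is a hypothetical model of that idea – the class and method names are illustrative, not Digibee’s API:

```python
class RelationshipMap:
    """Maps (system, local_id) pairs to one canonical identifier."""

    def __init__(self):
        self._pairs = {}  # (system, local_id) -> canonical key

    def link(self, canonical_key, system, local_id):
        self._pairs[(system, local_id)] = canonical_key

    def resolve(self, system, local_id):
        return self._pairs.get((system, local_id))

# The same physical product, known by different IDs in two systems:
rel = RelationshipMap()
rel.link("product-42", "ecommerce", "SKU-9001")
rel.link("product-42", "warehouse", "WH-77-A")
canonical = rel.resolve("warehouse", "WH-77-A")  # both IDs resolve to "product-42"
```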
When building integrations that touch multiple systems, it’s not uncommon to have to rely on data staging areas, such as temporary tables. The object store lets you insert, update, search and delete JSON documents within collections, providing much needed functionality in a structured, efficient and easy-to-use manner. A common usage pattern for the Object Store is implementing a transaction queue, where transactions are saved as documents, so that they can be sequentially processed as needed.
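To make the transaction-queue pattern concrete, here is a hypothetical in-memory sketch: documents are inserted into a collection, searched by status, and deleted once processed. Names and structure are illustrative, not the platform’s actual API:

```python
import json

class ObjectStore:
    """Minimal in-memory stand-in for a JSON document store with named collections."""

    def __init__(self):
        self._collections = {}

    def insert(self, collection, doc):
        # JSON round-trip simulates storing an independent copy of the document.
        self._collections.setdefault(collection, []).append(json.loads(json.dumps(doc)))

    def search(self, collection, predicate):
        return [d for d in self._collections.get(collection, []) if predicate(d)]

    def delete(self, collection, predicate):
        docs = self._collections.get(collection, [])
        self._collections[collection] = [d for d in docs if not predicate(d)]

# Transaction-queue pattern: save transactions as documents, process them in order.
store = ObjectStore()
store.insert("tx-queue", {"id": 1, "status": "pending"})
store.insert("tx-queue", {"id": 2, "status": "pending"})
for tx in store.search("tx-queue", lambda d: d["status"] == "pending"):
    # ... process the transaction here, then remove it from the queue ...
    store.delete("tx-queue", lambda d, tx=tx: d["id"] == tx["id"])
```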
Replicating a process across a large chain of stores or organization facilities is a significant integration challenge, not to mention maintaining consistency and a low cost of ownership. Multi-instance pipelines solve this problem: they consist of a single pipeline with a parameter map that holds all specific information for the stores or facilities being integrated. Any process change is made to that single pipeline, but deployments can be made separately for each store, providing resilience and making it easier to expand the customer operation in a managed, monitored way.
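Conceptually, a multi-instance pipeline pairs one shared flow with a per-instance parameter map. The sketch below models that idea; the parameter names and URLs are invented for illustration:

```python
# One pipeline definition, many deployments: each store supplies its own parameters.
PARAMETER_MAP = {
    "store-sp": {"erp_url": "https://erp.example/sp", "timezone": "America/Sao_Paulo"},
    "store-ny": {"erp_url": "https://erp.example/ny", "timezone": "America/New_York"},
}

def run_pipeline(instance: str) -> str:
    params = PARAMETER_MAP[instance]
    # ...the single, shared integration logic would use `params` here...
    return f"syncing inventory against {params['erp_url']}"
```

A process change is made once, in `run_pipeline`; each store’s deployment only differs by the entry it reads from the map.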
Every pipeline needs a trigger for its execution. The Digibee HIP offers many different trigger types including:
- exposing the pipeline to direct API calls (REST or HTTP endpoint);
- scheduling it for recurring execution (Scheduler);
- associating the pipeline with a registered event, allowing for asynchronous execution – more details in the Events section;
- configuring the pipeline to listen to a JMS topic or queue (ActiveMQ and OracleAQ);
- listening to messages from a RabbitMQ broker;
- consuming messages from a Kafka topic;
- monitoring emails in an IMAP mailbox.
Digibee’s Platform was designed around the event-based paradigm: pipelines generate and consume events, creating a fully asynchronous and resilient environment. Events can be managed, correlated and re-executed according to specific customer business needs.
Safe, agile testing
Every pipeline can be executed in test mode, calling real-world endpoints and systems, while providing execution logs and messages from within the pipeline design canvas. This feature enables fast validation, making it easier to make adjustments and corrections without redeploying the pipeline to an execution environment.
Native, non-optional versioning generates minor versions for small changes and major versions for changes that impact the pipeline inputs and outputs. This strategy preserves the version integrity and enables future pipeline evolution.
Sensitive information masking
Data that should not be exposed can be tagged as sensitive data, so it is obfuscated in every platform output (logs and messages).
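Conceptually, masking replaces the values of tagged fields before anything reaches a log or output. A minimal sketch of the idea follows; the field names are illustrative, and on the platform the tagging is configured visually, not in code:

```python
# Fields tagged as sensitive (hypothetical names for illustration).
SENSITIVE_FIELDS = {"password", "card_number"}

def mask(message: dict) -> dict:
    """Obfuscate sensitive fields before the message is logged or displayed."""
    return {k: ("***" if k in SENSITIVE_FIELDS else v) for k, v in message.items()}

masked = mask({"user": "ana", "password": "s3cret"})
# Only the tagged field is obfuscated; everything else passes through untouched.
```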
Every operation is audited and stored securely by the Platform, so it cannot be inappropriately changed.
Deployment is made in seconds: choose the environment (Test or Production), select the pipeline version and the deployment size. The low code platform will create a corresponding number of pipeline replicas according to the selected deployment size, enable monitoring and publish the pipeline, making it immediately available.
A pipeline can be deployed to non-production environments, making it easier to validate at runtime. When a pipeline is ready, all you have to do is deploy it to the production environment.
The Digibee Hybrid Integration Platform is 100-percent cloud native, fully based on containers orchestrated through Kubernetes. A pipeline is deployed through replicas, which are identical execution instances, yet logically isolated into Pods. When an event or a request associated with a pipeline is detected, an available replica will pick it up for immediate processing.
If a replica experiences a critical error that prevents it from successfully finishing the request, the request is handed to another available replica and processed. The misbehaving replica is automatically recycled and brought back online, ready for new requests.
The low-code platform architecture promotes process isolation, which means a running replica does not impact the performance or stability of any other replica, whether or not it serves the same pipeline.
The platform architecture allows the number of replicas to be adjusted to each pipeline’s specific processing requirements. Each replica in our SaaS environment runs in a separate zone.
From the moment a pipeline is deployed, monitoring is automatically activated – no human intervention is needed.
The Platform dashboard provides graphical representation of the pipeline’s execution behavior: deployed version, average execution time, errors and execution dynamics, as well as access to execution logs.
Each execution generates detailed logs, providing execution time, request and response pipeline messages. Additional logs can be created by the pipelines to support specific business needs and requirements.
The platform generates events that represent pipeline specific conditions. These events can be viewed and sent to third-party ticket management and monitoring solutions.
Versioning and history
From the pipeline version history it is possible to generate a new version of the pipeline, which can be tested and evolved in the Test environment, while the production version continues to run, unaffected.
Coexistence and evolution
The Platform architecture allows different major versions of a pipeline to be running simultaneously in production, enabling adoption of coexistence and zero-downtime strategies.
Ready for Agile teams
The Digibee Hybrid Integration Platform unleashes the benefits of Agile development. Its intuitive UI turns rapid prototyping and componentization into an agile reality, so teams can move forward without being impacted by interdependencies that, most of the time, prevent them from delivering on their commitments.
Differentiated support model
The Digibee Hybrid Integration Platform creates an environment that enables building integrations collaboratively. Our support model creates opportunities for agile teams to interact with our integration consultants in real-time, either to get the answers they need, discuss best practices, or to get support to develop and implement their critical integration processes.
In addition, Platform information is publicly available and customers can subscribe to our status page and be notified in case of any Platform component unavailability.
The Digibee HIP is a 100-percent cloud-native platform. It runs on Kubernetes, a proven execution platform that provides huge resiliency and scalability.
The Pipeline Engine
The pipeline engine is the core element of the platform – think about it as Integration Runtime – and is designed and extensively tested to deliver performance, resiliency and reliability, so you can seamlessly execute your business processes. The pipeline engine is responsible for executing all the deployed integrations.
More about the Digibee HIP’s inner workings
Every integration is interpreted by the Pipeline Engine and executed in isolated containers, meaning each pipeline has a dedicated CPU and memory capacity allocated to its execution. This functionality prevents a misbehaving integration from impacting other integrations’ performance and stability.
Thanks to Kubernetes, when a problematic execution is detected, the offending integration is automatically restarted to be up-and-running again in a matter of milliseconds. To ensure high fault-tolerance, every integration pipeline is executed on multiple availability zones.
This is very different from traditional integration solutions such as ESBs, where all integrations share the same execution context.
The Pipeline Engine has all the necessary code to execute any component available in our low code platform. There is no need to write additional code to connect to a supported technology.
We offer components for message processing, flow control, support for web protocols such as SOAP and REST, file manipulation, data manipulation for both relational and NoSQL databases, security, cryptography and many others.
The Pipeline Engine itself can’t communicate with the external world, it needs a trigger to invoke it. The low code platform offers a great variety of triggers such as API/REST, Event-based, Scheduled, Message Queues, Email and HTTP.
For both components and triggers, we have a dedicated development capacity for creating new versions to continuously expand the Platform capabilities and provide increasing support for more enterprise integration scenarios.
To better illustrate the trigger concept, let’s walk through some use scenarios:
A company wants to offer its developers specific services through API calls. In that case, the integration pipeline needs to be configured with a REST trigger: when it is deployed, an endpoint is exposed for users to call, and requests submitted to that endpoint are forwarded to the pipeline for processing.
An integration reads files from an SFTP folder, processes them and loads their data into a system that generates statutory reports – and it needs to run once a day. This can be accomplished with a Scheduler trigger programmed to execute every day at 1:00 AM.
An integration needs to process all messages posted to a specific queue or topic of a message broker software (think RabbitMQ, Kafka). All it takes is configuring the integration pipeline with a Message Queue trigger, so it will process messages as they arrive.
The Managed Queue
We have implemented a native queueing mechanism: whenever a trigger receives a request or a scheduled execution fires, it puts a message on the corresponding pipeline queue, and the pipeline is activated as soon as the message arrives. If the integration fails to process the message and is automatically restarted, the message is not lost – the pipeline catches up and processes the pending messages once it is back online. This provides a high level of resilience, making the platform virtually immune to execution failures.
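The behavior can be sketched as a queue that only acknowledges a message after it has been successfully processed, so a crash-and-restart never loses it. This is a conceptual model, not the platform’s implementation:

```python
from collections import deque

class PipelineQueue:
    """Messages stay queued until a replica acknowledges successful processing."""

    def __init__(self):
        self._pending = deque()

    def publish(self, message):
        self._pending.append(message)

    def consume(self, handler):
        processed = []
        while self._pending:
            message = self._pending[0]   # peek; don't remove yet
            try:
                handler(message)
            except Exception:
                break                    # replica "crashed": message survives in the queue
            self._pending.popleft()      # acknowledge only after success
            processed.append(message)
        return processed

q = PipelineQueue()
q.publish("order-1")
q.publish("order-2")

crashed = {"order-1"}  # simulate a failure on the first attempt at order-1
def flaky(msg):
    if msg in crashed:
        crashed.discard(msg)
        raise RuntimeError("replica restarted")

q.consume(flaky)         # fails on order-1; nothing is lost
done = q.consume(flaky)  # the "restarted" replica catches up on both messages
```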
Putting it all together
Now we are going to follow a transaction from the moment the customer submits it, until the moment the platform provides a response. Let’s assume the transaction is an API call.
The low-code platform offers a built-in API gateway, protected by cloud-provider security features that mitigate attacks such as DDoS. As soon as the transaction passes through the provider’s edge infrastructure, it is received by our gateway, which routes the message to the corresponding trigger – in this case, the REST trigger.
The message is posted on the pipeline queue. The pipeline receives it and processes it according to the designed processing flow – it can manipulate the message, make decisions based on its contents, transform it and enrich it with data obtained from other sources. For enhanced performance and functionality, the low-code platform also has dedicated advanced caching services and a temporary object storage system, called the Object Store.
After the message is processed, the pipeline provides the response to the submitted request.
Designing, Deploying and Operating Integrations from the User Perspective
All this process can be monitored through the Platform portal. The portal enables users to:
- build pipelines using the pipeline canvas, which lets you drag and drop components and draw the integration flow;
- test pipelines through the integrated test-mode functionality;
- deploy pipelines to Test and Production environments;
- monitor transactions and message contents;
- access the audit logs.
Every credential needed to access the systems being integrated is stored in the platform credentials vault. Once a credential is created, it cannot be directly read by anyone – it can only be accessed by the pipeline at runtime. This strategy prevents direct access to credentials, providing exceptional security and governance: customers can create credentials and share them with pipeline developers for building purposes only – the credentials’ contents are not available for viewing or editing.
Connectivity to customer environments
To be able to access resources that are within internal customer networks we offer dedicated VPN gateways that are completely isolated through network policies.
The Platform has a series of security controls including:
- audit for all administrative actions
- 2FA for platform access
- complete user lifecycle management with responsibility segregation
- ability to integrate user management with customer’s own Active Directory or others
- best practices for endpoint exposure by using IPS and WAF mechanisms of the major cloud providers
- 24×7 monitoring
- real time incident reporting via statuspage.io
- CPU and memory reservation for each pipeline
- infrastructure isolation for each pipeline
Every pipeline can be hardened to meet business and technical security needs using market-standard controls, including:
- sensitive fields
- password management
- rate limiting
- IP restriction
- payload size limits
LEARN MORE ABOUT THE PLATFORM
Check out our documentation to learn more about the Digibee Hybrid Integration Platform and modernize your legacy enterprise platform.
Ready to learn more? Contact us and we’ll talk about future proofing your integration systems.
1398 SW 160th Ave, suite 106
Sunrise, FL – 33326
WeWork – The Hub
3601 Walnut Street
Denver, CO – 80205
9th floor, Vila Olímpia
São Paulo – SP