In-Stream Processing Service Blueprint

Blueprint Goals

The blueprint provides engineering teams with pre-made, self-deployable cloud infrastructure for developing and testing real-time in-stream processing applications, while enabling operations teams to deploy, operate, and grow enterprise-grade production infrastructure. Our design goals for the blueprint are as follows:

  • Pre-integrate event queueing, stream processing, data storage, insight delivery and result visualization into a single platform.
  • Support high-throughput (up to 100,000 events/second), low-latency (under 60 seconds from event to insight) stream processing
  • Fault-tolerant, highly available, dynamically scalable computational platform
  • Programmable via the Spark Streaming API in Java or Scala (see the sketch after this list)
  • Support any algorithm that runs on Spark Streaming, including Spark SQL Streaming and machine learning
  • Store up to 30 days of raw data and insights
  • Support in-stream, batch and on-demand insight delivery
  • Composed of 100% free, open source software supported by an active community
  • Cloud-ready and portable across public and private clouds
  • Developer-friendly
  • Production-ready
  • Proven in mission-critical implementations
  • Interoperable with any big data platform
  • Extendable to support new use cases and unique requirements
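To make the programming model concrete, below is a minimal Spark Streaming sketch in Scala. It is illustrative only: the TCP socket source, the comma-separated event format with the event type in the first field, and the 10-second batch interval are assumptions made for the sketch, not blueprint requirements; in the blueprint the stream would be fed from the event queueing layer.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object EventCountSketch {
  def main(args: Array[String]): Unit = {
    // Local master and a 10-second micro-batch interval are illustrative
    // choices for experimentation, not blueprint requirements.
    val conf = new SparkConf().setMaster("local[2]").setAppName("EventCountSketch")
    val ssc  = new StreamingContext(conf, Seconds(10))

    // Hypothetical source: newline-delimited, comma-separated events arriving
    // on a TCP socket. In the blueprint, events would come from the event
    // queueing layer instead.
    val events = ssc.socketTextStream("localhost", 9999)

    // Count events per type (first field) within each micro-batch.
    val countsByType = events
      .map(line => (line.split(",")(0), 1L))
      .reduceByKey(_ + _)

    // Print the per-batch counts; a production pipeline would write them to
    // the storage and insight-delivery layers rather than to the console.
    countsByType.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

Running the sketch against a local socket (for example, `nc -lk 9999`) shows the micro-batch model in action: each 10-second batch is processed as a small Spark job, which is how Spark Streaming carries batch Spark's APIs, fault tolerance, and scaling behavior over to streaming workloads.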

Read the Blueprint

Post 4. In-Stream Processing Service Blueprint

Post 3. Overview of In-Stream Processing Solutions On the Market

Post 2. How In-Stream Processing Works

Post 1. What is In-Stream Processing?

Subscribe to Our Blog
Contact Us To Learn More
