BlobCity is a multi-model, real-time analytics database. It removes the database as a concern from application architectures: it not only processes stored data at high speed, but also processes data in motion during ongoing transactions.
docker run -p 10113:10113 -p 10111:10111 blobcity/db
The above command is good for quick testing and sandbox installs. All data stored in the database stays inside the container, so this setup is not advisable for production installations.
You can map an external data folder into the container using the following command:
docker run -p 10113:10113 -p 10111:10111 -v /mydir:/data blobcity/db
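For longer-lived deployments, the same volume mapping can be expressed as a Docker Compose file. This is a minimal sketch under stated assumptions: the service name `blobcity` and volume name `blobcity-data` are arbitrary choices, and the container data path `/data` is taken from the command above.

```yaml
# docker-compose.yml — minimal sketch; service and volume names are arbitrary
services:
  blobcity:
    image: blobcity/db
    ports:
      - "10113:10113"
      - "10111:10111"
    volumes:
      - blobcity-data:/data   # named volume keeps data across container restarts
volumes:
  blobcity-data:
```

Start it with `docker compose up -d`; the named volume survives container removal, unlike the sandbox setup above.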
Complete In-memory & On-disk storage engines
BlobCity is designed to cater to a wide variety of requirements. It is the only database that offers two complete storage engines: one in-memory and one on-disk. Dual storage lets you place some data in memory and some on disk while retaining the ability to query collectively across both stores, as if the data were never split.
Traditional approaches required such data to be stored in separate products, which greatly limited cross-query capabilities and added significant latency to query execution. Such architectures are no longer suitable for real-time, low-latency analytics requirements.
Hybrid Transactional / Analytical Processing
BlobCity is a fully ACID-compliant database in the category of Hybrid Transactional / Analytical Processing (HTAP) databases. It offers the capabilities of traditional relational database systems, with the added speed required for new-age analytics: the power of a NoSQL store, without compromising on any of the features of relational databases. HTAP is a new and actively evolving design for data storage systems, and we are amongst the first in the market to cater to HTAP requirements.
We do not expect our customers' data to arrive in a uniform format. BlobCity natively stores and processes JSON, XML, CSV, SQL and plain-text data, so your application can collect and process data from diverse sources without the added complexity of converting everything to a uniform format prior to ingestion.
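To illustrate the kind of format diversity involved, here is a small Python sketch, independent of BlobCity itself, that renders one record in three of the formats listed above (JSON, CSV and XML) using only the standard library. The record contents are invented for illustration.

```python
import csv
import io
import json
import xml.etree.ElementTree as ET

# One illustrative record, expressed three ways.
record = {"id": 1, "name": "Alice", "score": 9.5}

# JSON: a direct serialization of the dictionary.
json_doc = json.dumps(record)

# CSV: a header row followed by one data row.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=record.keys())
writer.writeheader()
writer.writerow(record)
csv_doc = buf.getvalue()

# XML: each field becomes a child element of <record>.
root = ET.Element("record")
for key, value in record.items():
    ET.SubElement(root, key).text = str(value)
xml_doc = ET.tostring(root, encoding="unicode")
```

A multi-format store ingests documents like these as-is; without one, an application would need conversion code of this sort at every ingestion point.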