It’s been nearly a year since my last blog post, so this one will be packed full of what’s been happening with the Bythos platform, as well as new projects for Bythos Labs.
In reference to the post Bythos platform updates, March 2024, the following changes were implemented:
- GitLab - replaced with Gitea + Gitea Actions
- Redis - replaced with Dragonfly and the Dragonfly Operator
- Linkerd - removed as an unnecessary burden for private networks
- Backstage - dropped in favor of a custom Go application for ultimate control of the UI layer
Along with the above changes, the following new services were added:
- RenovateBot - integrated with Gitea to open pull requests for dependency updates
- SonarQube - integrated with Gitea for code quality and security analysis
- Harbor - a private container registry used for custom images (created from Nixpacks)
- ArgoCD - used for custom application deployment into the local Kubernetes cluster, separating infrastructure services (managed by Flux) from user-created applications
- Tailscale - implemented as a subnet router to allow access to PostgreSQL and Dragonfly databases from local client machines for development
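The Flux-versus-ArgoCD split can be sketched with an ArgoCD `Application` manifest along these lines; the project, repository URL, and namespaces are hypothetical placeholders rather than the platform's actual configuration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-user-app        # hypothetical user-created application
  namespace: argocd
spec:
  project: user-apps            # kept separate from Flux-managed infrastructure
  source:
    repoURL: https://gitea.internal.example/apps/example-user-app.git
    targetRevision: main
    path: deploy
  destination:
    server: https://kubernetes.default.svc
    namespace: example-user-app
  syncPolicy:
    automated:
      prune: true               # remove resources deleted from Git
      selfHeal: true            # revert out-of-band changes
```

With a boundary like this, Flux never reconciles anything under the user-apps project, so the two GitOps controllers don't fight over the same resources.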
Circling back: dropping Backstage in favor of a custom Go application was about the long-term vision for this project, and about letting go of quick solutions whose main appeal was propelling it into the market sooner. It makes more sense to build the UI for this project in Go, where it can interact with and manage Kubernetes resources directly.
In order to help develop the UI application, I’ve started playing around with different AI tools. LocalAI is now running in the cluster with many open source models to choose from, such as DeepSeek, Gemma (Google), Llama (Meta), Mistral, Phi (Microsoft), Qwen (Alibaba Cloud), and Stable Diffusion, along with many variants of those base models.
Open WebUI is used as the frontend interface for the LocalAI API (which is OpenAI-compatible). This combination is a great alternative to commercial AI services, especially since I can set the context size as large as I like (currently 128k tokens). Limiting the context size is one of the ways commercial providers push you into upgrading your plan and spending more money.
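Because the API is OpenAI-compatible, any plain HTTP client can talk to LocalAI. The Go sketch below shows the shape of that exchange: the model name is a placeholder, and a local mock server stands in for the real in-cluster endpoint so the example runs anywhere; point `ask` at the actual LocalAI service URL in practice.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// Minimal OpenAI-compatible chat-completions wire shapes; LocalAI
// serves the same format, so the client code is identical either way.
type message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type chatRequest struct {
	Model    string    `json:"model"`
	Messages []message `json:"messages"`
}

type choice struct {
	Message message `json:"message"`
}

type chatResponse struct {
	Choices []choice `json:"choices"`
}

// ask sends one prompt to an OpenAI-compatible endpoint and returns
// the first completion's content.
func ask(baseURL, model, prompt string) (string, error) {
	body, err := json.Marshal(chatRequest{
		Model:    model,
		Messages: []message{{Role: "user", Content: prompt}},
	})
	if err != nil {
		return "", err
	}
	resp, err := http.Post(baseURL+"/v1/chat/completions", "application/json", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var out chatResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	if len(out.Choices) == 0 {
		return "", fmt.Errorf("empty response")
	}
	return out.Choices[0].Message.Content, nil
}

// mockServer stands in for LocalAI so the sketch is self-contained.
func mockServer() *httptest.Server {
	return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		json.NewEncoder(w).Encode(chatResponse{
			Choices: []choice{{Message: message{Role: "assistant", Content: "hello from the mock"}}},
		})
	}))
}

func main() {
	srv := mockServer()
	defer srv.Close()

	reply, err := ask(srv.URL, "llama-3", "Say hello")
	if err != nil {
		panic(err)
	}
	fmt.Println(reply) // → hello from the mock
}
```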
On the client/workstation side, I’ve been playing around with the Continue extension for VS Code, as well as Aider and Plandex. At this time, I prefer Continue since it forces me to spend more time refining my approach to each situation instead of just accepting pull requests from AI agents.
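For wiring Continue to a self-hosted endpoint, a model entry along these lines points the extension at an OpenAI-compatible API. The model name and URL here are hypothetical, and the config format has changed across Continue releases, so check the current Continue configuration reference rather than treating this as authoritative:

```json
{
  "models": [
    {
      "title": "LocalAI (cluster)",
      "provider": "openai",
      "model": "llama-3",
      "apiBase": "http://localai.internal.example:8080/v1"
    }
  ]
}
```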
The Web3 revolution
I’ll start this section by saying that I have a love-hate relationship with being a business owner. I want the ultimate freedom in building this product and related services without spending all of my time and money on running the business according to traditional customs, especially since that doesn’t exactly fit in with decentralized autonomy.
Furthermore, I’m an engineer and architect by nature and will strive to fit myself into that role for the remainder of my life. This isn’t my full-time job. I have a regular job and work on this project in my spare time for the love of the craft. After the first year rolled around, I needed to pay fees and file taxes for an open source project with no income and no traction (because I haven’t released it yet), which led me to second-guess my decision to incorporate a business. I might as well make the underlying organizational structure as simple as possible and remain independent.
Initially, I formed a C corporation to attract investors to the idea, but I really want the software platform to be community-owned. For now, I’ll just bootstrap it through to launch, which is still years away, funding it with the additional income I receive from my day job. Even after I figure out how to generate income from free software, it should be primarily passive. For anyone else interested in doing that, I’ll explain what I’ve discovered and started to navigate below.
Decentralized CPU, GPU, and storage resources
There are several emerging projects that allow practically anyone to contribute compute and storage resources to global pools, decentralizing online services. Here are some I've found that are geared toward systems engineers.
- Aethir - aggregates enterprise-grade GPU chips into a single global network that increases the supply of on-demand cloud compute resources for the AI, gaming, and virtualized compute sectors. (Cloud Host Guide)
- Akash - built on Kubernetes, Akash ensures a secure, tested, and reliable platform for hosting applications. The Akash Network functions as a peer-to-peer network comprising clusters of computation nodes, each running Kubernetes. (Provider docs)
- Filecoin - an open-source cloud storage marketplace, protocol, and incentive layer. Filecoin is a decentralized storage network designed to store humanity’s most important information. (Provider docs)
- Fleek - an open-source Edge Computing Platform to accelerate the development and execution of decentralized web services. The Fleek Network is a proof-of-stake protocol that takes advantage of Ethereum for staking, payments, governance, and other economic features. (Node operator docs)
- Golem Network - an open-source and decentralized platform where everyone can use and share each other’s computing power without relying on centralized entities like cloud computing corporations. (Provider docs)
- Internet Computer - the network, which runs the Internet Computer Protocol (ICP), is orchestrated by permissionless decentralized governance and is hosted on sovereign hardware devices run by independent parties. Its purpose is to extend the public internet with native cloud computing functionality. (Node provider docs) (Juno)
- Openmesh - leverages Xnode technology to create a truly decentralized cloud infrastructure. Xnode is an all-in-one infrastructure deployment and configuration system within Openmesh’s P2P immutable data network. At its core, Xnode runs on XnodeOS, a custom operating system based on NixOS. (Contributions)
- Storj - the leading provider of enterprise-grade, globally distributed cloud object storage. It is a drop-in replacement for any S3-compatible object storage that is just as durable but with 99.95% availability and better global performance from a single upload. Storj delivers default multi-region CDN-like performance with zero-trust security at a cost that’s 80% lower than AWS S3. (Node operator docs)
Building a home data center
Some of the provider and node-operator docs listed above include examples of the hardware, network, and power requirements for joining each network, as well as calculators for the income a node can potentially earn.
It’s likely that in order to provide a full suite of web services comparable to centralized cloud providers, a node operator will need to host several different protocols, as described in the Akash Network Product Strategy page. I appreciate the Akash Network for its use of Kubernetes and their clarity of vision in needing to partner with other service providers, such as Fleek and Storj. As a node operator, I can build my list of services similar to how a Kubernetes cluster can be assembled from various Cloud Native Landscape projects.
The home data center that I’m building will be mixed use, containing some of the Web3 services listed above as well as public services running on the Bythos platform. I may decide to colocate some hardware at a local data center, but I’ll design enough resilience into my own systems that uptime and availability will be comparable to big tech. Ultimately, it’s up to the blockchain protocol and web application developers to embed that resilience into their software. In other words, don’t rely on a single node operator (or cloud company) to provide all the services for your application.
In the Akash Network Roadmap, there's an item named Akash at Home, which outlines a plan for a production-grade edge data center at home. With an initial investment of only $2M required for their model, it's something that I can only dream of. However, I'm on a mission to build a poor man's version of that and become profitable. More details about what I'm building will be published in future blog posts.