Reclaim Cloud – Reclaim Hosting https://www.reclaimhosting.com

That Mathers Aesthetic! https://bavatuesdays.com/that-mathers-aesthetic/ Wed, 27 Mar 2024 09:51:50 +0000

The great Maren Deepwell (who Reclaim Hosting has been lucky enough to work with after her long stint as ALT’s brilliant CEO) has created a visual anthology celebrating 10 years of Reclaim’s art. It’s a very cool video, and I highly recommend you partake in the celebration 🙂

I’m really proud of the brand we’ve been able to forge over the last decade. Trying to make a relatively boring product like web hosting compelling has been one of the funnest elements of the job. What’s more, getting to partner with the entire Reclaim team and the likes of Bryan Mathers to do it has been the real gold.

Gold Record plan for the newly launched Reclaim.Press

Bryan will quickly deflect any praise or compliment right back at you, taking little to no credit for how much his own visual storytelling shapes our identity. He captures a raucous sense of cultural play without sacrificing a deep, company-wide commitment to an open and independent web.

Bryan Mathers’ early vision for the indie edtech record store, snide proprietor and all 🙂

There’s a fine line to walk between tongue-in-cheek references and an underlying commitment to what can be referred to as indie edtech or trailing-edge technology: a belief that the new and shiny can often obfuscate the long history of edtech, and erase any sense of the embedded cultural history that impacts our daily lives.

The iconic, industrial visual label of Reclaim Cloud

The link between edtech and vinyl or edtech and an independent record store is not only playful, but also an argument for the production of online culture and the value of an independent space to do it. Reclaim understands itself outside the homogenizing box-store mentality of massive social media sites and hosting services, much like independent record labels such as Dischord that are anathema to the mainstream business of music. It’s about community, it’s about focus, and it’s about controlled, responsible growth. These are so many of the understated, subliminal messages in the Mathers Aesthetic that are directly linked to an ethos of indie edtech.

The ds106 album cover where Giulia Forsythe remixes Raymond Pettibon, which in many ways prefigures Bryan Mathers’ remixing

I don’t think it would be an overstatement to say that Bryan, however inadvertently, has become the Raymond Pettibon of indie edtech, and his aesthetic permeates far beyond the confines of Reclaim Hosting’s “album covers.” If I was being selfish I would refer to all the great work he has created for us over the years as the “Reclaim aesthetic,” but if I were to be honest it’s more appropriate to say “that Mathers aesthetic!”—and we have just been lucky enough to be early to the party. Go ahead, try and create your online identity with an AI prompt, we’ll be over here working with the artists.

Reclaim Hosting: the Site on the Edgeport of Forever Uptime https://bavatuesdays.com/reclaim-hosting-the-site-on-the-edgeport-of-forever-uptime/ Wed, 13 Dec 2023 12:31:55 +0000

This post was cross-posted to Reclaim Hosting’s new company blog “Reclaim the Blog,” so you can read this post there as well.

Screenshot from Star Trek Episode "The City on the Edge of Forever"

Are we ready for internet time travel with 100% uptime?

To be clear, forever uptime is a dangerous claim, and anyone that promises you 24/7, 100% uptime is flirting with disaster in the hosting world. That said, my experimentation with Edgeport—a new enterprise-grade DNS, CDN, and Load Balancing service much in the vein of Cloudflare—has moved beyond this blog and has extended to Reclaim Hosting’s main website: https://www.reclaimhosting.com.

As already noted, I was blown away by the fact that even with both containers that drive this blog completely offline, the site loaded without issue for the better part of nine hours. It could’ve gone on much longer, but I had to re-enable the servers to actually write about the amazingness of it all 🙂

What was driving the uptime, regardless of the servers’ health, was the application delivery network, or ADN, which reproduces and caches not only the static site, but also its dynamic elements (search, page loading, etc.) across a vast network of servers that ensure the content remains online even when the underlying infrastructure goes offline. It’s pretty amazing to me, and it makes one flirt with that elusive and seductive portal dream of 100% uptime, even though one must always account for the imminent entropy of any system.
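If you want to poke at this yourself, one rough way to see whether a response is coming from an edge cache rather than the origin is to inspect the response headers. Here is a minimal Python sketch (standard library only); the header names are educated guesses, since every provider reports cache status a little differently, and the URL is just an example.

# Probe a URL and print any cache-related headers the edge returns.
# Header names vary by provider, so treat this list as a starting point.
from urllib.request import Request, urlopen

def probe(url):
    req = Request(url, headers={"User-Agent": "edge-probe/0.1"})
    with urlopen(req, timeout=10) as resp:
        print(url, resp.status)
        for name in ("age", "cache-control", "x-cache", "cf-cache-status", "server"):
            value = resp.headers.get(name)
            if value:
                print(" ", name + ":", value)

probe("https://www.reclaimhosting.com")

A hit served from the edge will typically show a non-zero age or an explicit cache-status header, while a miss goes back to the origin.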

Screenshot from Star Trek Episode "The City on the Edge of Forever"

www.reclaimhosting.com boldly going where no site has gone before!

But that being said, Reclaim Hosting has now gone where only the bava has boldly gone before it 🙂 The implications for our high-availability ReclaimEDU WordPress multi-region hosting are truly next generation. While we will refrain from promising 100% uptime, with fail-over between two servers (because Edgeport does that, just like Cloudflare), a robust content delivery network, and CNAME flattening, we are able to post a lot of .9999999999s. With Edgeport we can harness all the benefits of the Cloudflare setup we engineered a year ago, but with a simpler interface and a more approachable, affordable service.

But beyond the load-balancing and sophisticated application caching going on, the real power of Edgeport lies in the manifold security improvements it provides. Over a year ago we hired Noah Dorsett, who has proved to be an amazing addition on the Reclaim security front, and I asked him to try and boil down some of the features Edgeport offers for a meeting on high-availability hosting I was taking last week. So, in true Noah fashion, he did an awesome job and provided an understandable, succinct highlight of the security affordances Edgeport provides. Here is what he sent me:

DDoS Protection: The application-layer distributed denial of service protection is great for hosting web applications, as these live in the ‘application layer’. Layer 7 DDoS attacks target this layer specifically, as this is where HTTP GET and POST requests occur, and they can eat up large amounts of server resources. These attacks are very effective compared to their network-layer alternatives, as they consume server resources as well as network resources. With application-layer DDoS protection in place, your site would be much more secure.

WAF: A WAF, or web application firewall, helps protect web applications by filtering and monitoring HTTP traffic between a web application and the Internet. It typically protects web applications from attacks such as cross-site request forgery, cross-site scripting (XSS), file inclusion, and SQL injection, among others. This type of firewall exists in the application layer, acting as a ‘shield’ between your web application (aka website) and the internet. Edgeport uses a dual WAF, which can be a confusing term. What this means is that there is an audit WAF that logs traffic and updates rules, but does not perform blocking. This audit WAF passes information to a production WAF, which uses that information to actively protect against and block malicious requests and attacks on the website. A dual WAF is much faster than a regular WAF, and provides better security to boot. WAF rules are generated by Edgeport’s dedicated security team as well, which means your rules will always be up to date and performing efficiently.
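To make the audit versus production distinction a little more concrete, here is a toy Python sketch of the idea; it has nothing to do with Edgeport's actual rule engine and the patterns are deliberately naive, but it shows how an audit-mode rule set only logs matches while a blocking-mode rule set rejects the request outright.

# Toy illustration of audit-mode vs. blocking-mode WAF rules (purely illustrative).
import re

RULES = [
    ("sql_injection", re.compile(r"union\s+select|or\s+1\s*=\s*1", re.I)),
    ("xss", re.compile(r"<script\b", re.I)),
    ("file_inclusion", re.compile(r"\.\./|/etc/passwd", re.I)),
]

def inspect(request_body, blocking=False):
    # Returns (allowed, matched_rule_names).
    hits = [name for name, pattern in RULES if pattern.search(request_body)]
    if hits and blocking:
        return False, hits               # production WAF: block the request
    if hits:
        print("audit WAF match:", hits)  # audit WAF: log it and let it through
    return True, hits

print(inspect("q=<script>alert(1)</script>"))                 # logged, allowed
print(inspect("q=<script>alert(1)</script>", blocking=True))  # blocked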

Bot Management: Edgeport uses an agentless, server-side, machine-learning-fueled bot management system to detect and stop bot traffic that could be slowing down your site or maliciously scraping content. The benefit of an agentless, server-side system like this is that you don’t have to run any code or do anything on the client end, and the detection is nearly invisible from a user perspective (and to bots as well). This allows the detection rules to catch more and impact performance less, keeping the website secure from all sorts of automated malicious tools and scrapers.

Image of bavatuesdays traffic over the previous month

You can literally see the moment, in terms of bot traffic, when I turned on the bot management tool in Edgeport

That last bit on bot management is a big difference I immediately noticed between Edgeport and Cloudflare. Whereas my daily traffic on Cloudflare clocked anywhere from 5,000 to 6,000 hits per day, when I moved to Edgeport those statistics dropped dramatically, closer to 1,000 to 2,000 hits per day. That’s not only much more believable as actual traffic for this humble blog, but it highlights just how many bots had been regularly scraping and otherwise accessing my site, which is not only a security risk, but also eats up unnecessary resources. So with Edgeport my site is not only safer, but also less resource intensive, and as a result more performant.

Now, to be clear, running Edgeport on my blog might be a bit of overkill given it does not need to be up 24/7 and it does not have the sensitive data and security needs of an institutional .edu site, for example. But if you are running a mission-critical, high-availability site for your institution, then Edgeport opens up a whole new world of cloud-native security on top of an industrial-grade DNS, CDN, and load balancing service, and together those are a truly powerful combination. It has provided Reclaim exactly what we needed for scaling our multi-region setups, and I couldn’t be more thrilled there’s a new player in this field that’s pushing the envelope and opening up possibilities for smaller companies like Reclaim Hosting with infinite heart but finite resources.

bava on the Edge https://bavatuesdays.com/bava-on-the-edge/ Fri, 17 Nov 2023 11:34:02 +0000

On the edge, I’ve been there
And it’s just as crowded as back home.

Dag Nasty, “La Peñita”

Yesterday I did a little experimenting on the good old bava.blog to test the notion of application delivery networks (ADNs). You have probably heard of Content Delivery Networks (CDNs), wherein static content is delivered via caches all over a service’s global network (the most popular being Cloudflare). Well, with this new acronym, beyond the content the whole application itself is cached across the network, so when one (or, in my case, both) of the servers driving the bava goes down, the site is unaffected: the network begins to deliver the application itself. Which means not only high availability, but virtually guaranteed 100% uptime.* I found it hard to believe, and I have been looking into edge computing thanks to Phil Windley’s recent post, but this was my first exploration of the concept.

Our cloud hosting at Reclaim Cloud is driven by the software developed for Jelastic, which was bought by Virtuozzo. It is something we’ve been pushing pretty hard on, with not only apps well beyond the LAMP stack, but also containers and the wonderful work of Docker, which in turn led us to start building a dedicated WordPress service on top of performant, affordable containerized WordPress hosting: ReclaimPress. As I’ve been working through ReclaimPress, I was shown the tool/service Edgeport. Very much positioned as a simplified, easy-to-use Cloudflare competitor, Edgeport was designed as a security-first, cloud-native Web Application Firewall with a global network that delivers applications dynamically, even when the origin servers are off. Their DNS options are an affordable alternative to Cloudflare for similar plans, which has been a key factor for me. To get in the door for enterprise at Cloudflare is somewhere in the ballpark of $3,000 a month (which the condescending Cloudflare sales agent was sure to remind me), whereas all the features we need, many of which are Cloudflare enterprise only, are part of a $199 a month plan at Edgeport. What’s more, I have not seen anything like ADN delivery at Cloudflare, so we now have a viable, affordable alternative to Cloudflare that can do even more. That makes me very happy.

I can harness a globally cached network, as well as load-balancing failover, plus the emergency backup of applications being cached and delivered in their entirety from the network (whether or not my servers load), and that is not even including the vast set of security tools that I have to dig into with Noah in more detail. It seemed like magic, so I spent much of yesterday testing it on this old blog.

I turned off both servers in the failover setup at 10:59 UTC and then powered them back on at 19:48, so just under 9 hours of downtime that did not stop a single page or post from working cleanly on my site.
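For anyone curious how to run a similar test, the checking part is nothing fancy; a small poller like the Python sketch below (the URL and interval are placeholders) is enough to log whether pages keep answering while the origin servers are off.

# Minimal availability poller: request the page once a minute and log the result.
import time
from datetime import datetime, timezone
from urllib.error import URLError
from urllib.request import urlopen

URL = "https://bavatuesdays.com/"   # point this at the site under test

while True:
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    try:
        with urlopen(URL, timeout=10) as resp:
            print(stamp, "up", resp.status)
    except URLError as err:
        print(stamp, "DOWN", err)
    time.sleep(60)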

Image of Log for when the servers were turned off and then back on

Log for when the servers were turned off and then back on

I had Antonella try to comment, and that was not successful; I never thought to try logging into the /wp-admin area, given it would seem impossible, but maybe not? I will return to that, but perhaps comments and posting do work in an ADN?†

Regardless, it was fun to occasionally search for blog posts that I hadn’t read in years, and see them load without issue, even though both servers were down.

This comes at an amazing time at Reclaim, when we’re going into our second year of stable, solid .edu hosting for a number of schools, and adding this possibility for not only guaranteed uptime, but increased vigilance and next-level cloud-based security is pretty thrilling. I really want to get out on the presentation trail again and talk this through, because more and more these leaps in infrastructure are something we have only just been able to keep up with, but this one almost feels like we are not only well-positioned to offer it, but maybe even early to the party.

Reclaim4life, small and limber is beautiful!

_________________________________________________
*With the caveat that it is an imagined Shangri-La if you push hard enough on the idea.

†Turns out they cannot make the database writable in the ADN, so it is read only. They mentioned it is technically possible, but not legally, which makes sense when you think about it in terms of security and spoofing, and then there is the whole issue of syncing back changes. It might make sense, if only for practical purposes, to keep everything read-only during any extended downtime.

Containerizing Mastodon on Reclaim Cloud https://bavatuesdays.com/containerizing-mastodon-on-reclaim-cloud/ Sat, 28 Oct 2023 06:54:58 +0000

Taylor and I did a stream yesterday wherein we (royal) set about taking one of my experimental Mastodon instances (bava.social) and moving it from a Debian VPS into a Docker container. We got the bug after a recent open source trifecta update I posted about, and I am glad we did. Long story short, the genius of Docker is how it makes hosting more complex server setups like Mastodon simple (as with Azuracast and PeerTube), which in turn will make running and maintaining these open source systems that much easier, hopefully leading to wider adoption. There is still the question of cost, but I have ideas there as well; that is another post, though.

Anyway, the 45-minute stream was a roaring success! We were able to get bava.social off the VPS and onto the 1-click installed Docker instance running on Reclaim Cloud. It must have been a proud moment for Taylor to see all his work on that installer and his increased familiarity with Docker come together in a seamless moment of “everything just works.” If you are into this kind of stuff, the stream was recorded and I embedded it above.

Notes on WP Offload Media https://bavatuesdays.com/notes-on-wp-offload-media/ Mon, 14 Aug 2023 08:38:19 +0000

At Reclaim Hosting we’ve been exploring plugins for offloading media from larger WordPress instances to object storage such as AWS’s S3 or Digital Ocean’s Spaces. There are a lot of good reasons to do this:

  • First and foremost, we want to save space on Reclaim Cloud servers, given a big WordPress Multisite can eat up a ton of space on uploads alone, and that’s less than optimal since those servers are dedicated with fixed storage for optimal performance.
  • If we have to migrate a larger WordPress site, having the files in cloud-based object storage makes that process quicker and easier, given no media files need to be moved.
  • Offloading WordPress media helps ensure there are no issues serving media for a multi-region setup that is distributing traffic across several servers.
  • And, ideally, it is faster, given you should be able to serve the media through a content delivery network (CDN) like Cloudflare that stores/caches media across its vast network, making assets quicker to load.

So, with the why out of the way, the next question is the what? As usual, I am using this, the bava.blog, as the initial test run for a single WordPress instance. I’ll be offloading the 10GB of media in the uploads folder on this blog to AWS’s S3. Simultaneously, I’m using ds106 as a test for offloading media from a small-to-decent-sized WordPress Multisite instance, so this is a two-pronged test.

Now to the most interesting question: how? I did some ‘research,’ and the WP Offload Media plugin seems to be the most fully featured option available. For our purposes we are using the Pro version, given we need many of the advanced features and plan on rolling this out more broadly for some of our bigger WordPress sites.

Screenshot of the WP Offload Media Interface

WP Offload Media Plugin Interface

It has integrated tools to help you set up your storage across several S3 service providers, including AWS, Digital Ocean, and Google. This is where you’re instructed how to add your S3 keys, define the bucket, and control the security settings for the bucket.

Screenshot of Storage Provider interface for WP Offload Media

Storage Provider interface for WP Offload Media

One important thing you need to do if you want the S3 files served over a domain alias is to make sure to name the bucket the same as the alias domain you want to serve files over. For example, for bavatuesdays the files will be served over files.bavatuesdays.com so that is also the name of the S3 bucket files.bavatuesdays.com:

Screenshot of WP Offload Media Bucket Interface

WP Offload Media Bucket Interface
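If you would rather create the matching bucket from code than click through the AWS console, a minimal boto3 sketch looks something like the following; it assumes boto3 is installed, AWS credentials are already configured, and the region is whatever you actually use.

# Create an S3 bucket whose name matches the alias domain that will serve the files.
import boto3

ALIAS = "files.bavatuesdays.com"   # bucket name must match the serving domain
REGION = "us-east-1"               # change to your region

s3 = boto3.client("s3", region_name=REGION)
if REGION == "us-east-1":
    # us-east-1 rejects an explicit LocationConstraint
    s3.create_bucket(Bucket=ALIAS)
else:
    s3.create_bucket(
        Bucket=ALIAS,
        CreateBucketConfiguration={"LocationConstraint": REGION},
    )
print("created bucket", ALIAS)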

Screenshot of WP Offload Media Security Interface

WP Offload Media Security Interface

Once your bucket is connected, you can then use the tools available to offload your media to the new storage provider. I found this step took quite a while for bavatuesdays. I had 6,000 files to copy over, and that took several days to complete. It worked, although about 6% of the files had issues that were linked to being in new locations.

I was wondering if I could use the s3cmd tool to sync all the pre-existing files over to the S3 bucket, and then serve the media from there using this plugin without having to go through the offload media tool for previously uploaded media, given how long it took on bavatuesdays. I used s3cmd to sync the media files for the ds106 multisite before offloading, and while all the media moved over cleanly, I still needed to run the offload tool through the WP Offload Media plugin. While that took forever on bavatuesdays, as noted already, it did all 11,000 files for ds106 in about 15-20 minutes, so I am wondering if syncing all the files to the S3 bucket beforehand made offloading faster, or if the fact that bavatuesdays is a multi-region site caused issues that slowed it down. I’m not sure, but I will definitely have to figure that piece out.
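One rough way to sanity-check a sync like that is to compare the local file count against the number of objects in the bucket; the Python sketch below does exactly that (boto3 and AWS credentials assumed, the path and bucket name are placeholders), and while matching counts don't prove every byte copied, a big mismatch is an obvious red flag.

# Compare the local uploads count with the object count in the S3 bucket.
import os
import boto3

UPLOADS = "/path/to/wp-content/uploads"   # placeholder local path
BUCKET = "files.ds106.us"                 # placeholder bucket name

local = sum(len(files) for _, _, files in os.walk(UPLOADS))

s3 = boto3.client("s3")
remote = 0
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET):
    remote += len(page.get("Contents", []))

print("local files:", local, "| objects in bucket:", remote)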

Also worth noting is that the WP Offload Media plugin worked cleanly for the ds106 multisite with various subdomains and at least one mapped domain, which is very good news.

One issue I ran into on bavatuesdays was linked to the fact that I had used the WP Offload Media Lite plugin a few years back as a test. It ran for about a year or so and then I turned it off, but when I re-activated the Pro version using a new bucket, namely files.bavatuesdays.com, the database had already stored details about the old bucket bavamediauploads I’d set up previously, so there were some issues. I had to manually remove media from the old bucket through the Media Library and then offload it again to the new bucket. The plugin adds tools to the Media Library for removing and adding media to the S3 bucket, which is nice, and it is possible to bulk-select the removal and offload options, so it was not too painful, but this is something to look out for if you have previously used the plugin.

Once I had all the media offloaded cleanly, it was time to test running the delivery through Cloudflare. The first thing to do is create a CNAME domain alias that maps to the bucket in order to deliver media over the subdomain files.bavatuesdays.com. To do this you create a CNAME in Cloudflare whose value is files.bavatuesdays.com.s3.amazonaws.com, with the part of the target before .s3.amazonaws.com matching the domain alias.

Screenshot of Adding CNAME for domain alias in Cloudflare

Adding CNAME for domain alias in Cloudflare
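The same record can be created through Cloudflare's API rather than the dashboard; the sketch below is a bare-bones Python example using the v4 DNS records endpoint, with the zone ID and API token left as placeholders (the token needs DNS edit permission on the zone).

# Create the CNAME via Cloudflare's v4 API instead of the dashboard.
import json
from urllib.request import Request, urlopen

ZONE_ID = "your-zone-id"        # placeholder
API_TOKEN = "your-api-token"    # placeholder, needs DNS edit permission

record = {
    "type": "CNAME",
    "name": "files.bavatuesdays.com",
    "content": "files.bavatuesdays.com.s3.amazonaws.com",
    "proxied": True,   # serve it through Cloudflare's network
}

req = Request(
    "https://api.cloudflare.com/client/v4/zones/" + ZONE_ID + "/dns_records",
    data=json.dumps(record).encode(),
    headers={"Authorization": "Bearer " + API_TOKEN,
             "Content-Type": "application/json"},
    method="POST",
)
with urlopen(req) as resp:
    print(json.loads(resp.read()).get("success"))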

Once that is added, go to the delivery section of the plugin and select which provider you will be using; as you might have guessed, this blog is using Cloudflare to harness their CDN for faster media delivery.

Screenshot of WP Offload Media Select Delivery Provider dialogue box

WP Offload Media Select Delivery Provider dialogue box

After that, toggle the “Deliver Offloaded Media” button to start delivering media from the S3 bucket, and also toggle the Use Custom Domain Name option to enable the alias files.bavatuesdays.com:

Screenshot of WP Offload Media Delivery Options

WP Offload Media Delivery Options

I also selected Force HTTPS, though I’m not sure that is making a difference given it is already happening on Cloudflare. Also, there’s a private media settings error that I think is mainly linked to AWS’s CloudFront option, so I’m not sure it is relevant for this setup through Cloudflare, but I will need to verify as much.

I’m also using a similar domain alias on ds106.us (files.ds106.us), and it worked identically for the WordPress Multisite setup and is functioning without issues as far as I know. I still have to test some more sophisticated permissions setups for serving media, but all in all, I think this plugin really moves us towards being able to offload a significant amount of stored media from our dedicated servers to S3 buckets, which should free up terabytes of data on our Cloud infrastructure, and that would be a gigantic win!

Reclaim Cloud in Europe! https://bavatuesdays.com/reclaim-cloud-in-europe/ Mon, 24 Jul 2023 12:53:01 +0000

Image of the European Union flag

EU Region in Reclaim Cloud

It’s official: as of today we have an active European Union (EU) region in Reclaim Cloud, located in the partynacht central of Berlin, Germany. This has been a long time coming, and we’re thrilled to expand the geographical scope and reach of Reclaim Cloud to the European Union. Not only will this provide additional server nodes in our broader cloud cluster, but it will also help make it easier for existing clients to remain compliant with GDPR regulations.

Screenshot of a dialogue box for installing Azuracast in an EU region on Reclaim Cloud

Azuracast in the EU Region!

With each region added comes a significant investment of time and resources, so it’s very rewarding to see Reclaim Cloud grow from the initial two (US East and West) to now five regions in just under three years. It’s been a big year for Reclaim’s infrastructure to not only expand the cloud, but also shore up security while preparing for an underlying kernel migration. No rest for the weary!

Image of D.R.I.'s album "But Wait, there's more"

DRI’s “But wait…there’s more!”

“But wait….there’s more!” Over the coming weeks and months we’ll be unveiling an entirely new service for high-traffic, high-availability WordPress sites in the Cloud: Reclaim.Press. Stay tuned for more!

Reclaim Cloud’s 1-Click Mastodon Installer https://bavatuesdays.com/reclaim-clouds-1-click-mastodon/ Fri, 23 Jun 2023 11:01:19 +0000

Creating a couple of videos highlighting Taylor’s 1-click Mastodon installer for Reclaim Cloud has been on my to-do list for too long, so this week I knocked it out. I did two quick videos, the first taking you through the basic install. While the installer is a Docker container and most of the heavy lifting is done for you, there are still some manual pieces like pointing a domain, creating an admin account, and restarting the container. Taylor’s guide goes through these points in detail, so this is really just a video supplement to the docs.

The follow-up video is focused on where and how to update the environment variables in the .env file. You use the .env file to add details for transactional email like Mailgun, as well as to point the media storage to a third-party S3-compatible service like Digital Ocean’s Spaces. Once again, this video serves to reinforce the guide we already have for doing this, so if the video fails you, turn back to the guide.
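As a small aid when editing that file, here is a quick Python check that the usual mail and S3 variables are defined before you restart the container; the variable names listed are the commonly used Mastodon settings, so double-check them against Taylor's guide and the .env.production.sample that ships with your Mastodon version.

# Check that the .env file defines the usual Mastodon SMTP and S3 settings.
# The EXPECTED names below are commonly used Mastodon variables; confirm them
# against your version's .env.production.sample before relying on this check.
EXPECTED = [
    "SMTP_SERVER", "SMTP_PORT", "SMTP_LOGIN", "SMTP_PASSWORD", "SMTP_FROM_ADDRESS",
    "S3_ENABLED", "S3_BUCKET", "S3_REGION", "S3_ENDPOINT", "S3_ALIAS_HOST",
    "AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY",
]

def defined_keys(path=".env"):
    keys = set()
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                keys.add(line.split("=", 1)[0].strip())
    return keys

present = defined_keys()
missing = [key for key in EXPECTED if key not in present]
print("missing:", missing if missing else "none, all expected keys are set")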

The final piece is demonstrating the simple set of commands to upgrade the Mastodon version. I am working with Taylor to make sure that is working as expected, and once it is I will be sure to finish off this trilogy of Mastodon 1-click awesome.

A running list for the Domains Package https://laurenhanks.com/a-running-list-for-the-domains-package/ Thu, 04 May 2023 19:22:37 +0000

One thing I’ve been thinking about recently is how schools can successfully run WordPress Multisite, Domain of One’s Own, and Reclaim Cloud Sandbox spaces together in a way that feels integrated and seamless. We’ve always led with the idea that these tools don’t compete with each other, and that actually the opposite is true: by running them in parallel to each other you can offer a little bit of something for everyone. Perhaps even in tiers or layers as described in my Nashville recap post from 2021. But how can we do that while still keeping the digital footprint for landing pages and end user sites as simple and intuitive as possible? I last explored this in my blog post called A New Model for Domains: DoOO & WPMS and shared how some schools like Coventry University and Oklahoma University are directing traffic and handling domain structures for landing pages and end user sites (which can feel like half the battle).

I love how some of our DoOO and WPMS schools are controlling growth on these platforms, as well as keeping things sustainable, by pushing all new signups to the WordPress Multisite by default. The WPMS then has a very limited set of plugins and themes that are easy to support and maintain for a large group of users. From there, if an end user wants to install a different theme, or explore a different application entirely, they’re directed to Domain of One’s Own. There’s more freedom here, but it likely involves a request form submission or a conversation with an admin before a cPanel account is granted. What’s ultimately happening now is that there are two paths for a user to take. And especially if we’re looking to add a third (Reclaim Cloud for next generation apps or sites that need more resources) it’s important for Reclaim to assist schools with correctly carving out these paths and creating very clear entry points.

This concept has come up in so many different conversations ranging from the visuals and metaphors we use to explain different topics, to how we’re articulating it in support scenarios, to how we’re providing more data for admins to make decisions, to how we’re pulling in these tools to help users choose the path that makes the most sense for them. We’ve been working on a few side projects to help with these scenarios, and now it feels like the right time to compile everything together.

When a new school comes to Reclaim to set up DoOO, WPMS, and the Cloud, I want them to have a cohesive menu of things that they can select or add to their setup to make it work to their preference. I’ve alluded to this with support articles like Domain of One’s Own Setup Features, which covers different signup workflows and cPanel customizations available for DoOO so a new admin can go through and decide what they’ll need. Even still, this article doesn’t quite capture everything that’s available in DoOO anymore, and it definitely doesn’t pull in WPMS & Reclaim Cloud. Where this “menu” lives or how it’s delivered is still a question mark (maybe as simple as adding in a few more guides) but for the purposes of this post I want to share a running list of some of the other projects we’ve been working on with the help of folks like Tom Woodward and Bryan Mathers to think more broadly about user choices, carving out paths, and connecting tools together.

Domain of One’s Own Visuals
the “before” version, which is overdue for a refresh
The Landing Page
  • building on Tom Woodward’s amazing Chooser Plugin / Landing Page that currently lives at landing.stateu.org; it also automatically pulls in the list of used plugins and themes on the site where it’s installed, which would be pretty neat for a new WPMS project as well.
you can see this demo live at landing.stateu.org!

While the landing page can be designed however admins prefer and even framed as a choice between WPMS and DoOO, you could still opt to push new signups to a default starting point. In that case, the above “landing page” would actually live on the WPMS directly, integrate with SSO, and be able to reflect what plugins/themes are in use like the demo above. An example domain might be sites.school.edu for the homepage and sites.school.edu/user for end-user sites.

If users decide they want more flexibility in cPanel, they would click a menu link that takes them to a homepage for DoOO like domains.school.edu. This space has its own SSO integration and signup workflow, so users can create or request accounts depending on admin preference.

Community Showcase & Data Dashboard
  • Pulling in Taylor’s awesome work on the Domains Community Showcase site, as well as his Data Dashboard that pulls in last login info for DoOO users:
Demo Community Showcase site available at stateu.org/community
Pulling in Last Login data right into the DoOO dashboard for admins

^This dashboard was shared more thoroughly at the end of the last DoOO 201 workshop, and you can watch the final session called What’s Next for Domain of One’s Own for more info about how it works!

Support Resources
  • considering existing resources like the DoOO Admin landing page and end user support docs – our struggle with these has always been to keep them updated after they’re given to admins during setup.

The admin landing page has worked well as a home base for new schools because it’s simple and to the point. But how is this WP install managed or updated long term? Do admins still find this space useful 2-3 years in? What if the landing page “quick links” were instead pulled into the WP dashboard, similar to Taylor’s Data Dashboard work or similarly to what the Ultimate Dashboard plugin does?

End User support docs are currently available on stateu.org/docs

Similarly, I’d love to keep thinking about the future of end-user support docs. As mentioned above, this project gets complicated quickly because it becomes quite difficult for Reclaim to update each documentation site after they’ve been delivered to an institution. (Especially if the admin makes changes after the fact– we don’t want to overwrite those.) There’s a balance of ownership between what Reclaim can do to help and what admins choose to make available as a support resource, but I’m all for Reclaim providing starting templates where we can.

My latest thinking is that it may make sense for Reclaim to bring these templated guides into our main knowledge base under a new category of our Domain of One’s Own section. From there, new admins have two choices: they can point their users directly to those guides, which would have to be pretty generic to work for all/most setups, or admins could adopt articles for their own knowledge base sites. If and when Reclaim makes changes to one of our article templates, admins are notified by subscribing to the knowledge base section (already possible) and by hearing about it in our monthly newsletter.

Speaking of Notifications…

I also think we’re not far off from really improving how we’re keeping different types of folks notified at Reclaim. In the early days we truly had 1 mailing list for the capital A “Administrator” of a project to get all notifications. Through the years we’ve been able to start separating out billing, support, SSO, and server maintenance notifications. We’ve also added the Roundup mailing list and Reclaim event notifications to the mix as well. It’s not a totally perfect system yet, but Pilot’s newest project setup questionnaire is a testament to how far we’ve come:

The Project Setup Questionnaire is now live at projectsetup.reclaimhosting.com

Pilot killed it with their work to improve how we’re collecting initial information from admins for new server/project setups. How we got by with a .PDF for so long, I’ll never know. :)

Costs in the Cloud Two Years On https://bavatuesdays.com/cost-in-the-cloud-two-years-on/ Fri, 30 Dec 2022 18:45:01 +0000

Back in July of 2020, and again in September of 2020, I wrote about tracking costs in Reclaim Cloud.

Costs in the Cloud

A Follow-up on Costs in the Cloud

Two years later things have developed a fair amount with my personal use of Reclaim Cloud, and I think it’s worth sharing because as I move more and more of my web properties to the cloud, the costs have changed quite a bit from the $87.96 I was spending monthly in September of 2020. In fact, I’m now spending close to $400 per month for my Reclaim Cloud environments alone, not to mention my Cloudflare bill and my terrible domain habit 🙂 Anyway, below is the billing breakdown by environment for the first 30 days of December 2022:

Screenshot of Reclaim Cloud billing window

December 2022 costs by environment for my personal Reclaim Cloud

I went from 12 environments in 2020 to 21 environments in 2022. My monthly bill went from $87.96 in September of 2020 to $397.45 for the first 30 days of December. That is a significant jump, and it’s worth looking at how those costs break down. I have roughly two main groups of servers I manage. The first is the bava fleet of servers, which includes the bava.blog (in stereo), bava.tv (PeerTube), bavaghost (Ghost), bavacast (Owncast), bavameet (Jitsi), bavasocial (Mastodon), and two WordPress backup environments with three days of on-demand backups of the blog across two different regions. Let’s look at the numbers:

  • bavacast                $8.51 (Owncast instance I should probably keep off unless in use)
  • bavacast-clone    $0.02 (temp environment while moving bavacast to 1-click installer)
  • bavaghost             $19.61 (Instance of Ghost I am running alongside my blog)
  • bavameet             $2.93 (Jitsi instance I only turn on when needed)
  • bavamulti-1         $24.52 (WordPress Multiregion primary site for the bava.blog)
  • bavamulti-2         $24.49 (WordPress Multiregion secondary site for the bava.blog)
  • bavasocial            $25.18 (Test Mastodon instance for bava.blog I’m still playing with)
  • bavatube               $33.81 (PeerTube instance that hosts over 1000 videos)
  • bavatube-clone   $14.21 (Clone of PeerTube to test version 5 upgrade)
  • env-7614939       $5.21 (Regular nightly backups of bava.blog on UK environment)
  • env-9758267       $1.12 (Regular nightly backups of bava.blog on WC environment)

This comes to a subtotal of $159.61. If I were a bit more frugal I could probably eliminate $60-$70 by turning bavacast off more regularly, getting rid of the bavatube-clone, abandoning the multiregion setup, and deciding not to run a separate Mastodon instance for bavatuesdays. By eliminating those environments I can easily keep the personal hosting bill for the bava properties well below $100 per month. Not necessarily cheap by shared hosting standards, but the difference is what you can do. I’m running a pretty ridiculous multiregion setup that is overkill for my blog, but a testing ground for what is possible for sites that need to stay up no matter what. This is that space where my personal sites overlap with my professional research and development, so being able to spend the money to run a lot of these next-generation apps that would never run in cPanel is crucial.
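For anyone who likes to check the math, the subtotal is just a straight sum of the line items above; a tiny Python sketch with those figures:

# Per-environment costs for the bava fleet, as listed above.
bava = [8.51, 0.02, 19.61, 2.93, 24.52, 24.49, 25.18, 33.81, 14.21, 5.21, 1.12]
print(round(sum(bava), 2))   # 159.61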

Also, I think the elephant in the room is that owning and managing your own media comes at a steep cost. Using YouTube or wordpress.com or Zoom or Twitch would arguably be cheaper, but not only would I feel dirty, the cost is their terms and my data. Freedom ain’t free! So it is worth it for me to route around those options when possible and explore alternatives at a manageable cost.

The other properties I have been consolidating in Reclaim Cloud are the various ds106 sites. Namely ds106.us, the WordPress multisite that was a beast on shared hosting, but purrs like a kitten on Reclaim Cloud. Then there is the mighty ds106radio running on Azuracast in the cloud; the newly christened Mastodon server social.ds106.us; the old gold tilde space ds106.club; and finally the listen.ds106rad.io Apache server I just spun up last month to move it off cPanel. There is also the old ds106.tv environment running an Ant Media server I just shut down. That has been replaced with the ds106.tv server running on PeerTube, and I’m not deleting it until I know everything is synced to the new instance. So, let’s do the math on the monthly costs to host the ds106 empire:

  • ds106.us                    $34.82 (This is an active WordPress Multisite with years of archives, the daily create, assignment bank and much more)
  • ds106radio                $50.54 (Instance of Azuracast running ds106radio)
  • ds106social               $57.98 (Mastodon instance running social.ds106.us)
  • ds106.club                $5.72  (Fun old school tilde server that’s still running purely for the joy it brings me)
  • ds106.tv                    $34.68 (The old Ant Media server we are running, the new Peertube instance is running at $12 per month)
  • listen2ds106radio   $5.71 (an apache server I am testing to run archived HTML sites from)

And that’s the ds106 breakdown at a subtotal of $189.45 per month. You’ll notice that Mastodon and Azuracast are the most expensive environments, but once I retire the Ant Media Server, the cost could go down another $15-$20 monthly. But I might actually increase costs for ds106.us, given I want to see if a multiregion setup works for that WordPress install. I would love to get it running through Cloudflare and have failover, CDN action, and DDoS protection, not to mention image compression on the fly. We’ll see; enough folks still use that server that I do not want to get cavalier with it, but it would be nice to treat ds106.us as mission-critical infrastructure, even if it’s not in the eyes of many 🙂

So, with $159.61 for the bava properties and $189.45 per month for ds106, where is the unaccounted-for $50? That would be a couple of personal projects outside bavatuesdays I am playing with, namely No Copyright Intended, a PeerTube instance with all my archived VHS tapes that runs $28.18 per month. Then there is Antonella’s Ghost blog that runs $14.34 per month, and finally a couple of sites I use to track my domains so I remember to renew the growing horde of online addresses I’m regularly accumulating (that’s another post). And with that you have just about $400 per month. Sounds crazy, but then think about all the sites it is powering, and how damn well they work 🙂

The other cost, besides domains, is running the DNS for these sites through Cloudflare. I have the Cloudflare Pro Plan at $20 per month (mainly for bavatuesdays, the other domains are free) and I pay an additional $5.00 per month for the Argo smart routing of traffic for the bava. On top of that I pay another $8.80 for the 80 GB of accelerated traffic through Argo monthly (the first GB is free), which acts as a kind of speed boost. Finally, I pay another $15.50 for the load balancing feature through Cloudflare that I use to manage the bava multiregion setup. That breaks down to $5 for basic load balancing, another $10 for geo-routing traffic to the closest server, and $0.50 for DNS queries beyond the initial 500,000. Is it overkill? Definitely. Am I learning a ton about how amazing Cloudflare is and what it enables for sites that need to be constantly online? Absolutely.

So, at the end of the day, excluding domains, I am paying just about $450 for hosting my various bavatuesdays sites and all the ds106 instances. To be clear, I can be so extravagant with these resources cause I help run a hosting service—that fact is not lost on me. At the same time it is helping me put costs in the cloud in some perspective in an attempt to understand what the real costs of hosting your own data are, not to mention running larger community sites. I can afford to do this for ds106.us because it is Dr. Oblivion’s illegitimate love child, and bava is my bread and butter, but not everyone is in the same boat. But the more I put into the cloud the more I start to frame this exchange in terms of the cost of peace of mind and performance for the pain and suffering I went through when trying to run bavatuesdays and ds106.us on a shared server. Perhaps this whole post is more an argument for minimal computing than shedding the weight of the past, but that is not how I roll. I’m a web hoarder and I’m just gonna need to move to Cloud City!
