ColdFusion Hosting Review

Mikes Notes

Michela Light at TeraTech wrote an article about ColdFusion hosting options. It is copied here.

Resources

How To Choose The Best ColdFusion Hosting Provider

By: Michela Light
TeraTech: 12/08/2024

How do you choose the best ColdFusion hosting provider for your company’s project or app?

If you don't have time to read the entire article now, you can download it and read it later. Download the whole article in PDF.

Contents

  • How To Choose The Best ColdFusion Hosting Provider
    • Unreliable CF hosting leads to
  • How To Decide Which ColdFusion Hosting is Right For You
    • CF Enterprise VS CF Standard
    • Hosting Security
    • Site Backup
    • Web Hosting Support
    • Scalability
    • Price Point
  • How Do We Pick the Best ColdFusion Hosting Company?
  • In-House ColdFusion Hosting VS Managed Hosting?
    • In-House ColdFusion Hosting
    • Managed ColdFusion Hosting
    • Co-location ColdFusion Hosting
  • ColdFusion is in the Cloud
    • What's new and changed in ColdFusion 2023
  • Adobe ColdFusion Hosting in the Cloud
    • xByte Adobe Coldfusion Cloud Hosting
    • AWS (Amazon Web Services)
    • Digital Ocean
    • GCP (Google Cloud Platform)
    • Illuminated Hosting Services
    • Microsoft Azure
  • Side note on Docker
    • Related: Mark Drew offers an enlightening perspective on how to get started with Docker in our podcast, Getting Started Fast with Docker.
  • Adobe ColdFusion Hosting
    • Coalesce
    • CFDynamics
    • Hostek
    • Media3
    • Newtek Technology Services
    • Vivio Technologies
  • Lucee CFML Hosting
    • Hostek
    • Hostmedia
    • xByte Lucee Cloud Hosting
    • Lightsail
    • Linode
  • Conclusion About ColdFusion Hosting
This is an updated 2023 independent review of the top ColdFusion hosting services.

The wrong choice can create issues that become fatal. Painful mistakes occur when:

  • Unreliable CF hosting leads to
    • servers crashing,
    • costing your company vital traffic,
    • user trust, and
    • potential revenue.
  • Slow CF hosting can
    • reduce usability,
    • cause user dissatisfaction, and
    • impact the adoption of the product.
    • The result could mean the new project fails, and your company’s profits and market share decrease.
    • Slow performance can also impact SEO ranking, sending your site to page two or worse in the search results, which impacts your visibility.
  • Security issues and data-napping remain ever-present dangers with lower-quality hosts.

Let us help you select the best ColdFusion vendor for your situation. This means avoiding the headaches of deciding between so many different CF hosting companies, so let's eliminate our enemy and start our journey!

Note: The companies listed below do not pay or compensate TeraTech. And unlike many other articles on CF hosting, we don’t use affiliate links either. 

We have listed companies alphabetically below to avoid bias in the order they are presented. 

“In all reality hosting companies is not an indicator. With the advent of docker and commandbox ANY hosting company is a CFML hosting company. So in all reality there are more hosting companies for CFML right now than ever since it’s inception.” - Luis Majano

How To Decide Which ColdFusion Hosting is Right For You

Each vendor has different strengths and weaknesses. To identify the best vendor for your specific situation, start by defining your requirements for a ColdFusion hosting service. Define your top priorities:

  • Is speed a priority? 
  • Does scalability matter?
  • Do you need high uptime, stability, and the potential for longer-term growth? 

One big factor: Which variant of ColdFusion are you using? Adobe CF Enterprise, Adobe CF Standard, Lucee…

CF Enterprise VS CF Standard

Which version of Adobe ColdFusion are you running? CF Enterprise is an advanced version of the development platform that provides many more features than CF Standard. You can run multiple CF-developed websites on a single server with both products. However, ColdFusion Enterprise supports multiple server instances, which may be important if you want to separate out different subsites.

Hosting Security

What level of security does your company need? Banking, financial services or e-commerce companies require bulletproof security. A higher level of security will cost more, but the investment may pay off through fewer security breaches. 

Higher security usually comes with tighter controls. Some companies favor a more loose approach to security, so it is easier for them to keep innovating, especially during the development phase. If you plan to tighten security as you get closer to launch, make sure you understand the cost and processes you are committing to.

Site Backup

Running a web app without a backup begs for trouble. Fortunately, most companies appreciate the value of backups. But how do you prefer to do your backup? Some web hosting companies, such as Hostwinds, offer automatic backups as part of their packages. Others require you to make backups yourself. If your team is doing their own backups, do you have processes to ensure those backups are made?

Regardless of how you plan to back up, it’s a best practice always to have multiple backups. An external offline backup is often a safe bet.

And no matter how you back up, make sure you test your backups regularly to ensure they can be restored.

Web Hosting Support

Everyone needs a little help sometimes. Don’t assume you will be able to get the support you need. Some hosts provide real-time help, including phone calls, while others focus on a ticket support system with a 24-hour turnaround.  If your app is mission-critical, be sure to choose a host that offers real-time, all-hands support.

Scalability

Plan for growth. If your app goes viral, you should understand ahead of time what to expect in terms of timeframe and cost. Be sure a hosting service can respond to your company’s growth in traffic and data.

Price Point

Many people consider price their top priority, but we believe in considering it last. Set your “must-have” priorities, then look at the price tag. It’s often easier to adjust your budget than to retool your system and expectations to a cheaper option. 

I would like to show you modern ColdFusion development best practices that reduce stress, inefficiency, and project lifecycle costs while simultaneously increasing project velocity and innovation. For this purpose, I have created a 10-step checklist that you can download for free.

How Do We Pick the Best ColdFusion Hosting Company?

When I am selecting the best CF host for a particular project, I walk a bit of a tightrope. I try to put myself in a CIO's position: making big decisions, often stressed out, and sometimes underappreciated. What matters most to you?

The “best” host must fulfill the abovementioned needs while embracing ColdFusion with a dedication that inspires confidence.

To me, peace of mind has as much value as stability or speed. 

Of course, your criteria may be different from mine. Keep reading for additional criteria and an independent list of the major CF hosting companies. 

In-House ColdFusion Hosting VS Managed Hosting?

The next step is to decide where your CF boxes will live.

Determining your basic hosting needs will dictate how you arrange your servers. Does your company need in-house hosting or managed hosting?

And what’s the difference?

  • In-House Hosting means all servers and networking equipment are owned and operated by your company. You are responsible for all maintenance and DevOps. This costly option is in the domain of big players, with millions of users capable of logging in at once. If you’re in that range — lucky you! 
  • Managed Hosting is a dedicated server or cloud server owned by a third-party host and rented out as virtual space. Usually it includes support and backups too.
  • Co-location is a hybrid solution. The servers are yours but they are located at the hosting company’s location. Just as with in-house hosting, you are responsible for all maintenance and DevOps on your servers. 

Here’s a pros and cons list for each to help determine which one is right for you.

In-House ColdFusion Hosting

Pros:

  • Control company-owned and company-maintained servers.
  • Keep sensitive information secured within the company.
  • Internet connection not required for data access.
  • Cost-effective for companies not concerned with uptime.

Cons:

  • Higher initial costs.
  • More personnel and money needed for maintenance, upgrades, and day-to-day operations.
  • Dedicated space needed for servers
  • More vulnerable during natural disasters if no offsite backup is made.

Managed ColdFusion Hosting

Pros:

  • Dedicated server space is no longer required.
  • Cloud hosting is great for smaller companies that may outgrow their infrastructure.
  • Scalable and flexible.
  • Lets you access from any place that has an internet connection.
  • Easy backup is often included.

Cons:

  • Speed limited by internet connection of the host.
  • Third-party hosting company has direct access to your sensitive information.
  • Can’t access data without internet connection.
  • Dedicated solutions can be more difficult.

Co-location ColdFusion Hosting

There is another form of data hosting. Co-location means housing your own servers at a third-party location. You can access these servers on-site or remotely. Co-location is a hybrid of In-House and Managed hosting, with a mix of benefits and disadvantages.

Pros:

  • It is less expensive than in-house hosting, making it great for startups.
  • It is convenient, allowing a third party to manage your server hardware while you focus on other things.

Cons:

  • Limited capabilities. Does not scale well.
  • There can be security concerns with the lack of direct control of the boxes.

ColdFusion is in the Cloud

Adobe’s ColdFusion experts and developers promised greater cloud capability with CF 2021. They delivered. From multi-cloud abilities, allowing users to choose cloud hosting services at a granular level, to an agnostic stance about which service you use, the newest version of CF has more cloud capability than its predecessors and competitors.

You can either roll your own CF cloud hosting solution (using AWS, Digital Ocean, Google, or Microsoft Azure), or use a CF host cloud option. Let’s look at these options in more detail.

Adobe ColdFusion Hosting in the Cloud

xByte Adobe ColdFusion Cloud Hosting

xByte Hosting offers cloud CF hosting. As you can see from the testimonials, the clients are happy with the efficiency and quality of their team's work.

xByte Hosting CF Hosting Pros:

  • Their support team is available 24/7 via chat, phone, and email
  • Offers a free trial.
  • All major languages available.
  • xByte also specializes in Dedicated Servers, VPS, Hybrid Cloud, Colocation services, and even OnPrem Servers

xByte Hosting CF Hosting Cons:

  • Company is new to the CF space, but team is very experienced in CF
  • New control panel
  • No extremely low-cost shared servers

Cost: You have the option to use a prebuilt plan or schedule a consult to build your own.

Website: https://www.xbytehosting.com

AWS (Amazon Web Services)

According to Amazon: 

AWS is a secure cloud services platform offering computing power, database storage, content delivery, and other functionalities to help businesses scale and grow.

AWS can be very helpful for ColdFusion developers. 

AWS (Amazon Web Services) Pros:

  • AWS has over 5 times the computing power of other leading providers.
  • Over 14 years of experience with hundreds of thousands of customers around the globe.
  • Has the following certifications:
    • HIPAA 
    • SOC 1/SSAE 16/ISAE 3402 (formerly SAS70)
    • SOC 2
    • SOC 3 
    • PCI DSS Level 1 
    • ISO 27001 
    • FedRAMP 
    • DIACAP and FISMA 
    • ITAR 
    • FIPS 140-2
    • CSA 
    • MPAA

AWS (Amazon Web Services) Cons:

  • AWS can have a steep learning curve.
  • By default, AWS does not have Enterprise-grade support. Separate plans must be purchased.

Cost: AWS works like a utility company: you only pay for what you use. They have a wide array of products with different price points. Beware: if you are not careful about usage, or hackers gain use of your cloud servers, your AWS bill can go through the roof!

Website: https://aws.amazon.com

Digital Ocean

Digital Ocean Hosting is a cloud computing platform that focuses on simplicity. Both simple to use and clearly priced, Digital Ocean can provide for your ColdFusion needs. The community at Digital Ocean recommends running ColdFusion through Apache Tomcat or Apache HTTP Server.

Digital Ocean Pros:

  • Reasonably priced compared to AWS and other cloud competitors.
  • Simple to use and Amazon S3 compatible.
  • Extremely user-friendly UI.

Digital Ocean Cons:

  • No paid support.
  • Pricing is the same whether you upload data or not.
  • They do not offer as many regions as other services.

Cost: Basic services start at $5 per month and can scale up to $960.

Website: https://www.digitalocean.com

GCP (Google Cloud Platform)

GCP is a cloud platform that boasts using Google’s core infrastructure, data analytics, and machine learning. It leans heavily on serverless, managed services. They pride themselves on leading the industry in price and performance.

GCP (Google Cloud Platform) Pros:

  • The fastest I/O among the competition.
  • Strong in storage segmenting and data analytics.
  • Has a one-click browser-based SSH Console.

GCP (Google Cloud Platform) Cons:

  • No control over Virtual Machines.
  • Limited choice of programming languages. (ColdFusion is not pre-configured on GCP; however, a VM can be created on Compute Engine and uploaded.)
  • It is difficult to transition away from Google Cloud Platform.

Cost: GCP offers individual products so you can build the package that best suits your needs. A handy pricing calculator is available on the site to assist in constructing your custom line of services. 

Website: https://cloud.google.com

Illuminated Hosting Services

Illuminated provides both regular and cloud CF hosting. The company is well respected and praised on forums, and often rated highly for client satisfaction. Its Advanced Hosting Services feature technologies including ColdFusion hosting, .NET, NodeJS, PHP, Lucee, one-click WordPress install, site.pro, and Python, among others.

Illuminated Hosting Services Pros:

  • All major languages available.
  • Illuminated also specializes in Dedicated Servers, VPS, Private Cloud, Hybrid Cloud and Colocation services.
  • Good customer service and support.

Illuminated Hosting Services Cons:

  • ColdFusion 2018 and 2021 are still not available.

Cost: You can build your own plan, with basic pricing starting at $8/mo.

Website: https://www.illuminatedhosting.com/

Microsoft Azure

One of the fastest-growing cloud service providers is Microsoft Azure. It has more cloud server regions than any other service provider, and 90% of Fortune 500 companies trust their data with it.

Microsoft Azure Pros:

  • Adequate flexibility with access to VMs.
  • Fully scalable and compatible with CFML.
  • Great networking technology.

Microsoft Azure Cons:

  • Expensive compared to other providers.
  • Not very compatible with platforms other than Windows.
  • There have been a series of outages in the past.

Cost: Microsoft follows the way of GCP and AWS with a pay-as-you-go system, with a free 12-month introductory period for some services. Depending on your selected products, you build a package best suited for your budget. Microsoft Azure is expensive compared to other providers, which Microsoft tacitly acknowledges by offering a cost-saving primer.

Website: https://azure.microsoft.com/en-us/ 

Side note on Docker

  • Docker is not a CF host. It is a container tool that lets you run a CF server on any cloud hosting provider. 
  • Many CFers doing CF cloud hosting use Docker containers for their virtual CF servers. 
  • Easier spinning up of new developer or test servers (seconds vs. the current days/weeks)
  • Elimination of server configuration errors on spinning up new containers.
  • Option for future easy automated clustering of containers via Docker orchestration software.
  • Docker is the company driving the container movement and the only container platform provider to address every application across the hybrid cloud.

Related: Mark Drew offers an enlightening perspective on how to get started with Docker in our podcast, Getting Started Fast with Docker.

Pros:

  • Since Docker's containers are smaller than VMs, your current servers can support more containers than VMs.
  • Docker has lightning fast boot times.
  • Docker works on all OSes.

Cons: 

  • Containers are much more hacker-friendly; isolation is weaker in containers than in VMs.
  • Containers are not 100% isolated.
  • Limiting access when dealing with containers is more tricky than with VMs.

Cost: Docker’s basic community edition (CE) is available for free use. However, Enterprise packages are available through consultation and quotes.

Website: https://www.docker.com
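
To make the container idea concrete, here is a hedged sketch of a Dockerfile for a CFML app. It assumes the community ortussolutions/commandbox image and its CFENGINE variable, which lets the same image boot Lucee or Adobe CF; image tags, engine versions, and paths are assumptions to adjust for your setup.

```dockerfile
# Sketch only: assumes the ortussolutions/commandbox community image.
FROM ortussolutions/commandbox:latest

# Pick the CF engine the server should boot (e.g. lucee@5 or adobe@2023).
ENV CFENGINE=lucee@5

# Copy your CFML application into the container.
COPY ./app /app

EXPOSE 8080
```

Building this image once gives every developer and test server an identical CF configuration, which is where the "spin up in seconds" and "no configuration errors" benefits above come from.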

Adobe ColdFusion Hosting

Tip: Adobe ColdFusion has offered a list on its website with the noted partners for ColdFusion hosting services. It’s worth listing them here.

Coalesce

Coalesce Solutions works with your company to create a bespoke solution tailored to its business model and ambitions.

Coalesce depends on ColdFusion and AWS to build compliance-centered application server solutions to support their requirements as a PCI DSS (Payment Card Industry Data Security Standard) Level 1 Service Provider, as well as those of their customers operating in regulated environments with requirements such as PCI, HIPAA, FISMA, and FedRAMP.

Pros:

  • Works with Adobe as a preferred vendor to build and offer ColdFusion AMIs through AWS Marketplace, including AMI versions optimized by Coalesce for security and AWS service consumption.
  • Helps re-platform CF to work on AWS.
  • Emphasis on security.

Cons:

  • AWS-centric, with an overall emphasis on migration and compliance.

Website: https://www.coalesceservices.com/migration-acceleration

CFDynamics

CFDynamics provides hosting services for ColdFusion developers and webmasters that need high-speed delivery and a stable development and hosting platform. 

The CF-centric hosting service tries to pack as many logical, useful services as possible into its packages.

Pros:

  • Virtual Private Servers keep your data stored away from prying eyes.
  • Bang for your buck.
  • Money-back guarantee within 30 days of starting.

Cons:

  • The admin panel and UI could be better.

Website: https://cfdynamics.com

Hostek

Hostek is an Adobe ColdFusion hosting partner. They maintain a team of ColdFusion experts on hand. They also offer Fusion Reactor, a ColdFusion Monitoring platform, along with their services. 

Hostek CF Hosting Pros:

  • Their support system offers three different ways to contact them: 
    • Live Chat
    • Online Ticketing System
    • 24/7 Phone Line
  • You get a one-month free trial.
  • There is an array of technical specs, with additional add-ons available for purchase.

Hostek CF Hosting Cons:

  • Fluctuating performance speeds.
  • Poor uptime.
  • The control panel is not user-friendly.

Cost: In order to get a quote with Hostek, you must first schedule a free consultation with one of their experts who will help build a plan suitable for your company’s needs. 

Website: https://hostek.com

Media3

Media3 ColdFusion hosting service has been around since 1995. 

Pros:

  • They know CF better than most. CF drives its entire support, billing, sales, and control panel functions.
  • Also a ColdFusion Cloud Hosting company.

Cons:

  • Has a history of reliability issues.

Website: https://www.media3.net/adobe.cfm

Newtek Technology Services

Newtek ColdFusion Web Hosting has provided ColdFusion hosting solutions since 1997. The company also provides regulatory-compliant solutions, including HIPAA, PCI DSS, SOX, FISMA, and more. 

Pros:

  • Around-the-clock US-based phone support.
  • Packages built around the size of the business.

Cons:

  • Some history of reliability issues.

Website: https://newtektechnologysolutions.com/coldfusion-2018-hosting

Vivio Technologies

Vivio Web Hosting is a ColdFusion Hosting partner offering remote IT server management. It creates hosting plans individually, giving you control over IT infrastructure and costs.

Pros:

  • Automatic backups.
  • Bespoke (custom) solutions.
  • Money-back guarantee.

Cons:

  • No free domain included.
  • Higher pricing at entry-level.

Website: https://viviotech.net/coldfusion-hosting/

Lucee CFML Hosting

What about the people using open-source CFML, such as Lucee?

Here is a list of Lucee hosting services as well.

Hostek

Hostek is an Adobe ColdFusion hosting partner. They maintain a team of ColdFusion experts on hand. They also offer Fusion Reactor, a ColdFusion Monitoring platform, along with their services. 

Hostek CF Hosting Pros:

  • Their support system offers three different ways to contact them: 
    • Live Chat
    • Online Ticketing System
    • 24/7 Phone Line
  • You get a one-month free trial.
  • There is an array of technical specs, with additional add-ons available for purchase.

Hostek CF Hosting Cons:

  • Fluctuating performance speeds.
  • Poor uptime.
  • The control panel is not user-friendly.

Cost: In order to get a quote with Hostek, you must first schedule a free consultation with one of their experts who will help build a plan suitable for your company’s needs. 

Website: https://hostek.com/ 

Hostmedia

Pros:

  • Windows Lucee Hosting
  • Linux Lucee Hosting

Cons:

  • Some history of reliability issues.

Website: https://www.hostmedia.co.uk

xByte Lucee Cloud Hosting

xByte Hosting offers cloud CF hosting for Lucee CFML. As you can see from the testimonials, the clients are happy with the efficiency and quality of their team's work.

xByte Hosting CF Hosting Pros:

  • Their support team is available 24/7 via chat, phone, and email
  • Offers a free trial.
  • All major languages available.
  • xByte also specializes in Dedicated Servers, VPS, Hybrid Cloud, Colocation services, and even OnPrem Servers

xByte Hosting CF Hosting Cons:

  • Company is new to the CF space, but team is very experienced in CF
  • New control panel
  • No extremely low-cost shared servers

Cost: You have the option to use a prebuilt plan or schedule a consult to build your own.

Website: https://www.xbytehosting.com/

Lightsail

A popular option among CFers is to do a Lightsail deployment on AWS and run Lucee.

Pros:

  • It is not pricey.
  • A good option because you have full control of the server.

Cons:

  • A few occasions of failure or bad code deployment, but that is also down to you and the quality of your code. 🙂

“Lightsail is just basically a very low-cost VPS with burst CPU capacity and a bunch of proprietary images.” –  Steele Parker, in CF Programmers FB Group.

Website: Installing Lucee on AWS (EC2 or Lightsail) 

BONUS: Here's a guide to set it up with CommandBox, which has a very secure and fast production web server built-in.

Linode

“Linode and Lucee on a VM, perfect! And all in Europe. And, Linode support is one of the best I've encountered. Fast, 2 the point, knowledgeable and sincerely helpful. And even practical in helping. Do it 4 u or do it yourself, you choose 😁.” - Sebastiaan Naafs-van Dijk

Pros:

  • Decent alternative for AWS.
  • Simplify your infrastructure with Linode's cloud computing and hosting solutions and develop, deploy, and scale faster and easier.

Cons:

  • Some problems related to memory consumption, network settings, etc. But, the support and the troubleshooting guides are helpful in resolving this quickly and efficiently.

Website: Onlinebase

Conclusion About ColdFusion Hosting

Congratulations on making it to the end of this post! This is a long, comprehensive, and exhausting list, but the stakes are high for you and your company.

We still think that it is not possible for one service to offer the best configuration for all companies. The best ColdFusion hosting choice for your project depends on many factors that we discussed above.

Remember to keep your company’s goals, infrastructure, and budget in mind when choosing a ColdFusion hosting vendor. Be sure to note your needs and restrictions.

Doing so will allow you to make the proper choice for all your ColdFusion hosting needs.

Are there any other factors you consider when choosing a ColdFusion hosting vendor?

What hosting service do you use and why?

MDN as an example website

Mikes Notes

MDN is a Mozilla website for developers.

Structure

  • References
    • HTML
    • CSS
    • JavaScript
    • HTTP
    • Web APIs
    • Web Extensions
    • Web Technology
  • Guides
    • MDN Learning Area
    • HTML
    • CSS
    • JavaScript
    • Accessibility
  • Plus
    • Overview
    • AI Help
    • Updates
    • Documentation
    • Help
  • Curriculum
  • Blog
  • Tools
    • Playground
    • HTTP Observatory
    • AI Help

Resources

Notes

IBM Developer as an example website

Mikes Notes

IBM Developer has an interesting way of organising content on the developer home page.

Categories

  • Build
  • Learn
    • Guided projects
    • Tutorials
    • Articles
  • Explore
  • Code
  • Engage

Resources

Note

Building a ribbon menu

Mikes Notes

I have been building a ribbon menu that can be placed at the top of every webpage at www.ajabbi.com—something similar to MS Office 365, Dreamweaver, and other products.

Requirements

  • Batch generated from a database. (Database done)
  • Limited version for visitors to the public website.
  • A full array of button commands for logged-in users.
  • Users can configure tab menus.
  • Multi-language/writing systems (LTR, RTL, etc.).
  • Adaptable and automatically generated by the CMS.
  • Small, fast and stable.
  • Fit screens of different sizes.

I liked the look and functionality of the open-source MetroUI ribbon created by Olton from Ukraine. 

Options

  • CSS
  • HTMX
  • JS
  • Combo of the above

So far, I have a working version that uses CSS that friends have been testing. Next, I will try HTMX as a testing comparison. HTMX should be able to handle the DOM to generate the live UI interactions.

I want to avoid using a JS single-page application architecture like React; it's unnecessary and complicates future maintenance and technical debt.
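
As a sketch of the HTMX option: each tab button fetches its server-rendered panel and swaps it into place, so the CMS can keep batch-generating the markup. The endpoint paths and class names below are hypothetical; hx-get and hx-target are standard HTMX attributes.

```html
<!-- Sketch only: /ribbon/* endpoints and class names are hypothetical. -->
<script src="https://unpkg.com/htmx.org@1.9.12"></script>

<nav class="ribbon-tabs">
  <!-- Each button fetches server-rendered panel markup and swaps it in. -->
  <button hx-get="/ribbon/home"   hx-target="#ribbon-panel">Home</button>
  <button hx-get="/ribbon/insert" hx-target="#ribbon-panel">Insert</button>
</nav>

<div id="ribbon-panel">
  <!-- Default (limited) panel for visitors; logged-in users get more. -->
</div>
```

Because the panels stay server-rendered HTML, the database-driven generation and per-user button sets remain on the server, with no SPA framework required.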

Testing

Resources

TIL: 8 versions of UUID and when to use them

Mikes Notes

A recent issue of Architecture Notes had a reference to a blog article about the different types of UUID.

"Nicole dives into the world of UUIDs, explaining the eight different versions and their specific use cases. Here's a quick guide to help you choose the right UUID version for your needs. They also touch on the deprecated and specialized versions, highlighting that v7 should be used over v1 and v6 when possible, and v2 is reserved for specific, often secretive, security uses." - Architecture Notes

Resources

TIL: 8 versions of UUID and when to use them

By Nicole Tietz, Technically a Blog, Saturday, June 29, 2024
About a month ago [1], I was onboarding a friend into one of my side project codebases and she asked me why I was using a particular type of UUID. I'd heard about this type while working on that project, and it's really neat. So instead of hogging that knowledge for just us, here it is: some good uses for different versions of UUID.

What are the different versions?

Usually when we have multiple numbered versions, the higher numbers are newer and presumed to be better. UUIDs are different: there are 8 versions (v1 through v8) that simply serve different purposes, and all are defined in the standard.
Here, I'll provide some explanation of what they are at a high level, linking to the specific section of the RFC in case you want more details.
  • UUID Version 1 (v1) is generated from timestamp, monotonic counter, and a MAC address.
  • UUID Version 2 (v2) is reserved for security IDs with no known details [2].
  • UUID Version 3 (v3) is generated from MD5 hashes of some data you provide. The RFC suggests DNS and URLs among the candidates for data.
  • UUID Version 4 (v4) is generated from entirely random data. This is probably what most people think of and run into with UUIDs.
  • UUID Version 5 (v5) is generated from SHA-1 hashes of some data you provide. As with v3, the RFC suggests DNS or URLs as candidates.
  • UUID Version 6 (v6) is generated from timestamp, monotonic counter, and a MAC address. These are the same data as Version 1, but they change the order so that sorting them will sort by creation time.
  • UUID Version 7 (v7) is generated from a timestamp and random data.
  • UUID Version 8 (v8) is entirely custom (besides the required version/variant fields that all versions contain).

When should you use them?

With eight different versions, which should you use? There are a few common use cases that dictate which you should use, and some have been replaced by others.

You'll usually be picking between two of them: v4 or v7. There are also some occasions to pick v5 or v8.

  • Use v4 when you just want a random ID. This is a good default choice.
  • Use v7 if you're using the ID in a context where you want to be able to sort. For example, consider using v7 if you are using UUIDs as database keys.
  • v5 or v8 are used if you have your own data you want in the UUID, but generally, you will know if you need it.
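
A short Python sketch of these choices. The stdlib uuid module covers v4 and v5; v7 is not yet in most shipped Python versions, so the helper below hand-assembles one following the RFC 9562 bit layout (a sketch for illustration, not a vetted implementation).

```python
import os
import time
import uuid

# v4: random ID -- the good default.
random_id = uuid.uuid4()
assert random_id.version == 4

# v5: deterministic ID derived from a namespace + name (SHA-1 based).
name_id = uuid.uuid5(uuid.NAMESPACE_DNS, "example.com")
assert name_id == uuid.uuid5(uuid.NAMESPACE_DNS, "example.com")  # stable

# v7: timestamp + random data. Layout per RFC 9562:
# 48-bit Unix ms timestamp | 4-bit version | 12 rand | 2-bit variant | 62 rand.
def uuid7() -> uuid.UUID:
    ts_ms = time.time_ns() // 1_000_000
    rand = int.from_bytes(os.urandom(10), "big")  # 80 random bits
    value = (ts_ms & ((1 << 48) - 1)) << 80       # timestamp in top 48 bits
    value |= 0x7 << 76                            # version = 7
    value |= (rand >> 68) << 64                   # rand_a (12 bits)
    value |= 0b10 << 62                           # RFC variant bits
    value |= rand & ((1 << 62) - 1)               # rand_b (62 bits)
    return uuid.UUID(int=value)

first = uuid7()
time.sleep(0.002)  # tick at least one millisecond
second = uuid7()
assert first.version == 7
assert str(first) < str(second)  # later v7 UUIDs sort after earlier ones
```

The last assertion is the whole point of v7 as a database key: string order matches creation order.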

What about the other ones?

Per the RFC, v7 improves on v1 and v6 and should be used over those if possible. So you usually won't want v1 or v6. If you do need one of those, prefer v6.

  • v2 is reserved for unspecified security things. If you are using these, you probably can't tell me or anyone else about it, and you're probably not reading this post to figure out more about them.
  • v3 is superseded by v5, which uses a stronger hash. You will probably know if you need it.

References

  1. Despite the title of "today I learned," I did learn this over a month ago. In between, that month contained a lot of sickness and low energy, and I'm finally getting back into a cadence of having energy for some extra writing or extra coding.
  2. These were used in a project that either failed or is extremely secretive. I can't find much information about it and the official page's copyright notice was last updated in 2020.

The Developer's Guide to RBAC

Mikes Notes

These two articles about authentication, roles and permissions came from the WorkOS blog.

Resources

The Developer's Guide to RBAC: Part I

July 11, 2024

Authorization often takes a backseat to authentication, but it becomes critical as applications scale and require finer access control. This blog series covers the transition from basic role-based access control (RBAC) to more advanced fine-grained authorization (FGA), offering practical guidance for engineers implementing these systems.

Authorization is kind of like authentication’s scary cousin – it’s not something most developers worry about day 1, but you know it’s coming. Enterprises – and increasingly, smaller companies too – have rigid permissions requirements. They won’t even consider a product without granular access management. Building a robust, performant authorization system isn’t an if, it’s a when. But it’s complicated, opaque, and a lot of the science isn’t quite settled.

Like our guide to authentication, this series will walk through what developers need to know before implementing authorization, where it gets hard, and a sort of 201 perspective. This first installment will talk about authorization basics, and the second focuses specifically on working with identity providers (IdPs) via SCIM (or otherwise). 

Why does my app need authorization in the first place?

Authentication is a Day 0 problem; you essentially cannot build a useful app that doesn’t have user management in some way, shape, or form. But authorization is more like a Day 5 problem – you don’t run into it until you start selling your product into more serious customers. Most apps start with all users having the same levels of access and permissions, until a deal you’re trying to close says that they need a separation between admin and user roles. At this point, congrats, you need to build authorization. 

There are two camps of philosophy for how to build authorization into your app:

  • Role-based – users are assigned a role, and each role comes with a set of permissions. This is commonly acronym-ized as RBAC, or role-based access control.
  • Resource-based – each user has individual relationships with resources (like a repository in GitHub, or a base in Airtable). This is commonly referred to as fine grained authorization, or FGA. 
At WorkOS we firmly believe that FGA is the most scalable, foolproof way to handle authorization. But it’s not always straightforward to implement it from day 1 (or 5) – instead, it’s useful to think of a company’s timeline from basic authorization all the way towards complex, resource-based FGA. 

Consider the following journey of a team of developers building a completely fictional source code management platform – affectionately named BitHub – where you can host your code in cloud-based repositories. Like auth, which can be as simple or complex as your customers require, authorization starts out pretty basic. 

Stage 1: no authorization

In the first version of your app (and most apps), there is no authorization at all. Every page is accessible by every user. This is a completely reasonable way of modeling reality, and doesn’t become a barrier until the organizations you sell to demand a permissions scheme. In fact, there are some smaller SaaS apps out there that to this day have successfully stayed in Stage 1 of authorization; it all just depends on who your customers are.

For our team of BitHub developers, this would mean that every member of a given organization has full access to every repository. Not ideal, but also not completely untenable for smaller companies.

Stage 2: admins, and everyone else

The most common initial authorization-related request from customers is to add the concept of an admin. In its simplest form, admins can view certain pages that non-admins cannot. 

The difference between having no authorization and having the “admin” concept is not massive. The easiest way to implement it is by adding an is_admin column into your users table, and then adding a check to the pages that you want to gate to admins only. As long as the only difference between these two roles is viewing an entire page or not, the logic in code remains relatively simple.
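As a sketch of this stage (the user shape and response format here are illustrative, not from the article):

```javascript
// Stage 2 sketch: a single is_admin flag gates whole pages.
// The user object shape is hypothetical.
function canViewAdminPage(user) {
  return user.is_admin === true;
}

// A page handler calls the check before rendering:
function renderAdminPage(user) {
  if (!canViewAdminPage(user)) {
    // Tell the visitor why they can't see the page, rather than a bare error.
    return { status: 403, body: "You need admin access to view this page." };
  }
  return { status: 200, body: "Admin dashboard" };
}
```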

Though it should be common sense, a surprising number of companies forget to show anything more than a 500 error when an unauthorized, non-admin user tries to access a page they don’t have access to. Take the time to communicate to your visitor why they can’t see the page!

Stage 3: n>2 roles

BitHub is taking off and getting in front of larger customers with more nuanced team structures. The time has come to move beyond is_admin and add other roles like repository owner, contributor, etc. Congrats, you have now entered the realm of permissions. A permission just means that a user is able to do a specific thing: it could be as simple as viewing a page, or as complex as editing a specific row in an Airtable base. Implementing permissions well is quite complicated and will require a completely new data model (more on this later).

Stage 4: the great beyond

The more customers you acquire, the more complex their needs are and the more adjustments you need to keep making to your authorization data model. Your largest customer has 17 different types of software engineers and all need custom configurations of repo permissions. Your product surface area grows, and the number of “things” a user could conceivably need permission for grows exponentially. Instead of the rigidity of roles and permissions, what you really need is the ability to have each user define a different relationship with each resource (in our case, repo). At this point, doing role-based authorization just doesn’t make sense anymore.

So in summary: if your B2B SaaS app is successful enough, you will inevitably eventually end up needing fine grained, resource-based authorization. With all of that in mind, let’s take a deeper look at both RBAC and FGA, how you might go about implementing each, and some of the 201 problems developers run into. 

Role-based authorization: basics and not-so-basics

Role-based authorization is built on two major components:

  1. Each user is assigned a role, like admin or viewer. There are usually anywhere from 2-10 roles in your typical RBAC-using SaaS product.
  2. Each role has a set of permissions, or things that users with that role can and can’t do. Permissions can be as simple as the ability to view an entire page, or as complex as editing a specific row in a table.
A basic data model for an RBAC setup where each user can only have one role might look like this:
[IMG]

If you want to allow for more than one role, you’d need a separate roles mapping table that might look like this:
[IMG]

For each part of your application that you’d want to restrict to specific roles, you need to add what’s called a “check” – some code that makes sure that the currently authenticated user has a role that allows them to access it. 
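Such a check might look like the following sketch, where the role names and permission strings are illustrative:

```javascript
// RBAC check sketch: each role maps to a set of permissions, and a "check"
// asks whether the current user's role grants a given permission.
const rolePermissions = {
  admin: new Set(["repo:create", "repo:delete", "repo:view"]),
  contributor: new Set(["repo:create", "repo:view"]),
  viewer: new Set(["repo:view"]),
};

function check(user, permission) {
  const perms = rolePermissions[user.role];
  return perms ? perms.has(permission) : false;
}
```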

All of this is 101. But when you get into the details of how you’d actually implement a lot of this stuff, teams vary pretty widely in how they implement things. Broadly speaking, there are two philosophies on how to handle both data storage and your permissions logic: centralized and decentralized.

Data storage: centralized vs. decentralized

The data about which roles each user has is obviously stored in your production database somewhere. But how does your application actually access it?

In decentralized systems, role information gets stored in whatever object you’re using for session management and authentication. If you’re using JWTs, you might store a user’s role (and in some cases, what permissions that role has) in the JWT itself. As long as the token is active, it’s super easy and fast to have your application logic check against it before allowing a user to take an action that you might want to be restricted. 

In centralized systems, you create some sort of service that queries your database every time you want to do a check. This is how Google Zanzibar works.

It’s obviously a lot more work and complexity to build a centralized service, which is why most teams start with a decentralized implementation. But there are a bunch of downsides to storing role information in a session token:

  • It’s already notoriously difficult to invalidate a JWT, so your system will not be real time when role changes are made.
  • If your stack is more complex and has several services, you need to recreate the ingestion logic for each service.
  • There’s a pretty hard limit on how much data you can actually store in these tokens.
On that last point, it’s worth reading Carta’s post on how they built a system based on Zanzibar. They started with decentralized, JWT-based authorization, but over time found that tokens were getting as big as 1MB (!) and taking a prohibitively long time to build. 

Permissions logic: centralized vs. decentralized

The logic that handles your checks (is the currently authenticated user allowed to do this?) can also be centralized or decentralized. 

In a decentralized implementation, checks are distributed across whatever part of your application they relate to. If BitHub has an endpoint for creating a new repository, that endpoint’s code would have a check to make sure the currently authenticated user has the right permission to be able to create a repository. As discussed above, the actual role or permission information might be stored in a session object, or it might require a database query.

In a centralized implementation, you have a separate service or module that does all of your authorization checks. You either import it or call it from whichever part of your application you want to restrict access to.

Decentralized checks are obviously simpler and more straightforward to implement, but quickly become hard to manage (multiple code owners, yikes). So most teams usually start decentralized and then centralize things when the check sprawl becomes too burdensome.

Role Explosion

A very real problem that teams run into when using an RBAC system is called role explosion. At some point, you have too many customers with conflicting role requirements and it starts to degrade your system.

Back to our BitHub example: when we built our V1 of authorization, we started with some basic roles: admin and viewer. Great. But as we continue to add new customers, a few here and there want adjustments. One organization asks to add a “creator” role, so users can create repositories but not have admin rights. Easy enough. But then another organization asks for a “moderator” role, a second asks for a “team lead” role, and a third has an unusual setup for their repos and needs a custom role that allows team members to manage only certain repositories. And so on and so forth…

The basic idea is that if you’re running multi-tenant SaaS, every time a customer asks for a specific new type of role, you’re faced with a choice:

  • Implement the role as a sort of “override” just for that organization, which means you need to bifurcate your data model (bad), or
  • Denormalize all of your data, have custom roles for each organization, and voila, you’ve got role explosion.

Once this gets hairy enough, many teams opt to give their customers the ability to create their own custom roles. The data model for this is essentially one giant roles table that needs to link out to some sort of permissions table:

And then each organization’s rows can only be edited by that organization. By the time you have 1000 customers, this table already has 1M rows and starts to slow down all of your authorization checks. This is exactly the situation from our earlier story: once your RBAC setup becomes sufficiently complex, you’re basically building FGA. Speaking of which…

Resource-based authorization: FGA

Resource-based authorization is, well, resource based – instead of creating the abstraction of a role (with associated permissions), users relate directly to resources or objects. In our BitHub example, an FGA approach would look at individual relationships between users and repositories, while RBAC would focus on what general permissions a user with a type of role would have. 

At WorkOS we firmly believe that FGA is the most scalable, foolproof way to handle authorization. On a long enough timeline, every RBAC setup becomes too complex – especially as more and more apps become collaborative and host some kind of user generated content. 

So what does FGA actually look like? There are two main schools of philosophy.

Policy languages, like OPA

Policy languages are like DSLs (sort of) for specifying how users get access to resources. They’re kind of like a single interface for authorization checks (Carta’s wording). A popular open source implementation is Open Policy Agent, or OPA for short. A sample snippet from their docs, modified to our BitHub story, shows how you’d restrict deleting a repository to only users who have ownership over that repository:

package application.authz

import future.keywords

default allow := false

allow if {
    input.method == "DELETE"
    some repo_id
    input.path = ["repos", repo_id]
    input.user == input.owner
}
The thing about policy languages is that they don’t deal with the actual storage of your user and resource data – they just act as an interface between it and your app. Carta found that OPA didn’t work for their complex permission hierarchy:

This worked fine for simple permissions, but complex ones caused major issues for us. Complex permissions often queried several data models and added hundreds of milliseconds to response times.
For a more homegrown implementation story, check out this blog post by Figma engineering.

Full systems, like Zanzibar

In 2019, Google released a paper detailing how they built their internal authorization system, called Zanzibar. They didn’t include an official open source implementation, and since then we’ve seen several startups try and attack this problem with their own implementations. 

Zanzibar is based on tuples that represent relationships between users and objects:

(user, object, relationship)

(user_id, repository_id, owner)
(user_id, repository_id, creator)
The simplicity of these relationships solves the common role explosion problems you see with complex RBAC: every user’s relationship with an object is individual. Some people also call this ReBAC (relationship-based access control), which is not confusing at all!

You could conceivably implement a naive version of this in a database table:
But that table would get very large, very quickly, and would only work if you have an index on user_id, etc. So a lot of the magic in the Zanzibar paper is the system they built to implement it, which includes compute, storage, indices, regular running jobs, and more.
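That naive version might be sketched like this (in-memory rather than a real database table; the key format is illustrative):

```javascript
// Naive relationship-tuple store: each entry is (user, object, relation).
// Zanzibar itself is a large distributed system; this only shows the data model.
const tuples = new Set();

function writeTuple(user, object, relation) {
  tuples.add(`${user}#${object}#${relation}`);
}

function checkRelation(user, object, relation) {
  return tuples.has(`${user}#${object}#${relation}`);
}
```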

The last thing to note about Zanzibar is that it’s a storage system, but it won’t make your decisions for you – in that sense, it’s kind of the opposite of something like OPA. Zanzibar is basically really good at telling you what a user’s relationship with a given object is, really fast; the rest is up to your application logic.

If this all wasn’t already complicated enough, we left out one major detail: most of your enterprise customers will be running their identity through an IdP like Okta. How authorization works with IdPs and protocols like SCIM will be the subject of the second half of this series.

The Developer’s Guide to RBAC and IdPs: Part II

July 18, 2024

When building authorization for enterprise customers, supporting IdP role mapping is a challenging yet important task. This allows organizations to manage their roles and permissions through a single source of truth, the IdP, rather than dealing with unique permissions schemes for each SaaS tool.

So you’ve read Part I of our RBAC series, gotten a basic handle on RBAC and FGA, and think you’re ready to build enterprise-ready authorization into your app. Think again my friend! Most enterprises (and increasingly, smaller companies too) use an Identity Provider (IdP) like Okta to manage their internal user data. To support their needs, your authorization system will need to sync with and pull data from these platforms; and their APIs can be arcane, disorganized, and full of edge cases. This post will walk through how IdP integrations work with authorization and things to watch out for when building your own.

The basic concept of syncing with an IdP
The easiest way to think about the difference between regular authorization and IdP-based authorization is to consider the source:

  • For run-of-the-mill authorization systems, the data source for roles, resources, and permissions is generated inside your app by your users.
  • For IdP-based authorization systems, the data source for roles, resources, and permissions is an external data store (Okta, Azure AD, etc.).

Enterprises want to manage their organization’s roles and permissions from a single place instead of having to deal with tens (or hundreds) of different SaaS tools and their unique permissions schemes. So they designate different roles and groups in an IdP like Okta for each employee. An engineer might be in the engineering group, while a team lead for engineering might be in both the engineering group and the admin group. 


Here’s where you come in: your app needs to be able to pull that user data from Okta and then map it to the relevant roles (or resources, if you’re on the FGA track) that exist in your scheme. For example, you might want everyone in the engineering group at a customer of yours to have view-only access to resources in your product, while admins have write access. We’ll talk more about this mapping layer later, since you will likely need to build a custom UI for it down the road. 

IdP syncing and SCIM: push, not pull

IdP syncing does not work the way you’d expect it to. Instead of publishing an API that you can poll on a regular basis, most IdPs actually push data to you when they want (and sometimes the timing can be wonky). Putting yourself in the shoes of an IT admin for a brief moment, the process to integrate a new application (yours) looks something like this:

  1. You create a new application in Okta and name it BitHub.
  2. You assign the relevant users who should have access to BitHub (either directly, or using groups). This step can be very un-fun for IT admins: if you’re a huge company but only a few people need access to a new app, you either need to assign them directly or create an entire new group just for this app called something like “BitHub users.”
  3. You get a unique URL and key from BitHub and give it to Okta. This is how Okta knows where to send data.
  4. Okta then starts publishing information to BitHub. There’s usually an initial sync, and then subsequent updates when things change on Okta’s end (a group update, a last name change, etc.).

You’re at the mercy of the IdP now and when they decide to publish data: you cannot just query an endpoint and get the information you need as you please. Because of this, you also need to store all of this data on your own immediately once you get it, so your app can have persistent permissions that don’t rely on a third party for every check. 
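Because the IdP pushes data rather than answering queries, your app needs an ingest path that persists whatever arrives. A minimal sketch, with a hypothetical event shape and an in-memory store standing in for your database:

```javascript
// In-memory directory store; a real app would write to its own database so
// authorization checks never depend on the IdP being reachable.
const directory = new Map();

// Handle a pushed event from the IdP (the event shape here is illustrative,
// not an exact SCIM payload).
function handleIdpEvent(event) {
  if (event.type === "user.created" || event.type === "user.updated") {
    // Merge new attributes over whatever we already stored.
    directory.set(event.user.id, { ...directory.get(event.user.id), ...event.user });
  } else if (event.type === "user.deleted") {
    directory.delete(event.user.id);
  }
}
```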

The most ubiquitous protocol for handling this sync – and the culprit for why the data sync flow is so odd – is called the System for Cross-Domain Identity Management, or SCIM for short. It’s a specification for a hierarchy of users and groups, and (for better or worse) is the standard for how IdPs like Okta communicate group information to apps. An example of a SCIM object might look like this (from their docs):

{
  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
  "id": "2819c223-7f76-453a-919d-413861904646",
  "externalId": "dschrute",
  "meta": {
    "resourceType": "User",
    "created": "2011-08-01T18:29:49.793Z",
    "lastModified": "2011-08-01T18:29:49.793Z",
    "location": "https://example.com/v2/Users/2819c223...",
    "version": "W\/\"f250dd84f0671c3\""
  },
  "name": {
    "formatted": "Mr. Dwight K Schrute, III",
    "familyName": "Schrute",
    "givenName": "Dwight",
    "middleName": "Kurt",
    "honorificPrefix": "Mr.",
    "honorificSuffix": "III"
  },
  "userName": "dschrute",
  "phoneNumbers": [
    { "value": "555-555-8377", "type": "work" }
  ],
  "emails": [
    { "value": "dschrute@example.com", "type": "work", "primary": true }
  ]
}
Read more in our guide to SCIM and directory sync.
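Pulling the fields an app typically needs out of a SCIM User resource like the one above might look like this sketch:

```javascript
// Extract the commonly used fields from a SCIM 2.0 User resource.
function parseScimUser(resource) {
  const emails = resource.emails || [];
  // Prefer the email marked primary, fall back to the first one listed.
  const primaryEmail = emails.find((e) => e.primary) || emails[0];
  return {
    id: resource.id,
    externalId: resource.externalId,
    userName: resource.userName,
    displayName: resource.name && resource.name.formatted,
    email: primaryEmail && primaryEmail.value,
  };
}
```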

Another fun thing is that every IdP interprets SCIM slightly differently. A good example is group changes during deactivation. Imagine someone on your team goes on parental leave, and so IT deactivates their accounts temporarily. While they’re on leave, someone changes their group from engineering to management.

  • Okta doesn’t tell your app anything: because the user is deactivated, the change isn’t relevant. When the user gets reactivated, they just tell you that they’re now in the management group.
  • Azure does tell your app that the user was removed from the engineering group once they’re reactivated.
When your teammate gets back, they’re now in a different group – but for whatever reason, Okta still doesn’t send any notification to your app that this user is no longer a part of engineering, just that they’re reactivated and now in management. This is a clear security flaw, because now this user will still have access to engineering resources that they shouldn’t have access to. 

What all of this inevitably means is that teams end up bifurcating their application logic for different IdPs: your code will need to do something like “if they’re using Okta, do this, if they’re using Azure, do something else.”

And a final wrench: SCIM is not the only way to handle authorization syncs. We’ve written previously about SAML, the standard protocol for handling SSO. SAML is the protocol that tells applications who you are and why you should have access to a particular tool; but some companies will actually use it for authorization too. Stripe, for example, doesn’t support SCIM and instead requires enterprises to embed role and group information in SAML responses. But there’s a major vulnerability here too: you only get updates to group information when a user authenticates.

The TL;DR on all of this: if you look closely enough you can see apps out there that implement all possible permutations of SCIM, groups, attributes, and SAML. It’s kind of the wild west because of how difficult it is to build all of this stuff. Supporting just one IdP across all of these modalities is likely to be at least a month of engineering time to do it well. 

Creating a mapping layer to your permissions

A lot of the complexity in building IdP-based permissions is in mapping the information in the IdP to your unique set of permissions. There’s no way to do this automatically, you’ll need to build a UI that allows IT admins to manually map their IdP’s groups to your app’s permissions and roles (or resources). It might look something like this:

  • Show all the groups coming in from the IdP on the left side
  • Show all of the available roles or resources on the right side
  • Allow the admin to connect the two
Back to our BitHub example, our customer’s IdP might tell us that a user is in an engineering group, and another user is in an admin group. What does this mean for our app though, which has a creator, viewer, and admin role? We’d need to build a UI that allows an IT admin to say that everyone in the engineering group should be a default viewer, everyone in the admin group should be a default admin, etc. 
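A sketch of such a mapping, with illustrative group and role names and a simple most-privileged-wins rule for users in multiple groups:

```javascript
// Mapping-layer sketch: IdP groups -> app roles, as configured by the
// customer's IT admin in your mapping UI.
const groupRoleMap = {
  engineering: "viewer",
  admin: "admin",
};

function resolveRole(idpGroups, defaultRole = "viewer") {
  // If a user is in several mapped groups, take the most privileged role.
  const ranking = { viewer: 0, creator: 1, admin: 2 };
  const roles = idpGroups.map((g) => groupRoleMap[g]).filter(Boolean);
  if (roles.length === 0) return defaultRole;
  return roles.sort((a, b) => ranking[b] - ranking[a])[0];
}
```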

It’s worth noting that in some cases, companies will try to avoid this by requiring IT admins to go to the lengths of creating custom roles in the IdP that correspond to your app’s roles. This would mean attaching an attribute to every user in the IdP that says something like “BitHub – admin” or “BitHub – viewer.” In this scenario, you don’t need to build a mapping layer at all. This is how Loom’s SCIM integration works. 

In some (rare) cases, IT admins will prefer this attribute-based approach. And then you’ll need to bifurcate your application logic yet again to handle both customers that want a mapping layer and customers who don’t. But this attribute-based approach is extremely brittle: if BitHub ever wants to make changes to their roles or permissions, then the customer’s IT admin needs to go make the according changes on their end (which is almost impossible). This is exactly why building for SCIM is so fragmented. 

FGA and IdP roles

We’ve been awfully quiet about FGA up until this point: that’s because the fundamental concept of resource-based authorization doesn’t really work well with IdP-based authorization at all. FGA is dynamic: I just created a new Figma file, I just built a new Hubspot contact list, and here’s a project ID. There’s no way an IT admin would be able to interpret all of these and apply the appropriate groups and roles. 

For example, BitHub might want someone to be an owner (delete and access management rights) for repository A, but only a viewer for repository B. But as a whole, in the BitHub app overall, they’re only a viewer. You can create a sort of hybrid system where your general user permissions are synced from the IdP, but you can have resource-level exceptions; this information exists only in your tool, not the IdP. 

There basically are no shared concepts between your app and the IdP in a pure FGA scenario; SCIM exists in a role-based universe. Having said that, we have seen teams try to force this by requiring customers to input actual resource IDs in an IdP so that data can get sent to an app. Imagine each repository in BitHub has a resource ID, and you ask your customer’s IT admin to add (for each user) this resource ID plus a relevant role (creator, viewer) in a string, and then your app consumes this data and sets up the appropriate permissions. Don’t do this! 

We are guessing that things will go towards a hybrid RBAC/FGA direction in the future. A few best practices for setting up the FGA/role architecture on your end to make this as smooth as possible:

1. Design your global, static roles around IdP needs

IT admins will care the most about high level org roles like admin, and how well they can automate syncing those from the IdP to your app. Make sure those sync well independent of lower level application roles (like viewer, contributor, etc.).

2. Design your resource level roles around application needs 

IT admins are much more concerned with high level org roles than resource level stuff like whether someone is a viewer or contributor on a specific repository. The persona you’re designing for here is actually the application user, not the IT admin – it’s OK to not expect these kinds of roles to sync well from the IdP.

3. Build an intuitive hierarchy between (1) and (2)

Global, org-level roles like admin should automatically have default resource level roles and permissions. For example, an “admin” at the org level (which you sync through an IdP) might be a default “owner” of every repository, while “member” at the org level is a default “viewer” of all repositories.
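A sketch of this hierarchy, with illustrative role names and an in-app override table standing in for resource-level exceptions:

```javascript
// Default hierarchy sketch: org-level roles (synced from the IdP) imply
// default resource-level roles; exceptions live only in your app.
const defaultResourceRole = {
  admin: "owner",   // org admins own every repository by default
  member: "viewer", // org members can view every repository by default
};

function resourceRoleFor(user, repoId, overrides = {}) {
  // Per-resource exceptions (e.g. "u2:r1" -> "owner") win over defaults.
  const key = `${user.id}:${repoId}`;
  return overrides[key] || defaultResourceRole[user.orgRole] || "viewer";
}
```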

GraphQL vs REST

Mikes Notes

I am figuring out how to have REST and GraphQL API endpoints available for integration.

  • What is the best way to document them?
  • How are the endpoints structured?
  • What open-source front-facing API engines could be used?

Resources

ShareThis vs AddToAny

Mikes Notes

The Ajabbi website needs a simple way for people to print, email, or share any web page on social media. Tracking is not required, as it is against Ajabbi's privacy policy.

ShareThis is one option, and it's free. 

I have some questions.

  • Is it safe?
  • Why is it free?
  • Are there any privacy issues?
  • Can any tracking be turned off?
  • Is it reliable?
  • Is it WCAG accessible?
  • How does it compare with other social bookmarking websites?

I concluded that ShareThis was breaching users' privacy. I am now using AddToAny, which anonymises the data collected.

Wikipedia

"A social bookmarking website is a centralized online service that allows users to store and share Internet bookmarks. Such a website typically offers a blend of social and organizational tools, such as annotation, categorization, folksonomy-based tagging, social cataloging and commenting. The website may also interface with other kinds of services, such as citation management software and social networking sites. ..." - Wikipedia

"ShareThis is a technology company headquartered in Palo Alto, CA, with offices in New York, Chicago, and Los Angeles. It offers free website tools and plugins for online content creators. ShareThis collects data on user behavior, and provides this to advertisers and technology companies for ad targeting, analytics, and customer acquisition purposes. ShareThis has an exclusive license with the University of Illinois for patent applications made by co-founder David E. Goldberg. The patents include genetic algorithms and machine learning technologies used for the purposes of information collection and discovery based on a user's sharing behavior. ..." - Wikipedia

Resources

ShareThis Instructions

How to Reinitialize ShareThis Buttons With Specific Sharing Parameters
In this guide, we’ll teach you how to reinitialize (reload) our ShareThis buttons to use specific sharing parameters. By default, the ShareThis widget loader loads as soon as the browser encounters the JavaScript tag, typically in the <head> tag of your page. ShareThis assets are generally loaded from a CDN closest to the user. However, if you wish to change the default setting so that the widget loads after your web page has completed loading, then you simply set a parameter in the page.

Reinitializing the buttons would allow you to:

  • Take control of when to display the buttons, for example, until a modal or pop-up opens up.
  • Have different instances of the buttons on the same page with different configurations, for example, if you want to display only the Twitter button on a specific part and the Facebook one on another. Or if you want to have different languages on different sets of buttons.
  • Auto refresh share button properties when new links are loaded with share buttons (infinite scroll).
Note: If you don’t want to reinitialize the buttons with specific parameters, you could just use the window.__sharethis__.initialize() function as it is whenever your modal, pop-up, etc. activates. Please note that you may have to set a delay of around 0.3 to 1 second before adding the line of code above to give time for the container to appear, otherwise, the function will be called too soon.
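A hedged sketch of that delayed call (the helper name and delay value are assumptions, and window.__sharethis__ must already be defined by the ShareThis loader script):

```javascript
// Call ShareThis initialize after a short delay so the container element
// exists by the time the buttons are drawn (e.g. when a modal opens).
function reinitializeShareButtons(delayMs = 500) {
  setTimeout(() => {
    if (typeof window !== "undefined" && window.__sharethis__) {
      window.__sharethis__.initialize();
    }
  }, delayMs);
}
```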

Add <div> and Javascript code

// render the html (the container <div> was stripped from the original page; the id is illustrative)
<div id="my-inline-buttons"></div>

// load the buttons
window.__sharethis__.load('inline-share-buttons', {
    /* this is where your configurations must be, read the Configuration section */
});

Once you’ve added the above portion of the code, you’re now able to include any or all of the following configuration options below.

Configuration Options

config = { 
   alignment: STRING, // left, right, center, justified.
   container: STRING, // id of the dom element to load the buttons into
   enabled: BOOLEAN,
   font_size: INTEGER, // small = 11, medium = 12, large = 16.
   id: STRING, // load the javascript into a specific dom element by id attribute
   labels: STRING, // "cta", "counts", or "none"
   language: STRING, // IETF language tag in which the buttons' labels are
   min_count: INTEGER, // minimum amount of shares before showing the count
   padding: INTEGER, // small = 8, medium = 10, large = 12.
   radius: INTEGER, // in pixels
   networks: ARRAY[STRING],
   show_total: BOOLEAN,
   show_mobile_buttons: BOOLEAN, // forces sms to show on desktop
   use_native_counts: BOOLEAN, // uses native facebook counts from the open graph api
   size: INTEGER, // small = 32, medium = 40, large = 48.
   spacing: INTEGER, // spacing = 8, no spacing = 0.
};
  

Example

// render the html
<div id="my-inline-buttons"></div>

// load the buttons
window.__sharethis__.load('inline-share-buttons', {
    alignment: 'left',
    id: 'my-inline-buttons',
    enabled: true,
    font_size: 11,
    padding: 8,
    radius: 0,
    networks: ['messenger', 'twitter', 'pinterest', 'sharethis', 'sms', 'wechat'],
    size: 32,
    show_mobile_buttons: true,
    spacing: 0,
    url: "https://www.sharethis.com", // custom url
    title: "My Custom Title",
    language: "en",
    image: "https://18955-presscdn-pagely.netdna-ssl.com/wp-content/uploads/2016/12/ShareThisLogo2x.png", // useful for pinterest sharing buttons
    description: "My Custom Description",
    username: "ShareThis" // custom @username for twitter sharing
});

Available Networks

Social Service data-network Code
Black Lives Matter blm
Blogger blogger
Buffer buffer
Copy Link copy
Diaspora diaspora
Digg digg
Douban douban
Email email
Evernote evernote
Facebook facebook
Flipboard flipboard
Gmail gmail
Google Bookmarks googlebookmarks
Hacker News hackernews
Instapaper instapaper
iOrbix iorbix
Kakao kakao
Koo App kooapp
Line line
Linkedin linkedin
LiveJournal livejournal
Mail.Ru mailru
Meneame meneame
Messenger messenger
Odnoklassniki odnoklassniki
Outlook outlook
Pinterest pinterest
Pocket getpocket
Print print
Push to Kindle kindleit
Qzone qzone
Reddit reddit
Refind refind
Renren renren
Skype skype
Surfingbird surfingbird
Telegram telegram
Tencent QQ tencentqq
Threema threema
Trello trello
Tumblr tumblr
Twitter twitter
Viber viber
VK vk
WeChat wechat
ShareThis sharethis
Sina Weibo weibo
SMS sms
Snapchat snapchat
WhatsApp whatsapp
WordPress wordpress
Xing xing
Yahoo Mail yahoomail
Yummly yummly
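The codes in the right-hand column are the strings accepted by the `networks` config array. A minimal sketch (the particular networks chosen here are arbitrary):

```javascript
// Sketch: selecting a subset of networks using the data-network codes
// from the table above.
const shareConfig = {
  alignment: 'left',
  networks: ['facebook', 'linkedin', 'whatsapp', 'copy'], // codes from the table
  show_total: true,
  size: 32
};

// In the browser this config would be passed to the loader, e.g.:
// window.__sharethis__.load('inline-share-buttons', shareConfig);
console.log(shareConfig.networks.join(', '));
```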

Lazy loading and ShareThis tools

The ShareThis tools load and display only when the page first loads. If you are using tools such as the Image Share Buttons or Video Share Buttons and your site lazy-loads images or uses similar technologies, you will need to reinitialize the tools once new elements appear.

ShareThis scans for images and embedded videos only at that initial point. Images further down the page have not loaded yet, so ShareThis is unaware of them, even when they load later, and will not display buttons on them.

As a workaround, the JavaScript code below checks every 3 seconds whether any scrolling has occurred and, if so, reinitializes the buttons. Scrolling serves as a proxy for images having loaded, since lazy-loaded images load once a visitor scrolls to their part of the page.

// state variable for scrolling
let scrolling = false;

// on scroll, set the scrolling flag to true
window.onscroll = function () {
    scrolling = true;
};

// check every 3 seconds; if any scrolling happened in that interval,
// reset the flag and reinitialize the buttons
setInterval(() => {
    if (scrolling) {
        scrolling = false;
        window.__sharethis__.initialize();
    }
}, 3000);

Notes

Please keep in mind that Open Graph tags will take precedence when sharing on Facebook and other social channels. If linking to a custom URL, please be sure to have Open Graph tags filled out for that page as well.

As with our other tools, we recommend testing on the live production site, as some resources are not available in a local/test environment.
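Since Facebook and several other channels read Open Graph tags rather than the JavaScript properties, the page behind any custom share URL should carry its own tags. A minimal sketch of the relevant meta tags in that page's `<head>` (all values here are placeholders, not from the ShareThis docs):

```html
<meta property="og:title" content="My Custom Title" />
<meta property="og:description" content="My Custom Description" />
<meta property="og:image" content="https://www.example.com/share-image.png" />
<meta property="og:url" content="https://www.example.com/page" />
```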

Order of Precedence

It is important to remember the order of precedence in which the ShareThis code processes share properties. To prevent conflicts, we generally recommend using a single approach to specify sharing properties on your pages.

  • Dynamically specified JavaScript properties (highest precedence)
  • Properties specified in tags (second precedence)
  • Open Graph Protocol tags (lowest precedence)
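To illustrate the three levels, here is a hedged sketch assuming all three are set on one page (the `data-url` attribute name and the placeholder URLs are illustrative assumptions, not confirmed by this document):

```html
<!-- Lowest precedence: Open Graph tag in the page head -->
<meta property="og:url" content="https://www.example.com/from-og" />

<!-- Second precedence: a property specified on the tag itself -->
<div class="sharethis-inline-share-buttons"
     data-url="https://www.example.com/from-tag"></div>

<script>
// Highest precedence: a dynamically specified JavaScript property.
// With all three present, the buttons would share /from-js.
window.__sharethis__.load('inline-share-buttons', {
  url: 'https://www.example.com/from-js'
});
</script>
```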