The time has come to create public-facing websites for Ajabbi so that people can use the Pipi software. There is a 20,000-page backlog of autogenerated and human-written documentation to put somewhere. Here is an experimental structure to test with people to find out what works best. This structure is likely to change a lot based on feedback.
ajabbi.com
A website where people can find general information, sign up and log in.
- Plans (pricing, features)
- Support (docs, training, whitepapers, events)
blog.ajabbi.com
A website for Mike Peters to write about how Pipi was developed.
developer.ajabbi.com
A website where developers can get detailed information about building apps to run on the platform.
docs.ajabbi.com
A website where developers can get technical reference documentation for the platform itself.
foundation.ajabbi.com
A website for the non-profit organisation that will receive any net income and redistribute it.
research.ajabbi.com
A website about experimental research into complex adaptive systems and machine learning.
- Machine learning algorithms
workspace.ajabbi.com
A website about the domain applications, including live demos.
The next steps in building these websites include:
- Find some low-cost hosting for thousands of static documentation pages. (done)
- Configure email server. (done)
- Register subdomains. (done)
- Draft up skeleton websites. (done)
- Use robots.txt to stop web crawlers, especially Google, from indexing and caching pages that are likely to disappear (see the robots.txt sketch after this list). (underway)
- Upload sample documentation to check site-wide navigation and usability. (underway)
- Organise some meetings to get public feedback and suggestions for improvement.
- Repeat the design and test process.
- Build, test, and deploy a DNS engine to automatically create DNS records at a domain registrar via the command line or API (a rough sketch follows this list).
- Build, test, and deploy an FTP engine that automatically uploads web pages to the host as Pipi's CMS Engine generates them (a rough sketch follows this list).
- Render and bulk upload working documentation.
- Edit robots.txt to allow web crawlers, especially Google, to index and cache pages.
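For the two crawler steps above, the interim block is just a robots.txt at the root of each subdomain asking all crawlers to stay away; the later step swaps it for an allow-all version. A minimal sketch of both files:

```
# Interim robots.txt: ask all crawlers not to index anything yet
User-agent: *
Disallow: /
```

```
# Replacement robots.txt once the documentation is stable: allow everything
User-agent: *
Disallow:
```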
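For the DNS engine, the likely shape is a small script that calls the registrar's API to create one record per subdomain. The endpoint, token, and payload below are placeholders rather than any particular registrar's real API; a rough sketch in Python:

```python
import os
import requests

# Placeholder endpoint and credentials; a real registrar's API will differ.
REGISTRAR_API = "https://api.example-registrar.com/v1/domains/ajabbi.com/records"
API_TOKEN = os.environ["REGISTRAR_API_TOKEN"]

SUBDOMAINS = ["blog", "developer", "docs", "foundation", "research", "workspace"]


def create_cname(subdomain: str, target: str) -> None:
    """Create a CNAME record pointing the subdomain at the static host."""
    payload = {"type": "CNAME", "name": subdomain, "content": target, "ttl": 3600}
    response = requests.post(
        REGISTRAR_API,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()


if __name__ == "__main__":
    for name in SUBDOMAINS:
        create_cname(name, "static-host.example.net")
        print(f"Created CNAME for {name}.ajabbi.com")
```

Only the endpoint and payload format change per registrar; the loop over subdomains stays the same.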
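For the FTP engine, a first cut can mirror the directory of pages rendered by the CMS Engine to the host using Python's standard ftplib. The host name, credentials, local output path, and remote root here are all assumptions:

```python
import os
from ftplib import FTP, error_perm
from pathlib import Path

# Assumed values; the real host, credentials, and output directory will differ.
FTP_HOST = "ftp.example-host.com"
FTP_USER = os.environ["FTP_USER"]
FTP_PASS = os.environ["FTP_PASS"]
LOCAL_ROOT = Path("output/docs")   # pages rendered by the CMS Engine
REMOTE_ROOT = "/public_html"


def ensure_remote_dir(ftp: FTP, path: str) -> None:
    """Create the remote directory, ignoring the error if it already exists."""
    try:
        ftp.mkd(path)
    except error_perm:
        pass


def upload_tree(ftp: FTP) -> None:
    """Mirror the local output directory to the FTP host."""
    for local_path in LOCAL_ROOT.rglob("*"):
        relative = local_path.relative_to(LOCAL_ROOT)
        remote_path = f"{REMOTE_ROOT}/{relative.as_posix()}"
        if local_path.is_dir():
            ensure_remote_dir(ftp, remote_path)
        else:
            with local_path.open("rb") as handle:
                ftp.storbinary(f"STOR {remote_path}", handle)


if __name__ == "__main__":
    with FTP(FTP_HOST) as ftp:
        ftp.login(FTP_USER, FTP_PASS)
        upload_tree(ftp)
```

A later version would only upload pages that have changed, but a full mirror is enough to test the pipeline end to end.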