ColdFusion CFEXECUTE

 Mike's Notes

  • I am learning to use <cfexecute> to enable Pipi to use the command line to control its host server autonomously.
  • The server has no public access.
  • How to do this safely?
  • Code examples below in yellow

See notes from

  • Adobe Help
  • CFDocs
  • Ben Nadel (2020)
  • Brian Harvey - Heartland Web Development (2015)

From Adobe Help > cfexecute

Executes a ColdFusion developer-specified process on a server computer.

<cfexecute
name = "application name"
arguments = "command line arguments"
outputFile = "output filename"
errorFile = "filename to store error output"
timeout = "timeout interval"
variable = "variable name"
errorVariable = "variable name">
...
</cfexecute>

From CFDocs > cfexecute

name (string)

Absolute path of the application to execute.


On Windows, you must specify an extension; for example, C:\myapp.exe.

arguments (any)

Command-line variables passed to application. If specified as string, it is processed as follows:

  • Windows: passed to process control subsystem for parsing.
  • UNIX: tokenized into an array of arguments. The default token separator is a space; you can delimit arguments that have embedded spaces with double quotation marks.
If passed as array, it is processed as follows:
  • Windows: elements are concatenated into a string of tokens, separated by spaces. Passed to process control subsystem for parsing.
  • UNIX: elements are copied into an array of exec() arguments

outputfile (string)

File to which to direct program output. If no outputfile or variable attribute is specified, output is displayed on the page from which it was called.

If not an absolute path (starting with a drive letter and a colon, or a forward or backward slash), it is relative to the CFML temporary directory, which is returned by the GetTempDirectory function.

variable (string)

Variable in which to put program output. If no outputfile or variable attribute is specified, output is displayed on the page from which it was called.

timeout (numeric)

Default: 0
Length of time, in seconds, that CFML waits for output from the spawned program.

errorVariable (string)

The name of a variable in which to save the error stream output.

errorFile (string)

The pathname of a file in which to save the error stream output. If not an absolute path (starting with a drive letter and a colon, or a forward or backward slash), it is relative to the ColdFusion temporary directory, which is returned by the GetTempDirectory function.

terminateOnTimeout (boolean)

Default: false
Lucee 4.5+. Terminates the process after the specified timeout is reached. Ignored if timeout is not set or is 0.

directory (string)

Lucee 5.3.8+. The working directory in which to execute the command.

From Ben Nadel's Blog

Running CFExecute From A Given Working Directory In Lucee CFML 5.2.9.31
5 April 2020

When you invoke the CFExecute tag in ColdFusion, there is no option to execute the given command from a particular working directory. That's why I recently looked at using the ProcessBuilder class to execute commands in ColdFusion. That said, the default community response to anyone who runs into a limitation with the CFExecute tag is generally, "Put your logic in a bash script and then execute the bash script." I don't really know anything about bash scripting (the syntax looks utterly cryptic); so, I thought it might be fun to try and create a bash script that will proxy arbitrary commands in order to execute them in a given working directory in Lucee CFML 5.2.9.31.

The goal here is to create a bash script that will take N-arguments in which the 1st argument is the working directory from which to evaluate the rest (2...N) of the arguments. So, instead of running something like:

ls -al path/to/things

... we could run something like:

my-proxy path/to ls -al things

In this case, we'd be telling the my-proxy command to execute ls -al things from within the path/to working directory. To hard-code this example as a bash script, we could write something like this:

cd path/to # Change working directory.

ls -al ./things # Run ls command RELATIVE to WORKING DIRECTORY.

The hard-coded version illustrates what we're trying to do; but, we want this concept to be dynamic such that we could run any command from any directory. To this end, I've created the following bash script, execute_from_directory.sh, through much trial and error:

#!/bin/sh

# In the current script invocation, the first argument needs to be the WORKING DIRECTORY
# from whence the rest of the script will be executed.
working_directory=$1

# Now that we have the working directory argument saved, SHIFT IT OFF the arguments list.
# This will leave us with a "$@" array that contains the REST of the arguments.
shift

# Move to the target working directory.
cd "$working_directory"

# Execute the REST of command from within the new working directory.
# --
# NOTE: The $@ is a special array in BASH that contains the input arguments used to
# invoke the current executable.
"$@"

CAUTION: Again, I have no experience with bash scripting. As such, please take this exploration with a grain of salt - a point of inspiration, not a source of truth!

What this is saying, as best as I think I understand it, is take the first argument as the desired working directory. Then, use the shift command to shift all the other arguments over one (essentially shifting the first item off of the arguments array). Then, change working directory and execute the rest of the arguments from within the new working directory.

Because we are using the special arguments array notation, "$@", instead of hard-coding anything, we should be able to pass-in an arbitrary set of arguments. I think. Again, I have next to no experience here.
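Since the post hedges on how shift and "$@" interact, here is a tiny stand-alone sketch of just that mechanism (the function name and sample arguments are mine, not from the post):

```shell
#!/bin/sh

# Demo of the proxy's argument handling: the first positional argument is
# saved as the working directory, then shifted off, leaving "$@" holding
# only the remaining arguments.
describe_invocation() {
	working_directory=$1
	# Drop the first argument from the positional-argument list.
	shift
	# "$@" (joined here via $*) now expands to only the remaining arguments.
	echo "dir=$working_directory cmd=$*"
}

describe_invocation path/to ls -al things
# Prints: dir=path/to cmd=ls -al things
```

This peel-off pattern is the same thing execute_from_directory.sh does before running cd and "$@".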

Once I created this bash-script, I had to change the permissions to allow for execution:

chmod +x execute_from_directory.sh

To test this from the command-line, outside of ColdFusion, I tried to list out the files in my images directory - path/to/my-cool-images - using a combination of working directories and relative paths:

wwwroot# ./execute_from_directory.sh path/to ls -al ./my-cool-images

total 17740
drwxr-xr-x 11 root root     352 Apr 11 11:28 .
drwxr-xr-x  4 root root     128 Apr 15 10:07 ..
-rw-r--r--  1 root root 1628611 Dec  1 20:04 broad-city-yas-queen.gif
-rw-r--r--  1 root root  188287 Mar  4 14:09 cfml-state-of-the-union.jpg
-rw-------  1 root root 3469447 Jan 28 16:59 dramatic-goose.gif
-rw-------  1 root root 2991674 Dec 14 15:39 engineering-mistakes.gif
-rw-r--r--  1 root root  531285 Dec  1 21:10 monolith-to-microservices.jpg
-rw-r--r--  1 root root  243006 Dec 24 12:34 phoenix-project.jpg
-rw-r--r--  1 root root 1065244 Jan 22 14:41 rob-lowe-literally.gif
-rw-r--r--  1 root root 7482444 Mar 25 10:15 thanos-inifinity-stones.gif
-rw-r--r--  1 root root  239090 Dec 29 13:08 unicorn-project.jpg

As you can see, I was able to execute the ls command from within the path/to working directory! Woot woot!

To test this from ColdFusion, I'm going to recreate my zip storage experiment; but, instead of using the ProcessBuilder class, I'm going to use the CFExecute tag to run the zip command through my execute_from_directory.sh bash script:

<cfscript>

    // Reset demo on subsequent executions.
    cleanupFile( "./images.zip" );

    // ------------------------------------------------------------------------------- //
    // ------------------------------------------------------------------------------- //

    // Normally, CFExecute has no sense of a "working directory" during execution.
    // However, by proxying our command-line execution through a Shell Script (.sh), we
    // can CD (change directory) to a given directory and then dynamically execute the
    // rest of the commands.
    executeFromDirectory(
        // This is the WORKING DIRECTORY that will become the context for the rest of
        // the script execution.
        expandPath( "./path/to" ),
        // This is the command that we are going to execute from the WORKING DIRECTORY.
        // In this case, we will execute the ZIP command using RELATIVE PATHS that are
        // relative to the above WORKING DIRECTORY.
        "zip",
        // These are the arguments to pass to the ZIP command.
        [
            // Regulate the speed of compression: 0 means NO compression. This is setting
            // the compression method to STORE, as opposed to DEFLATE, which is the
            // default method. This will apply to all files within the zip - if we wanted
            // to target only a subset of file-types, we could have used "-n" to
            // whitelist a subset of the input files (ex, "-n .gif:.jpg:.jpeg:.png").
            "-0",
            // Recurse the input directory.
            "-r",
            // Define the OUTPUT file (our generated ZIP file).
            expandPath( "./images.zip" ),
            // Define the INPUT file - NOTE that this path is RELATIVE TO THE WORKING
            // DIRECTORY! By using a relative directory, it allows us to generate a ZIP
            // in which the relative paths become the entries in the resultant archive.
            "./my-cool-images",
            // Exclude .DS_Store files from the zip.
            "-x *.DS_Store"
        ]
    );

    echo( "<br />" );
    echo( "Zip file size: " );
    echo( numberFormat( getFileInfo( "./images.zip" ).size ) & " bytes" );
    echo( "<br /><br />" );

    // ------------------------------------------------------------------------------- //
    // ------------------------------------------------------------------------------- //

    /**
    * I execute the given series of commands from the given working directory. The
    * standard output is printed to the page. If an error is returned, the page request
    * is aborted.
    *
    * @workingDirectory I am the working directory from whence to execute commands.
    * @commandName I am the command to execute from the working directory.
    * @commandArguments I am the arguments for the command.
    */
    public void function executeFromDirectory(
        required string workingDirectory,
        required string commandName,
        required array commandArguments
        ) {

        // The Shell Script that's going to proxy the commands is expecting the working
        // directory to be the first argument. As such, let's create a normalized set of
        // arguments for our proxy that contains the working directory first, followed by
        // the rest of the commands.
        var normalizedArguments = [ workingDirectory ]
            .append( commandName )
            .append( commandArguments, true )
        ;

        execute
            name = expandPath( "./execute_from_directory.sh" )
            arguments = normalizedArguments.toList( " " )
            variable = "local.successOutput"
            errorVariable = "local.errorOutput"
            timeout = 10
            terminateOnTimeout = true
        ;

        if ( len( errorOutput ?: "" ) ) {

            dump( errorOutput );
            abort;

        }

        echo( "<pre>" & ( successOutput ?: "" ) & "</pre>" );

    }

    /**
    * I delete the given file if it exists.
    *
    * @filename I am the file being deleted.
    */
    public void function cleanupFile( required string filename ) {

        if ( fileExists( filename ) ) {

            fileDelete( filename );

        }

    }

</cfscript>


As you can see, I've created an executeFromDirectory() User-Defined Function (UDF) which takes, as its first argument, the working directory from which we are going to execute the rest of the commands. Then, instead of executing the zip command directly, we are proxying it through our bash script.

And, when we run the above ColdFusion code, we get the following output:



Very cool! It worked! As you can see from the zip debug output, the entries in the archive are based on the relative paths from the working directory that we passed to our proxy.


Now that I know that the ProcessBuilder class exists, I'll probably just go with that approach in the future. That said, it was exciting (and, honestly, very frustrating) for me to write my first real bash script to allow the CFExecute tag to execute commands from a given working directory in Lucee CFML. Bash scripting seems... crazy; but it also seems like something worth learning a bit more about.


From Heartland Web Development

By Brian Harvey 27 April 2015

In order to create a real-time dynamic IP whitelist solution for a client, I needed to be able to SSH into a pfSense firewall using ColdFusion and kick off a few .sh files to update the firewall's IP whitelist. ColdFusion doesn't have the ability to SSH directly, but by using <cfexecute>, Putty and Plink you can get the job done.

Here is how to do it:

1.  Download Putty and Plink. 
Putty is an SSH client for Windows, and Plink is a command-line interface to Putty.

2. Launch Putty and create a "stored session" to the target server. I named my stored session "firewall".  Now log into the remote server using the saved session so that an authentication key is generated and stored in Putty. Once you have generated an authentication key and are logged in you can exit your session and close Putty.



3. Now you can run <cfexecute> to SSH into the remote server and run .sh files.


<cfexecute name="C:\WINDOWS\system32\cmd.exe"
      arguments="/c C:\plink.exe -v root@firewall -pw MyPassword /cf/conf/putconfig.sh"
      timeout="5">
</cfexecute>

There was one "gotcha" I discovered with running the command using ColdFusion. I was able to run the plink command all day long from the cmd prompt:

C:\plink.exe -v root@firewall -pw MyPassword /cf/conf/putconfig.sh

But when I tried to run it as an argument in <cfexecute> it would fail.  I was stumped until I came across this blog post by Ben Forta.

Ben points out that in Windows, you need to insert "/c" as the first argument in the string in order to tell Windows to spin up a command interpreter that runs the command and terminates upon completion.

This Works:  arguments="/c C:\plink.exe -v root@firewall -pw MyPassword /cf/conf/putconfig.sh"  timeout="5"

This Doesn't Work:  arguments="C:\plink.exe -v root@firewall -pw MyPassword /cf/conf/putconfig.sh"  timeout="5"

That one little extra argument had me spinning my wheels for the better part of a day until I ran across Ben's post.

ColdFusion CFSCHEDULE

Mike's Notes

  • Pipi has used <cfschedule> as a system timer since version 4 (2005)
  • Here is more information about using this system tag 
  • The sample code is in yellow

From Charlie Arehart

CF Scheduled tasks are based on the open-source Quartz framework, since CF10
  • Its format (like the cron format for the Spring framework) adds a leading "second" value
  • Format is: second, minute, hour, day of month, month, day of week, year
  • The "second" value is required, as are the rest, but "year" is optional
  • quartz-scheduler.org/documentation/quartz-2.2.2/tutorials/crontrigger.html
  • Free online tool: freeformatter.com/cron-expression-generator-quartz.html
  • Quartz version in CF2023 is 2.4.0
  • quartz.properties settings (in cfusion\lib\quartz) are unchanged from CF10 through CF2023
  • Customize Quartz: advanced users can edit quartz.properties in cfusion\lib\quartz

More in CF Scheduled Tasks: more than you may know, and should (PDF of a 31 July 2023 presentation)
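To make the seven-field format above concrete, here are a few illustrative Quartz cron expressions (my own examples, not from the presentation):

```
# second  minute  hour  day-of-month  month  day-of-week  [year]
0 0 7 * * ?        # 7:00:00 AM every day
0 */15 * * * ?     # every 15 minutes, on the minute
0 30 2 ? * MON     # 2:30:00 AM every Monday
```

Note that Quartz requires a "?" in either the day-of-month or the day-of-week field, since it cannot currently honor both at once.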

From Ben Nadel's Blog 

Creating A Database-Driven Scheduled Task Runner In ColdFusion

By Ben Nadel on June 11, 2023

While Dig Deep Fitness won't have much in the way of asynchronous processing (at least initially), there are some "cleanup" tasks that I need to run. As such, I've created a scheduled task as part of the application bootstrapping. This approach has served me well over the years: I create a single ColdFusion scheduled task that pulls task data out of the database and acts as the centralized ingress to all the actual tasks that need to be run.

As much as possible, I like to own the logic in my application. Which means moving as much configuration into the Application.cfc as is possible. Thankfully, ColdFusion allows for a whole host of per-application settings such as SMTP mail servers, database datasources, file mappings, etc. By using the CFSchedule tag, we can include ColdFusion scheduled task configuration right there in the onApplicationStart() event handler.

Here's a snippet of my onApplicationStart() method that sets up the single ingress point for all of my scheduled tasks. The ingress task is designed to run every 60 seconds.


component {

    public void function onApplicationStart() {

        // ... truncated code ...

        var config = getConfigSettings( useCacheConfig = false );

        cfschedule(
            action = "update",
            task = "Task Runner",
            group = "Dig Deep Fitness",
            mode = "application",
            operation = "HTTPRequest",
            url = "#config.scheduledTasks.url#/index.cfm?event=system.tasks",
            startDate = "1970-01-01",
            startTime = "00:00 AM",
            interval = 60 // Every 60 seconds.
        );

        // ... truncated code ...

    }

}

The action="update" will either create or modify the scheduled task with the given name. As such, this CFSchedule tag is idempotent, in that it is safe to run over-and-over again (every time the application gets bootstrapped).

The CFSchedule tag has a lot of options. But, I don't really care about most of the features. I just want it to run my centralized task runner (ie, make a request to the given url) once a minute; and then, I'll let my ColdFusion application handle the rest of the logic. For me, this reduces the amount of "magic"; and leads to better maintainability over time.

To manage the scheduled task state, I'm using a simple database table. In this table, the primary key (id) is the name of the ColdFusion component that will implement the actual task logic. Tasks can either be designated as daily tasks that run once at a given time (ex, 12:00 AM); or, they can be designated as interval tasks that run once every N-minutes.

CREATE TABLE `scheduled_task` (
`id` varchar(50) NOT NULL,
`description` varchar(50) NOT NULL,
`isDailyTask` tinyint unsigned NOT NULL,
`timeOfDay` time NOT NULL,
`intervalInMinutes` int unsigned NOT NULL,
`lastExecutedAt` datetime NOT NULL,
`nextExecutedAt` datetime NOT NULL,
PRIMARY KEY (`id`) USING BTREE,
KEY `byExecutionDate` (`nextExecutedAt`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;

This table can get more robust depending on your application needs. For example, at work, we include a "parameters" column that allows data to be passed from one task execution to another. For Dig Deep Fitness, I don't need this level of robustness (yet).

My single CFSchedule tag is setup to invoke a URL end-point. The end-point does nothing but turn around and call my Task "Workflow" component:

<cfscript>

scheduledTaskWorkflow = request.ioc.get( "lib.workflow.ScheduledTaskWorkflow" );

// ------------------------------------------------------------------------------- //

// ------------------------------------------------------------------------------- //

taskCount = scheduledTaskWorkflow.executeOverdueTasks();

</cfscript>

<cfsavecontent variable="request.template.primaryContent">

<cfoutput>
<p>
Executed tasks: #numberFormat( taskCount )#
</p>
</cfoutput>

</cfsavecontent>


As you can see, this ColdFusion template turns around and calls executeOverdueTasks(). This method looks at the database for overdue tasks; and then invokes each one as a separate HTTP call. Unlike Lucee CFML, which can have nested threads, Adobe ColdFusion cannot have nested threads (as of ACF 2021). As such, in order to allow each separate task to spawn its own child threads (as needed), I need each task to be executed as a top-level ColdFusion page request.

Here's the part of my ScheduledTaskWorkflow.cfc that relates to the centralized ingress / overall task runner:

// ---
// PRIVATE METHODS.
// ---

/**
* Each task is triggered as an individual HTTP request so that it can run in its own
* context and spawn sub-threads if necessary.
*/
private void function makeTaskRequest( required struct task ) {

    // NOTE: We're using a small timeout because we want the tasks to all fire in
    // parallel (as much as possible).
    cfhttp(
        result = "local.results",
        method = "post",
        url = "#scheduledTasks.url#/index.cfm?event=system.tasks.executeTask",
        timeout = 1
        ) {

        cfhttpparam(
            type = "formfield",
            name = "taskID",
            value = task.id
        );
        cfhttpparam(
            type = "formfield",
            name = "password",
            value = scheduledTasks.password
        );

    }

}

The scheduledTaskService.getOverdueTasks() call is just a thin wrapper around a database call to get rows where the nextExecutedAt value is less than now. Each overdue task is then translated into a subsequent CFHttp call to an end-point that executes a specific task. Note that I am passing through a password field in an attempt to secure the task execution.

The end-point for the specific task just turns around and calls back into this workflow:

// ---
// PUBLIC METHODS.
// ---

/**
* I execute any overdue scheduled tasks.
*/
public numeric function executeOverdueTasks() {

    var tasks = scheduledTaskService.getOverdueTasks();

    for ( var task in tasks ) {

        makeTaskRequest( task );

    }

    return( tasks.len() );

}

<cfscript>
scheduledTaskWorkflow = request.ioc.get( "lib.workflow.ScheduledTaskWorkflow" );
// ------------------------------------------------------------------------------- //
// ------------------------------------------------------------------------------- //
param name="form.taskID" type="string";
param name="form.password" type="string";
scheduledTaskWorkflow.executeOverdueTask( form.taskID, form.password );
</cfscript>

<cfsavecontent variable="request.template.primaryContent">
<cfoutput>

<p>
Executed task: #encodeForHtml( form.taskID )#
</p>

</cfoutput>
</cfsavecontent>

Here's the part of my ScheduledTaskWorkflow.cfc that relates to the execution of a single task. Remember that the taskID in this case is the filename for the ColdFusion component that implements the logic. I'm using my dependency injection (DI) framework to access that component dynamically in the Inversion of Control (IoC) container:

ioc.get( "lib.workflow.task.#task.id#" )

The workflow logic is fairly straightforward - I get the task, check to see if it's overdue, execute it, and then update the nextExecutedAt date (depending on whether it's a daily task or an interval task).

component {

    // Define properties for dependency-injection.
    // ... truncated code ...

    // ---
    // PUBLIC METHODS.
    // ---

    /**
    * I execute the overdue scheduled task with the given ID.
    */
    public void function executeOverdueTask(
        required string taskID,
        required string password
        ) {

        if ( compare( password, scheduledTasks.password ) ) {

            throw(
                type = "App.ScheduledTasks.IncorrectPassword",
                message = "Scheduled task invoked with incorrect password."
            );

        }

        var task = scheduledTaskService.getTask( taskID );
        var timestamp = clock.utcNow();

        if ( task.nextExecutedAt > timestamp ) {

            return;

        }

        lock
            name = "ScheduledTaskWorkflow.executeOverdueTask.#task.id#"
            type = "exclusive"
            timeout = 1
            throwOnTimeout = false
            {

            // Every scheduled task must implement an .executeTask() method.
            ioc
                .get( "lib.workflow.task.#task.id#" )
                .executeTask( task )
            ;

            if ( task.isDailyTask ) {

                var lastExecutedAt = clock.utcNow();
                var tomorrow = timestamp.add( "d", 1 );
                var nextExecutedAt = createDateTime(
                    year( tomorrow ),
                    month( tomorrow ),
                    day( tomorrow ),
                    hour( task.timeOfDay ),
                    minute( task.timeOfDay ),
                    second( task.timeOfDay )
                );

            } else {

                var lastExecutedAt = clock.utcNow();
                var nextExecutedAt = lastExecutedAt.add( "n", task.intervalInMinutes );

            }

            scheduledTaskService.updateTask(
                id = task.id,
                lastExecutedAt = lastExecutedAt,
                nextExecutedAt = nextExecutedAt
            );

        } // END: Task lock.

    }

}

And that's all there is to it. Now, whenever I need to add a new scheduled task, I simply:

1. Create a ColdFusion component that implements the logic (via an .executeTask() method).
2. Add a new row to my scheduled_task database table with the execution scheduling properties.

Like I said earlier, I've been using this approach for years and I've always been happy with it. It reduces the "magic" of the scheduled task, and moves as much of the logic into the application where it can be seen, maintained, and included within source control.

From Adobe help

"Provides a programmatic interface to the ColdFusion scheduling engine. Can run a CFML page at scheduled intervals, with the option to write the page output to a static HTML page. This feature enables you to schedule pages that publish data, such as reports, without waiting while a database transaction is performed to populate the page.

ColdFusion does not invoke Application.cfc methods, when invoking a task's event handler methods."

<cfschedule
action = "create|modify|run|update|pause|resume|delete|pauseall|resumeall|list"
task = "task name"
endDate = "date"
endTime = "time"
file = "filename"
interval = "seconds"
operation = "HTTPRequest"
password = "password"
path = "path to file"
port = "port number"
proxyPassword = "password"
proxyPort = "port number"
proxyServer = "host name"
proxyUser = "user name"
publish = "yes|no"
resolveURL = "yes|no"
isDaily = "yes|no"
overwrite = "yes|no"
startDate = "date"
startTime = "time"
url = "URL"
username = "user name"
group = "group1"
oncomplete = "how to handle exception"
eventhandler = "path_to_event_handler"
onException = "refire|pause|invokeHandler"
cronTime = "time"
repeat = "number"
priority = "integer"
exclude = "date|date_range|comma-separated_dates"
onMisfire = ""
cluster = "yes|no"
mode = "server|application"
retryCount = "number">

OR

<cfschedule
action = "create"
task = "task name">

OR

<cfschedule
action = "modify"
task = "task name">

OR

<cfschedule
action = "delete"
task = "task name">

OR

<cfschedule
action = "run"
task = "task name">

OR

<cfschedule
action = "pauseAll"
group = "groupname">

OR

<cfschedule
action = "pauseAll">

OR

<cfschedule
action = "resumeAll"
mode = "server|application">

OR

<cfschedule
action = "resumeAll"
group = "groupname">

OR

<cfschedule
action = "resumeAll">

OR

<cfschedule
action = "list"
mode = "server|application"
result = "res">


Using Scheduler

A great demo of using a CFC


Example Code from CFDocs

Tell ColdFusion to run 'importData.cfm' daily at 7AM

<cfschedule
 action="update"
 task="importMyCSVFileToDB"
 operation="HTTPRequest"
 startDate="5/12/2016"
 startTime="7:00 AM"
 url="http://www.mydomain.com/scheduled/importData.cfm"
 interval="daily" />

Use the cron time format to schedule a task (this one fires every two minutes during the hours 3-10 and 21-23)

<cfschedule
 action="update"
 task="myTaskName"
 cronTime="0 */2 3-10,21-23 * * ?" />



Evolution: Fast or Slow? Lizards Help Resolve a Paradox.


Why does natural selection appear to happen slowly on long timescales and quickly on short ones? A multigenerational study of four lizard species addresses biology’s “paradox of stasis.”

Published by Quanta Magazine

Written by Carrie Arnold
Contributing Writer

January 2, 2024


The green anole (Anolis carolinensis), native to the United States, was one of four key lizard species in a recent study on stabilizing selection.

---

James Stroud had a problem. The evolutionary biologist had spent several years studying lizards on a small island in Miami. These Anolis lizards had looked the same for millennia; they had apparently evolved very little in all that time. Logic told Stroud that if evolution had favored the same traits over millions of years, then he should expect to see little to no change over a single generation.

Except that’s not what he found. Instead of stability, Stroud saw variability. One season, shorter-legged anoles survived better than the others. The next season, those with larger heads might have an advantage.

“I was confused. I didn’t know what was going on. I thought I was doing something wrong,” said Stroud, who was then completing a postdoc at Washington University in St. Louis. “Then it suddenly all fell into place and started to make sense.”

His data reflected a paradox that had stymied biologists for years. In the long term, the anoles had traits that appeared to stay the same, a phenomenon called stasis — presumably caused by stabilizing selection, a process which favors moderate traits. However, over the short term, the lizards showed variation, with fluctuating traits. Stroud’s data was better explained by directional selection, which sometimes favors extreme traits that lead evolution in a new direction, and other times doesn’t appear to favor anything in particular.

Because he had followed four species for three generations, he was able to show that a long-term pattern of stasis could emerge from such short-term fluctuating selection.

“There’s lots of noise, but overall, it leads to fairly stable patterns,” said Stroud, who now runs his own lab at the Georgia Institute of Technology. The study was recently published in the Proceedings of the National Academy of Sciences.

Stroud and his colleagues’ work explains how short-term variability can lead to long-term stability, said Arthur Porto, an evolutionary biologist at the Florida Museum of Natural History who was not involved in the new research.


At Miami’s Fairchild Tropical Botanic Garden live Anolis lizards, which occupy distinct ecological niches on trees. Clockwise from top left: The bark anole (Anolis distichus), a trunk specialist; the green anole (Anolis carolinensis), a low-canopy specialist; the brown anole (Anolis sagrei), a ground specialist; and the knight anole (Anolis equestris), a high-canopy specialist.

---

“It demonstrates that we can obtain a pattern that resembles stabilizing selection, even when no stabilizing selection appears at a per-generation timescale,” Porto said. The findings help resolve what some frustrated biologists call “the paradox of stasis.”

Evolution’s Steady Hand?

When early evolutionary theorists conceived of natural selection, they reckoned that the evolutionary process works gradually over vast epochs. Species don’t evolve overnight; they largely stay the same and accumulate changes over many generations. In 1859, Charles Darwin wrote: “We see nothing of these slow changes in progress, until the hand of time has marked the long lapse of ages.”

Early observations of the fossil record supported this idea. Often, paleontologists uncovered evidence that a species could remain stagnant over millions of years, only changing when forced to adapt to some dramatic environmental shift. Most of the time, though, the process of evolution seemed achingly slow, the biological equivalent of watching paint dry.


Biologists explained this inertia as the product of stabilizing selection, in which average or intermediate traits are consistently favored over more extreme ones. Even small shifts away from “average” would be accompanied by a steep drop in survival or fertility.

A classic example of stabilizing selection comes from historic records of human birth weights, said Jonathan Losos, an evolutionary biologist at Washington University in St. Louis and Stroud’s research adviser. Compilations of birth-weight data in the mid-20th century showed that babies of average weight survived more often than those that were heavier or lighter than average.

“Long-term stasis seems to suggest stabilizing selection,” Losos said. “It’s the most favored explanation.”

It wasn’t until the early 1980s that scientists developed methods that could test this idea. In 1983, the biologists Russell Lande and Stevan Arnold brought advanced statistics to evolutionary field studies, showing in a landmark Evolution paper how researchers could measure the impact of natural selection within a single generation. The approach, which quantified selection on groups of correlated traits, required biological data sets that were very large, especially by the standards of the 1980s. Still, it was the first statistical framework to show researchers how to measure different kinds of natural selection, including stabilizing selection, on multiple traits, said Christopher Martin, an evolutionary biologist at the University of California, Berkeley.




For his multiyear study on stabilizing selection, James Stroud captured 1,692 individuals of four Anolis lizard species using a tiny lasso (left), then transported each in a plastic bag (bottom right) to a field station (top right) to collect data on traits, including weight, leg length and head size.

---

Evolutionary biologists rapidly adopted the approach. Princeton University’s Rosemary and Peter Grant used the method in their celebrated studies of Darwin’s finches on the island of Daphne Major in the Galápagos. Their study, which began in 1973 and continues to this day, followed a population of the medium ground finch (Geospiza fortis) through a severe drought that began in 1977. That’s when the plants of Daphne Major stopped producing the small seeds on which the birds relied; only thick seeds remained.

With little food, the finch population plummeted from 1,400 individuals to a few hundred in only two years. Then the Grants watched the population recover while taking careful measurements of the birds’ traits. The birds that survived, they found, had larger beaks suited to the larger seeds: The average beak depth had increased from 9.2 mm to 9.9 mm — a change of more than 7%.

All told, a shift in annual rainfall had rapidly resulted in a change in the birds’ beaks. The Grants’ work became a classic example of evolution in action. They had identified marked, if often subtle, evidence of the directional push and pull of evolution acting on traits. And they weren’t alone: Once researchers had the statistical tools to watch evolution unfold, it seemed that everywhere they looked, they could see natural selection acting within very short intervals.

Such studies challenged the idea that evolution proceeded through slow, imperceptible changes over vast time spans, said Matt Pennell, an evolutionary biologist at the University of Southern California. Change could — and did — happen quickly.


Therein lay the problem. With enough time, even the tiniest tugs should yield a measurable shift in an organism’s observable characteristics. If the beak-size changes the Grants observed continued over millennia, back-of-the-envelope calculations predicted some extreme phenomena, Pennell said. “You’d expect finches that were, like, 40 kilograms. This just makes no sense.”

What’s more, as the evidence for directional selection piled up, little proof emerged on the side of stabilizing selection. The fossil record clearly showed stasis in traits over time. But with their new statistical tools, evolutionary biologists couldn’t find evidence for a mechanism that would produce stasis.

The evidence for both short-term modification and long-term stability was sound. What biologists couldn’t figure out was how to link the two phenomena in a way that could resolve this paradox of stasis.

An explanation, it turned out, was waiting among the trees of South Florida.

An Anole Oasis

The turquoise waters and white sands of the Caribbean aren’t paradise for just humans. Anole lizards have also found these tropical isles to be idyllic havens. The lizards have spread across the Caribbean through a process called adaptive radiation. When a species of anole arrived on a new island, it rapidly evolved into several new species, each of which took advantage of a different habitat.

“There seems to be a mismatch between microevolutionary processes and what’s going on with longer timescales,” said Kjetil Lysne Voje, an evolutionary biologist at the Natural History Museum at the University of Oslo.


The first Anolis lizard to arrive on a Caribbean island evolved, over millions of years, into multiple species that fill the same ecological niches, including high canopy (left), low canopy (middle), ground (right) and trunk. The lizards on a given island are more closely related to each other than to their respective ecomorphs on other islands.

---

Over and over again, on island after island, the anoles evolved to fill different niches, gaining characteristic sets of traits to help their survival in their preferred habitat. One species kept long legs — ideal for sprinting — and small, sticky toe pads more often planted on terra firma. Three others scampered up tree trunks: a small-bodied species that preferred the lower half of the trunk, one that ventured into the low canopy on large toe pads, and one that favored the high canopy, evolving short limbs to expertly navigate thin branches.

After that initial burst of evolution, the lizards remained virtually identical over millions of years. And that’s how Losos found them when he began studying the reptiles in the 1980s.

“The different types seem to have evolved a long time ago, and then stuck there,” Losos said. “Presumably they’ve been like that ever since.”

The anoles’ ability to colonize new land made them well suited to becoming invasive species. In Florida, the native North American green anole (Anolis carolinensis) has lived high up on tree trunks, consuming arboreal insects in the low canopy, for millions of years. Over the past century, however, other anoles have arrived in the state from Cuba, Hispaniola and the Bahamas. The brown anole (Anolis sagrei) dwells on lowermost tree trunks, using its long legs to jump onto the ground to hunt insects. The small-bodied bark anole (Anolis distichus) eats ants crawling along trunks, while the larger knight anole (Anolis equestris) pursues insects and fruit in the upper canopy. Each species had already adapted to its specific niche before arriving in Miami. Their ecology persisted in their new home.

As a lizard enthusiast, Stroud wanted to study his adopted city’s herpetological smorgasbord. To conduct a long-term field study, however, he would need to track the anoles over time. The high mobility of the lizards posed a major problem. If he lost track of an individual, he wouldn’t know whether it had moved out of the area or died. Just as frustrating, he wouldn’t be able to tell if new arrivals were the offspring of existing lizards or new immigrants.


After scouring the city for sites, he realized that the location of Miami’s Fairchild Tropical Botanic Garden made it an ideal study spot because the anoles were effectively trapped on the ersatz island. He could be confident that no lizards had arrived or left.

Stroud’s goal was to measure natural selection operating over several generations in multiple species. He wanted to “catch lots of lizards and measure them and see if their survival told us anything about how evolution occurs in the wild,” he said.

He spent three years taking a variety of measures of body shape and size from the four anoles that call the botanic garden home — 1,692 individuals in total. To gather thousands of data points on leg length, head size and overall survival, Stroud had to capture each lizard using a tiny lasso and then set to work with calipers before injecting a tiny microchip under its skin. The microchip ensured that he could keep track of each individual anole. If he couldn’t detect a tracker, he knew the anole had likely died.

“This type of work is hard enough to do in one species. So to execute a project like this in four species is really exceptional,” said Jill Anderson, an evolutionary biologist at the University of Georgia who was not involved in the research.

When Stroud began analyzing his data, however, he ran smack into the paradox of stasis.

Stasis in the Noise

From the beginning of the project, Stroud and his colleagues were interested in stabilizing selection. They wanted to see if the forces of natural selection continually pushed and pulled the lizards’ traits to keep them centered on the same point. That the anoles had shown little evolutionary change over millions of years indicated that they were on some sort of evolutionary peak, and Stroud wanted to see what factors kept them there.

However, his years of data didn’t show stability at all. Instead, he saw evolution constantly shifting the traits that were best adapted to the environment. “If we look at any one period on its own, we very rarely see stabilizing selection,” Stroud said.




James Stroud and his colleagues took detailed data on many traits, including toe-pad size. To measure the lizards’ tiny toe pads, their feet were pressed onto clear plastic (left) and the toe pads were photographed from the other side (right). Measurements were then derived from the photographs.

---

Over time, however, that variability averaged out into stasis. Even if traits wobbled off their optimal, moderate peak from one generation to the next, there was a net effect of stabilization — ultimately leading to little change over the multiple generations.
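The mechanism described above — noticeable directional nudges every generation that nonetheless average out under a weak net pull toward an optimum — can be illustrated with a toy simulation. All numbers and names here are hypothetical, chosen only to show the idea, and are not taken from Stroud's study:

```python
import random

def simulate_trait(generations=300, optimum=10.0, noise_sd=0.3,
                   pull=0.2, seed=1):
    """Toy model: each generation applies a directional nudge of random
    sign and size (fluctuating selection) plus a weak restoring pull
    toward the optimum (net stabilization)."""
    random.seed(seed)
    trait = optimum
    step_sizes = []
    for _ in range(generations):
        directional = random.gauss(0.0, noise_sd)   # fluctuating direction
        restoring = -pull * (trait - optimum)       # weak net stabilization
        trait += directional + restoring
        step_sizes.append(abs(directional))
    return trait, sum(step_sizes) / generations

final_trait, mean_step = simulate_trait()
# Each generation the trait moves by a noticeable amount (mean_step is
# well above zero), yet after 300 generations it stays near the optimum.
```

Sampled generation-by-generation, this process looks like fluctuating directional selection; averaged over the full run, it looks like stasis.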

Experts who reviewed Stroud and his team’s data were impressed by its thoroughness and its ability to resolve the seeming paradox. “The data are more beautiful than anyone could reasonably even hope doing a study like this,” Martin said.

Anderson said that Stroud’s “super cool” work was able to address one of biology’s biggest mysteries because of his thoughtful and rigorous study design. Only with many years’ worth of data, she said, could Stroud see how stasis could potentially emerge out of such variability.

Voje also offered praise: “This is an excellent example of work that ties some of these observations together,” he said.

Jeffrey Conner, a botanist and evolutionary biologist at Michigan State University, agreed that the conceptual framework Stroud developed can explain stabilizing selection. However, he said that the variability in directional selection Stroud identified was fairly minimal.


Still, recent research from other labs also helps to support Stroud’s results. A study published in Evolution in September 2023 from the lab of Andrew Hendry, an eco-evolutionary biologist at McGill University, studied evolutionary changes in a community of finches on the Galápagos island of Santa Cruz over 17 years. There, too, Hendry found evidence of natural selection’s regular tug of war on traits that was embedded within a “remarkable stability,” he said, of the finches over evolutionary time.

To Hendry, the paradox of stasis was never a paradox at all. The issue, he said, was that biologists assumed that long-term stasis was the result of short-term stability. Throw out that assumption, and the paradox disappears. “The paradox is illusory,” he said. “Evolutionary biologists like to come up with things and call them paradoxes.”

Think of it more like the Mississippi River before it was engineered, he explained. It rapidly shifted course in small areas over short periods, and yet for tens of millions of years the river’s overall journey led to the Gulf of Mexico. Similarly, a lizard population’s traits can vary over the short term and stay stable over the long haul.

Still, three years — or 17 — are a drop in the bucket of evolutionary time. Fully resolving the paradox will require scientists to study time spans between macro- and microevolution, Porto said — on the scale of tens, hundreds or thousands of years. They need to find a sweet spot that is long enough to allow both change and stasis to emerge, he said, although at the moment biologists don’t have a long enough data set to draw from.

That’s why long-term field studies in ecology and evolutionary biology are increasingly critical, Stroud said. Without returning to his study site again and again over a period of years, he never would have obtained enough data to address one of the key hypotheses of evolutionary biology.

Mikes Notes

If Pipi is fully emergent, then in response to its environment, it should show

  • Short-term variability (noise)
  • Long-term stasis (stability)

Predictive Coding

Mikes Notes

Alex Shkotin kindly shared two links to websites with me last night.
https://www.verses.ai/research-development-roadmap, which looks close to CAS.

An interesting discussion on the Ontolog Forum https://groups.google.com/g/ontolog-forum/c/SWoioGgyx3g/m/Ut2DOEcXAQAJ

From the Verses AI website

"Biological agents are efficient, curious, self-organizing systems that anticipate the effects of their actions on the world while smoothly coping with noise and uncertainty. The human brain, while an essential source of inspiration for AI, is only one manifestation of such capacities, which characterize intelligence in nature at many scales. 

The hypothesis guiding research and development at VERSES AI is that artificial general intelligence (AGI) can be attained by discovering the deeper principles underlying biological intelligence and deploying them as design principles to construct cyber-physical ecosystems of intelligent agents in which humans are integral participants — what we call “shared intelligence”.

We originally laid out this vision for the present and future of AI in our white paper at the end of 2022. ..."

Ontolog Forum

The discussion on the Ontolog Forum was started by John Sowa and concerned Dr. Karl Friston, Chief Scientist at Verses AI.




"After a bit of searching, I found more info about Verses AI and their new chief scientist.  I like the approach they're taking:  putting more emphasis on natural thinking process in neuroscience.  And their new chief scientist has publications that would lead them in that direction.  The ideas look good, and I would recommend them.  But I don't know how far he and his colleagues have gone in implementing them, or how long it will take for anything along those lines to be running in a practical system.

However, it's unlikely that any company would hire somebody as chief scientist without a considerable amount of prior work.  And I doubt that any company would make an announcement in a full-page ad in the New York Times unless they already had some kind of prototype. ..."

Mikes Notes

The approach by Verses is broadly correct.

This is a fascinating article from Verses
https://www.verses.ai/blogs/executive-summary-designing-ecosystems-of-intelligence-from-first-principles

It's the same approach I used to successfully build Pipi 9. The main difference is that Pipi is software running on a server containing hundreds of interacting agents, causing emergent and adaptive properties. Pipi also sets out to provide a Complex Adaptive System (CAS) as a SaaS platform to host SaaS CAS applications.

I did this through curiosity-led experimentation inspired by the computer modelling of biological cells, whereas the approach by Verses is led by a research-based theory of the brain. So I found the reading below a possible insight into why my experiments worked. I need to do some reading to understand the theory better.

From the forum

A recent book (2022) from MIT Press with a foreword by Friston covers the field: "Active Inference: The Free Energy Principle in Mind, Brain, and Behavior." Chapters of that book can be downloaded for free. Appendix C has an annotated example of the MATLAB code.

From Wikipedia

Karl John Friston FRS FMedSci FRSB (born 12 July 1959) is a British neuroscientist and theoretician at University College London. He is an authority on brain imaging and theoretical neuroscience, especially the use of physics-inspired statistical methods to model neuroimaging data and other random dynamical systems. 

Friston is a key architect of the free energy principle and active inference. In imaging neuroscience, he is best known for statistical parametric mapping and dynamic causal modelling.

In October 2022, he joined VERSES Inc, a California-based cognitive computing company focusing on artificial intelligence designed using the principles of active inference, as Chief Scientist.



Friston is one of the most highly cited living scientists and in 2016 was ranked No. 1 by Semantic Scholar in the list of top 10 most influential neuroscientists.

In the discussion, Dan Brickley of W3C, Dublin Core, Schema.org and Google Research shared this GitHub repository maintained by Beren Millidge, a postdoctoral researcher in machine learning and computational neuroscience at the University of Oxford, unravelling intelligence in both brains and machines.

Predictive Coding Paper Repository

This repository provides a list of papers that are interesting or influential about Predictive Coding. If you believe I have missed any papers, please contact me at beren@millidge.name or make a pull request with the information about the paper. I will be happy to include it.

Predictive Coding

Predictive Coding is a neurophysiologically-grounded theory of perception and learning in the brain. The core idea is that the brain always maintains a prediction of the expected state of the world, and that this prediction is then compared against the true sensory data. Where this prediction is wrong, prediction errors are generated and propagated throughout the brain. The brain's 'task' then is simply to minimize prediction errors.

The key distinction of this theory is that it proposes that prediction errors, rather than predictions or direct representations of sense-data, are in some sense the core computational primitive in the brain.

Predictive coding originated in studies of ganglion cells in the retina, in light of theories in signal processing about how it is much more efficient to send only 'different' or 'unpredicted' signals than to repeat the whole signal every time -- see delta-encoding.
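The signal-processing intuition can be made concrete with a minimal delta-encoding sketch (plain Python; purely illustrative, and no claim about how the retina actually implements this):

```python
def delta_encode(signal):
    """Send the first value, then only the change from the previous value."""
    deltas = [signal[0]]
    for prev, cur in zip(signal, signal[1:]):
        deltas.append(cur - prev)
    return deltas

def delta_decode(deltas):
    """Reconstruct the original signal by accumulating the changes."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

# A slowly varying signal yields mostly zero (cheap-to-send) deltas.
signal = [7, 7, 7, 8, 8, 8, 8, 9]
encoded = delta_encode(signal)        # -> [7, 0, 0, 1, 0, 0, 0, 1]
assert delta_decode(encoded) == signal
```

Only the surprising values carry information; a perfectly predicted signal costs almost nothing to transmit, which is the efficiency argument behind prediction-error coding.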

Predictive coding has several potentially neurobiologically plausible process theories proposed for it -- see the 'Process Theories' section -- although the empirical evidence for precise prediction-error minimization in the brain is mixed.

Predictive coding has also been extended in several ways. It can be understood as a variational inference algorithm under a Gaussian generative model and variational distribution. It can be set up as an autoencoder (predicting its input, or the next state), or else in a supervised learning fashion.
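Under those Gaussian assumptions, inference reduces to gradient descent on squared prediction errors. A one-latent, one-observation sketch with unit precisions (the function name and parameters are mine, purely illustrative, not from the repository):

```python
def infer_latent(x, prior_mu, weight=1.0, lr=0.1, steps=200):
    """Single-layer predictive coding inference (scalar, unit precisions).
    The latent estimate mu is nudged to shrink both the sensory
    prediction error (x - weight * mu) and the prior error (mu - prior_mu)."""
    mu = prior_mu
    for _ in range(steps):
        sensory_error = x - weight * mu   # bottom-up prediction error
        prior_error = mu - prior_mu       # top-down prior error
        mu += lr * (weight * sensory_error - prior_error)
    return mu

# With equal (unit) precisions the fixed point is the average of prior
# and data -- the Gaussian posterior mean.
mu = infer_latent(x=2.0, prior_mu=0.0)   # converges toward 1.0
```

Unequal precisions would weight the two error terms differently, which is the standard route to the precision-weighting (attention) story mentioned below.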

Predictive coding can also be extended to a hierarchical model of multiple predictive coding layers -- as in the brain -- as well as to 'generalised coordinates', which explicitly model the higher-order derivatives of a state in order to model dynamical systems.

More recent work has also focused on the relationship between predictive coding and the backpropagation-of-error algorithm in machine learning, where, under certain assumptions, predictive coding can approximate this fundamental algorithm in a biologically plausible fashion, although the exact details and conditions still need to be worked out.

There has also been much exciting work trying to merge predictive coding with machine learning to produce highly performant predictive-coding-inspired architectures.

Surveys and Tutorials

This is a great review which introduces the basics of predictive coding and its interpretation as variational inference. It also contains sample MATLAB code that implements a simple predictive coding network. I would start here.

This review walks through the mathematical framework and potential neural implementations in predictive coding, and also covers much recent work on the relationship between predictive coding and machine learning.

This is a fantastic review which presents a complete walkthrough of the mathematical basis of the Free Energy Principle and Variational Inference, and derives predictive coding and (continuous time and state) active inference. It also presents the 'full-construct' predictive coding, including hierarchical layers and generalised coordinates, in an accessible fashion. I would recommend reading this after Bogacz's tutorial (although be prepared -- it is a long and serious read).

A short and concise review of predictive coding algorithms up to 2017.

A nice review of simple predictive coding architectures with a focus on their potential implementation in the brain.

Classics

A key influential early paper proposing predictive coding as a general theory of cortical function.

One of the earliest works proposing predictive coding in the retina.

An early but complete description of predictive coding as an application of the FEP and variational inference under Gaussian and Laplace assumptions. Also surprisingly readable. This is core reading on predictive coding and the FEP.

The first paper establishing the links between predictive coding and variational inference.

Makes a conjectured link between precision in predictive coding and attention in the brain.

Presents the 'full-construct' predictive coding model with both hierarchies and generalised coordinates.

Extends predictive coding to generalised coordinates, and derives the necessary inference algorithms for working with them -- i.e. DEM, dynamic expectation maximisation.

Foundational treatment of variational inference for dynamical systems, as represented in generalised coordinates. Also relates variational filtering to other non-variational schemes like particle filtering and Kalman filtering.

Andy's book is great for a high-level overview, strong intuition pumps for understanding the theory, and a fantastic review of potential evidence and neuropsychiatric applications.

Neurobiological Process Theories

A key process theory paper, proposing perhaps the default implementation of predictive coding in cortical layers.

Demonstrates that predictive coding is equivalent to popular biased competition models of neural function.

A process theory of predictive coding including action predictions which implement active inference (continuous version).

A great review delving deep into the evidence for predictive coding being implemented in the brain. Evidence is currently somewhat lacking, although the flexibility of the predictive coding framework allows it to encompass a lot of the findings here.

Neuroscience applications

Relationship to Backpropagation

PC-inspired machine learning

Extensions and Developments

This paper investigates how several biologically implausible aspects of the standard predictive coding algorithm -- namely, requiring symmetric forward and backward weights, nonlinear derivatives, and one-to-one error unit connections -- can be relaxed without unduly harming performance of the network.

This paper looks further at how various implausibilities of the predictive coding algorithm can be relaxed, focusing especially on the question of how negative prediction errors could be represented, and introduces a divisive prediction-error scheme -- where prediction errors are the activities divided by the predictions.