Live 4K video of Earth and space: 24/7 Livestream of Earth by Sen’s 4K
video cameras on the ISS
By: Sen
Sen: 25/04/2026
Sen’s vision is to democratise space through video to inform, educate,
inspire, and benefit all of humanity. Sen’s mission is to stream real-time
video from space to billions of people, gathering news and information
about Earth and space and making the data universally accessible and
useful.
Watch Earth Live and in 4K from Sen's video cameras on the International
Space Station, downlinked via NASA. This is the world’s first continuous 4K
livestream from space, empowering you to see our planet like astronauts
do.
"Intention is what we need": Neeru Khosla on the Future of Education and
Learning with AI
By: Ksenia Se
Turing Post: 18/04/2026
Over two decades in tech, with the last seven years focused on ML and AI. Our analysis stays precise and grounded. Our educational series walk you through the foundations and help you explore the deeper layers. We trace the arc of AI – its past, its present, and the direction it’s pulling us toward. We track the research that matters, the systems being built, and the ideas that define how AI actually works. And we break it down with clarity, so you can make better decisions. Join 100,000+ professionals who rely on Turing Post.
Neeru Khosla is the co-founder of CK-12 Foundation, a nonprofit focused on free, adaptive learning materials for students and teachers. In this interview, she explains why AI in education matters less as an “answer machine” and more as a way to understand how students think: what misconceptions they hold, where they are confused, and what kind of support can help them learn more deeply. CK-12’s AI tutor, Flexi, is built around that idea, using AI to support personalized learning, knowledge tracing, and student curiosity at scale.
How the CK-12 co-founder thinks about using AI to rethink, personalize, and
gradually change the education system
“Attention is what taught the machines. But now, in this AI world, we need
to have intention and judgment. Intention is what we need.”
I loved that conversation with Neeru Khosla, co-founder of the CK-12
Foundation, a nonprofit focused on free, adaptive learning materials for
students and teachers. She articulated one of the most important shifts now
unfolding in education. For years, digital learning has focused on access:
more content, more devices, more formats, more scale. But Neeru’s point is
that the real shift brought by AI is the possibility of understanding how a
student is actually thinking – where they are confused, what misconceptions
they hold, what question they are really trying to ask, and what kind of
help might move them forward.
Khosla has been working on this problem for nearly two decades. CK-12
started with an idea to make high-quality learning materials free, flexible,
and adaptable to the needs of individual students. Long before the current
AI boom, the foundation was already building multimodal learning tools,
adaptive systems, and concept-based content designed to meet learners where
they are. But as Neeru describes it, the missing layer was always insight.
Schools could see whether a student got something right or wrong. They could
not easily see why.
That is where the last three years changed everything. In Neeru’s view, AI
is not most interesting as an answer machine.
“Just knowing that the answer is right or wrong doesn’t tell anybody
anything.”
It is most useful when it helps surface the learner’s internal process. A
question like “How does the sun burn if there’s no oxygen in space?” is not
just a cute moment of curiosity – it reveals a specific misunderstanding
that a good tutor can work with. This is the logic behind Flexi, CK-12’s AI
tutor, which Neeru says has already been used by tens of millions of
students and has processed more than 150 million questions. No one needs
automation for its own sake. New tools give us the ability to trace learning
more deeply than standardized testing ever could.
But there is a lot of fear in the system. Our conversation moves well
beyond product and platform design. We talk about why schools often resist
free tools, why they now resist AI tools, why teachers need open-mindedness
more than technical fluency, why AI literacy may become as foundational as
reading and writing, and why creativity, critical thinking, collaboration,
communication, and community remain the real non-negotiables. Khosla also
makes a broader argument: education is one of the few places where society
cannot afford to treat access as optional. If AI is going to matter in the
classroom, it has to help more children learn more deeply – not just help
the most privileged students move faster. She argues that if the US wants to
maintain its lead, it needs to rethink education and how it is
delivered.
There is also something personal and unusually grounded in the way she
talks about all this. Neeru came to education from molecular biology, then
spent years in classrooms watching children learn one by one before building
CK-12. That background gives her a practical lens on both technology and
human development. She is enthusiastic about AI, but not dazzled by it.
Again and again, she returns to the same principle: tools matter, but what
matters more is whether they help us show up – for students, for teachers,
and for one another.
Neeru believes in two AIs: Augmented Intelligence and Amplified Intention,
and I agree with her 100 percent.
As I keep saying, AI literacy is extremely important, and this conversation
is a great way to learn how to move from fear to understanding. Please watch
it and share it →
YouTube video by Turing Post
AI Could Change Education Forever – Neeru Khosla Explains Why
Edited interview transcript
Ksenia:
Today I’m very honored to speak with Neeru Khosla. She is an entrepreneur
and philanthropist known for her work in education and social impact. She is
the co-founder of CK-12 Foundation, which provides free, customizable
learning materials to students and teachers worldwide.
I’m very passionate about education, especially in this new era of AI, so
thank you so much for joining me.
Neeru:
Thank you for having me.
Ksenia:
This podcast is called Inference – because we’re trying to make sense of
what’s happening and draw conclusions from it.
Neeru:
Thank you for inviting me to Inference. I love talking to people about what
I care deeply about, because education is one of the most important things
we can give back to society. We can create all kinds of financial freedom,
but that can only happen if we truly help young students understand what
they need to know – and how to use that knowledge.
Education in the AI Era
Ksenia:
The last three years have been pretty disruptive for many industries. How
has that changed your perspective on education and on the CK-12
Foundation?
Neeru:
Let me go back a little.
When we started CK-12, we were trying to answer a simple question: how
might we make education available and meaningful for all children? At the
time, textbooks were very one-sided. You got what you got. But true learning
only happens when students can make their own connections to what they are
learning.
As technology began to evolve in the early 2000s, I knew it was going to go
far. I thought this would be one way to reach many more students and help
them. So from the beginning, we made CK-12 free.
Technology gave us capabilities that learning really needs – especially
multimodality. You cannot always learn effectively just by reading a
textbook or just by watching a video. A concept has to be presented in the
modality that works best for the learner.
Take physics, for example. If you are learning about electricity, reading
in a textbook about how electrons move doesn’t really make it come alive.
Now we have simulations. We have ways for students to actually see and
interact with what they’re learning.
That was powerful. But even then, we still didn’t really know what was
going on in the minds of students. We didn’t know whether they truly
understood what we were trying to teach.
It was only with the arrival of AI in the last three years that we realized
there was a real opportunity there. It wasn’t perfect – and no technology
ever arrives 100 percent ready. Technology matures through iteration, and
that requires all of us to participate.
When OpenAI suddenly reached enormous adoption in a very short time, it was
clear something major was happening. A lot of people were afraid: students
will cheat, machines will replace us, and so on. I was afraid as well. But I
also noticed something important: prompting itself is a learning tool.
Ksenia:
Absolutely. It’s the skill of asking questions.
Neeru:
Exactly. That’s why I kept telling people not to be afraid. If students can
prompt a machine in a way that gets them where they want to go, they are
thinking. They are thinking about concepts, about application, about whether
something sounds right or wrong, about whether they want to go deeper.
They may not describe it that way, but they do know when an answer doesn’t
sound right. Then they keep prompting. If we can teach children to question
– to keep asking why, the five whys – that itself is a huge learning
process.
So over the last three years, we built on top of everything we had already
done in learning science, concept maps, and knowledge connections. Two and a
half years ago, we introduced Flexi, our student tutor. Today we have over
50 million students using it.
Ksenia:
Wow. And how are the results?
Neeru:
We’ve had over 150 million questions asked.
From the beginning, I told the team that we had to categorize and tag the
questions students were asking. Many are procedural: help me solve this,
define this, help me understand this. Those are fine.
But the questions that really matter are the ones that reveal thinking. For
example: How does the sun burn when there’s no oxygen in space? That’s a
wonderful question from a young learner. You immediately see the
misconception: the child is assuming the sun “burns” through combustion,
when in fact it’s nuclear fusion.
Those are the moments where you can really understand what a student is
thinking and help them deeply.
That is what we built: a platform that connects concepts and content. Our
FlexBooks were always designed to be flexible and personalized – to each
learner’s level, language, standards, and classroom needs. But until AI, we
couldn’t really see inside the learning process at this level.
And that changes everything. Standardized testing only tells you whether an
answer was right or wrong. It doesn’t tell you why a learner got something
wrong. Now we can do deep knowledge tracing and understand where the
misunderstanding happened.
Do We Need to Rethink Education From the Ground Up?
Ksenia:
I have five children, and they’re still in public school. I can see how
early we still are with AI because public schools just can’t catch up that
fast. Do we need to rethink the whole education system from the ground
up?
Neeru:
When I started, I actually avoided trying to get adoption directly in
public schools. Partly because they didn’t want to adopt something free. We
are still free, almost 20 years later, and we intend to remain free.
What surprised me early on was that schools, districts, and states had
money – but they didn’t want to lose the systems around that money. That’s
the legacy system we were dealing with. So yes, politics is part of it, but
also incentives.
We took more of a Trojan horse approach. We built adaptive systems very
early on so we could tell where students were missing things and support
them in ways that were individualized.
But the reality is that schools still define success primarily through
standardized testing. That is deeply embedded in the thinking of teachers,
schools, districts, and systems.
So we started working not only with teachers but more directly with
students as well – because I felt that maybe we had to disrupt the system
from the learner side. But changing an entire system is hard. You have to
move carefully. You take one step, stabilize, then take the next.
What has changed in the last three years is that AI finally gives us tools
that make deeper change more possible. We now have a teacher assistant that
shows educators where each student is and how they are likely to progress
into the next concept. That kind of support can help create systemic
change.
Ksenia:
Do you follow what teachers are asking for? What they want from AI?
Neeru:
Yes, very much. We work with teachers all the time, and we get many
requests. But teachers are very different from one another. Some can imagine
what’s possible and are ready to move. Others are afraid, tired, or not
ready to take risks. So you have to work with many different profiles.
That’s one reason we’ve been around this long. Change in education takes a
long-term view.
If you ask me whether the whole system can change, I would say yes – but
not overnight. I don’t think it will take another 25 years. I think within
five years we’ll start to see deeper systemic change. But we have to change
the system.
Ksenia:
At the systemic level.
Neeru:
Yes. There is still a lot of fear. But if people can see that the change is
real and not scary, adoption can happen faster.
And of course there will always be bad actors. But that’s true of any
technology. When we created content, we had domain experts and human review.
We constantly checked quality – what the systems were saying, how students
were interacting with them. That quality control still matters. Now
automation can help, but responsibility still matters.
What Skills Matter Most Now?
Ksenia:
What skills do you think are essential in this AI world – for teachers and
for kids?
Neeru:
Teachers need open minds. They have to be willing to think about what’s
possible.
Every time a new tool appears, people panic. When calculators arrived,
people said students would never learn math. When the internet arrived,
people said students would just cheat. The same story continues.
Of course students still need to know how to think, compute, read, and
reason. But some tasks don’t need to be done by hand forever. That doesn’t
make the underlying understanding less important – it makes it more
important.
Teachers need to understand how these systems work. Right now, many of them
don’t. But they are smart. They will learn.
Students, meanwhile, still need literacy and numeracy. They need to write.
They need to understand language. They need math. What’s interesting now is
that language has become even more powerful. We once thought English majors
might not be especially practical. Now prompt quality and language fluency
are central to how people use AI.
We also need to help students connect their interests to deeper knowledge.
If a child says, “I want to be a stage designer, so I don’t need math or
science,” we can show them that they absolutely do – through lighting,
structure, area, form, efficiency. Creativity and technical understanding
are not opposites.
In the end, students need creativity, critical thinking, collaboration,
communication, and community.
What EdTech Often Gets Wrong
Ksenia:
What do you think companies offering AI in education misunderstand about
kids and teachers?
Neeru:
One big misconception is the belief that AI is enough by itself – that you
can just ask a question and get what you need.
What systems like large language models do is generate the most likely fit
probabilistically. But education has non-negotiables. Learning has
structure. Development matters. Misconceptions matter. Prior knowledge
matters. Current AI systems are not ready to handle all of that on their
own.
That’s why I think every vertical needs to be treated seriously in its own
terms. Education is not the same as medicine. These are different domains
with different responsibilities. You cannot simply take a general model and
assume that is enough.
Ksenia:
And when you say verticals, you mean domains like education and
medicine?
Neeru:
Exactly. Those are different verticals built on top of these foundational
models. And I don’t think we even fully know what AGI means yet, in
practical terms.
Kids, Curiosity, and Learning by Doing
Ksenia:
One thing I keep thinking is that AI is becoming as important as reading
and writing – a new literacy. And maybe we should also follow kids a little
more, because they approach technology with imagination and curiosity.
Neeru:
I saw that even when my own children were in middle school. Kids have
always had their own social graph – their own way of learning from each
other, teaching each other, figuring out who knows what.
I spent more than 15 years deeply involved in education. Originally I was a
molecular biologist doing research at Stanford on oncogenes. Then I became
pregnant with my first child and decided I couldn’t stay around
radioactivity. Later, after having four children, I started thinking
seriously about education.
I found what I thought was the best school for my children – a very
child-centered school. I got deeply involved. I spent about ten years in
classrooms, watching what teachers and students were doing.
Sometimes I would sit down next to a child who was off in a corner alone
and just ask, “What’s going on?” And often that child didn’t need someone to
do the work for them. They just needed someone to say: “You’re doing fine.
Keep going. Let me help if you’re stuck.”
That, to me, was powerful. I thought: why can’t every child have
that?
That was really the genesis of CK-12.
Ksenia:
And what does CK-12 stand for?
Neeru:
K-12 is obvious. But the “C” stands for every child – and also for the core
C’s: creativity, critical thinking, collaboration, communication. Those are
non-negotiables for deep learning.
Ksenia:
That’s interesting, because those skills were always important – but now
they feel even more essential.
Neeru:
Exactly.
Why Free Is Hard in Education
Ksenia:
You said something interesting earlier – that because CK-12 was free,
schools had trouble adopting it. That’s such a paradox. Free is so
important, but it’s also hard to make work in education.
Neeru:
Yes. Most free products don’t last. We’ve lasted almost 20 years. But any
system – nonprofit or for-profit – needs sustainable funding.
That’s why my husband Vinod and I have stood behind this. We fund it
because we believe in funding every child.
Ksenia:
And how do children get access to it?
Neeru:
They can just join. It’s open and free. If they’re under 13, they need
teacher or parent permission. That’s one thing that breaks my heart a little
– because if you’re under 13, there are more barriers to independent
learning on platforms like this.
Ksenia:
That is a real barrier. There’s also a big difference between open-source
culture in software – where people help each other build things – and
education, where openness still feels unusual.
Neeru:
Yes. When we started, there was more conversation around OER – Open
Educational Resources. But not many survived.
Wikipedia was there, of course, but it wasn’t foundational for children.
Most things online were not at the right level for kids. They didn’t account
for prior knowledge, which is a major non-negotiable in learning. You have
to figure out what the child already knows. Otherwise, if something is too
hard or too easy, they lose interest.
That’s why cold start matters so much in learning systems.
Philanthropy, Access, and Human Potential
Ksenia:
If we go back to philanthropy, how do we make it more impactful in systems
that are so hard to penetrate?
Neeru:
Education is one of those things we simply cannot ignore. It is a human
rights issue. If the US – or any country – wants to remain strong, it has to
make sure the next generation is equipped.
And I think this can happen fast, because in countries like India and
China, children are hungry to learn. In many places, children understand
that education is their path forward. If you are born into hardship, your
family may not be able to help you study – but we can.
That is how you do it.
Ksenia:
My big hope is that AI can really accelerate that. Do you feel the same
way?
Neeru:
Absolutely. AI lets us deeply trace where a child misunderstood something
and help them recover faster. The current education system often expects a
child to make up years of missing knowledge in one year. That’s nearly
impossible without support.
That is why Flexi matters. It is there all the time.
Raising Grounded Children
Ksenia:
If I can ask something a little more personal – you really strike me as a
grounded and modest person. With all the resources you have, how do you
raise children to stay humble and grateful? What are your core values?
Neeru:
For me, the core values are about caring – for people, for animals, for the
world around you. My children have inherited that.
I also don’t think privilege should be something you flaunt. We always
encouraged our children not to chase the fanciest places or the most
prestigious labels, but to explore. We’d go to places like the Amazon or the
Galápagos – not for status, but for discovery.
When they were in middle school, we organized trips with other families and
a teacher to South America, where the children lived in villages and helped
with practical things – building toilets, helping where help was needed. We
wanted them to see how most people actually live.
Those are the values that matter: help where you can, whether through
money, mentorship, or simply showing up.
And I often think about how many educated women in the world have skills
and talent that go underused. What if more of them helped mentor children
around them? What if they became available to teach, guide, and support?
That could go a very long way.
Ksenia:
I see some of that in my community too, but we live in a small town. It
feels harder in big cities.
Neeru:
Maybe. But schools are still there. Parents can volunteer. In my children’s
school, many parents brought their expertise into the classroom and helped
teachers. You can do that. Show up.
Ksenia:
So it’s more about showing than telling.
Neeru:
Yes. Show up. All of us have to show up.
Intention, Not Just Attention
Neeru:
One more thing. In machine learning, people say attention is all you need.
That’s what taught the machines. But for humans – for education, for society
– attention is not enough.
We need intention. We need judgment.
Why am I learning this? Why does it matter? It cannot just be “let AI do it.”
That’s what I would say.
Hope and Concern in the AI Age
Ksenia:
What is most exciting for you in this AI world right now?
Neeru:
Two things.
First, I’m going to become a grandmother for the first time.
Ksenia:
Congratulations!
Neeru:
Thank you. But I mean something larger too: humanity continues. Even though
our time is limited, human beings continue. That matters.
Second, I really believe that if we use AI as augmented intelligence –
something that helps us – we can do wonders. Don’t be afraid of it.
That excites me.
Ksenia:
And what are your concerns?
Neeru:
Fear is a real concern. And sometimes fear is grounded in reality. Human
beings fight for dominance, for advantage, for shortcuts. There will always
be people who try to game the system. We’ve seen that in every technological
wave – from crypto scams to identity theft.
Those are real problems. But again, no technology arrives fully ready. Not
even a baby comes ready-made. We have to stay involved and keep making
systems better.
My concern is when a small number of people capture all the benefits and
hoard them. That worries me.
Books That Shaped Neeru
Ksenia:
I always end with a question about books. What’s a book that significantly
influenced you – either recently or from childhood?
Neeru:
There are several.
I learned to read English through Dr. Seuss and Archie comics. Later, The
Hobbit influenced me a lot. And then Man’s Search for Meaning by Viktor
Frankl had a very deep impact on me.
That book, with its reflections on concentration camps and how people kept
their sanity in terrible circumstances, really stayed with me. It made me
think about how we help children understand that life can be hard – but that
people are also deeply adaptable and resilient.
Ksenia:
Yes. People are very adaptable, and we want them to adapt toward something
better.
Neeru:
Exactly.
And I also love music. I love Hamilton. That line – I’m not throwing away
my shot – really resonates with me. I’m still not giving away my shot. I’m
going to keep taking it.
People say learning is lifelong, and I really believe that. I was around 50
when I went back to college for another degree. I was 52 when I started
working on CK-12.
Ksenia:
What’s next for you to learn?
Neeru:
So much. One area I keep thinking about is economics – especially the
economics of AI.
I’m not worried that AI will simply eliminate all work. Every major
technological shift has opened new kinds of work while removing some forms
of labor people no longer need to suffer through. AI will do some of that
too. But people will have to reskill.
That’s exactly what we want our children to learn: learn broadly, learn
quickly, keep adapting – but truly learn, and apply what you know.
This interview has been edited and condensed for clarity.
cfmlFiddle - Compare ColdFusion, Lucee, and BoxLang Side-by-Side
By: James Moberg
myCFML: 16/04/2026
I built a local CFML playground that runs 10 engines at once
I've been writing CFML for a long time. Long enough to remember when testing something meant editing a file, refreshing the browser, and hoping the server hadn't crashed. We have better tools now, but I kept running into the same problem: I'd want to check how something behaves on CF2021 vs CF2025, or whether Lucee handles a date function differently than Adobe, and there wasn't a good way to do that without maintaining a bunch of separate installs.
The online tools help. CFFiddle.org, TryCF.com, and Try BoxLang are all useful when you need a quick test. But I kept hitting limitations. CFFiddle requires a social login to use some features. TryCF doesn't indicate which patch version it's running against. Neither lets you compare engines side-by-side. And both go offline sometimes, usually right when I need them. :(
So I built cfmlFiddle.
What it is
cfmlFiddle is a self-hosted CFML playground. It runs on your machine through CommandBox. You write code in an Ace Editor, pick an engine (or all of them), click Run, and see the output. If you picked multiple engines, you see all the results stacked, side by side, or tabbed.
cfmlFiddle screenshot
The default config ships with 10 server definitions:
Adobe ColdFusion 2016, 2021, 2023, 2025
Lucee 5, 6, 7
BoxLang (native, Adobe compat, Lucee compat)
You start whichever ones you need. Most of the time I run two or three.
The part I actually wanted
My main gripe with the online tools was never being able to pin a version. If I'm debugging something a client reported on CF2021.0.14, I need to know what CF2021.0.14 does, not whatever patch the hosted service happens to be running this week.
With cfmlFiddle, each engine version is a CommandBox server.json file. You control which version is installed, down to the patch. You can keep CF2021.0.14 around for months if you need it, or install CF2025 the day it drops.
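For reference, a pinned engine definition in CommandBox is just a `server.json` whose `cfengine` slug names an exact patch release. The server name and port below are illustrative, not cfmlFiddle's shipped defaults:

```json
{
    "name": "cf2021-0-14",
    "app": {
        "cfengine": "adobe@2021.0.14"
    },
    "web": {
        "http": {
            "port": 8521
        }
    }
}
```

CommandBox resolves `adobe@2021.0.14` to that exact patch, so the engine stays frozen until you edit the slug yourself.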
The other thing: running code that the hosted services block. File operations, HTTP calls, Java objects, custom tags. cfmlFiddle doesn't restrict anything. It's your machine.
How the comparison works
Click "Run All Online" and cfmlFiddle sends your code to every running engine simultaneously via cfhttp. Each engine executes the same temp file from a shared webroot. The results come back with timing info, the engine name, and the actual patch version, so you can see exactly what ran where.
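Conceptually, the fan-out is the same request fired at every running engine in parallel, with timing captured per engine. Here's a minimal sketch in JavaScript — the tool itself does this server-side with cfhttp, and the engine names, ports, and payload filename below are made up for illustration:

```javascript
// Conceptual sketch of "Run All Online" (not the actual cfmlFiddle
// source): POST the same payload to every running engine in parallel
// and collect each engine's output, HTTP status, and timing.
// Requires Node 18+ for the global fetch.
async function runOnAllEngines(engines, code) {
  return Promise.all(
    engines.map(async ({ name, url }) => {
      const start = Date.now();
      const res = await fetch(url, { method: "POST", body: code });
      return {
        engine: name,
        status: res.status,
        ms: Date.now() - start,   // rough wall-clock time per engine
        output: await res.text(), // whatever the engine rendered
      };
    })
  );
}

// Hypothetical engine list — ports depend on your server.json files.
const engines = [
  { name: "CF2021",  url: "http://127.0.0.1:8521/payload.cfm" },
  { name: "Lucee 6", url: "http://127.0.0.1:8561/payload.cfm" },
];
```

Because `Promise.all` runs the requests concurrently, a slow engine doesn't serialize the others — the per-engine `ms` values stay independent.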
There's also an "Append" mode. Check the box and each run stacks on top of the previous results, so you can tweak your code and compare iterations without losing the earlier output.
Interactive mode
I wrote a test script with a <form> and immediately realized the form couldn't post back to itself. The result was static HTML in a div. The form action had nowhere to go.
cfmlFiddle now auto-detects forms in your output. When it finds one (or you check the Interactive box), it renders the result in a sandboxed iframe pointing directly at the payload file on the target engine. The form posts back to itself, processes the data, and returns the result. Multi-step scripts just work.
Under the hood
The status bar uses Server-Sent Events instead of polling. The heartbeat checks all engines with raw TCP socket connections (~50ms total for 10 servers) and streams updates to the browser in real time. It falls back to polling if SSE doesn't work on a particular engine.
Config lives in a JSON file above the webroot. You can change settings without editing CFML code.
All the frontend libraries (Ace, jQuery, SweetAlert2, jQuery contextMenu) ship locally in an assets/vendor/ directory. No CDN dependency by default, though you can flip a config switch if you prefer CDN.
Server management is built into the UI. Click the status bar to start, stop, or inspect engines. Left-click any server for a context menu with direct links to its admin panel, homepage, and documentation.
Session management
Every time you run code, cfmlFiddle saves the payload file with a timestamp. Click the Session button in the toolbar to see a list of everything you've run. Click any entry to reload it into the editor. When you're done, Archive All zips everything up and clears the working directory.
I kept losing track of what I'd tested ten minutes ago. Now I just open the session list and pick it.
The smaller stuff
There's a light/dark theme toggle. It picks up your OS preference by default, and the Ace editor switches to match. I bounce between light and dark depending on the time of day, so this was mostly for me.
You can import code from a GitHub Gist URL. Paste the link, it pulls the first file and drops it in the editor. Useful when someone shares a snippet and you want to see what it does on three engines before replying.
Snippets work the other direction too. Save whatever's in the editor as a named file, reload it later from the dropdown.
Each result card has a refresh button that re-executes and updates the timing, plus a dismiss button to toss results you don't need. Small thing, but it adds up when you're iterating.
We also put some work into keyboard accessibility: skip link, visible focus indicators, arrow keys on the splitter, ARIA roles on the toolbar and status bar.
Getting it
cfmlFiddle is open source under the MIT license.
Website: cfmlFiddle.com
Source: GitHub
You need CommandBox installed. Clone the repo, edit config.json with your box.exe path, run box task run launchCFMLFiddle, and pick an engine. It opens in your browser.
cfmlFiddle is a myCFML.com project, sponsored by SunStar Media.
Google Search - AI Mode (Gemini) was used to find a lightweight tool to visualise logs.
This is a first look at what might be possible. The tool needs to run on Windows and Linux. Work on this will start after the Log Engine (log) has been imported.
The output will be embedded on the Mission Control web pages. iframe?
Lightweight, open-source tools to visualise logs
By: Mike Peters and Gemini
On a Sandy Beach: 22/04/2026
Mike is the inventor and architect of Pipi and the founder of Ajabbi.
Gemini is cool
Several lightweight, open-source tools can visualise logs and are
specifically designed for web embedding. These range from full log
management platforms with embedded dashboards to standalone JavaScript
libraries for building custom visualisations. [1, 2, 3, 4]
Recommended Log Visualisation Tools
[GoAccess](https://goaccess.io/): A fast, real-time web log
analyser that can generate a self-contained, interactive HTML report. It
is designed for zero-overhead visibility and is ideal for developers who
need to embed server traffic statistics into a web page.
[SigNoz](https://signoz.io/): An open-source observability
platform that unifies logs, metrics, and traces. It is built on
OpenTelemetry standards and uses ClickHouse for high-performance,
cost-effective storage. It offers a variety of visualisation options, such
as charts and graphs, to gain insights into log
data.
OpenObserve: A lightweight, unified observability tool that can be
self-hosted as a single binary. It supports SQL-based queries and uses
highly efficient storage (up to 140x compression), making it extremely
cost-effective for high-volume logs.
Parseable: A lightweight observability platform built on a
telemetry data lake architecture. It enables storage, processing, and analysis of logs using SQL or natural language queries, and provides built-in dashboards that can serve as an alternative to the complex
Grafana stack.
[SigLens](https://siglens.com/): A free and open-source log
management platform that can ingest data from sources like Vector and
Splunk. It allows users to create visualisation dashboards and view data
in various formats (tables, logs, or single-line) using a built-in query
builder. [4, 5, 6, 7, 8, 9, 10, 11, 12]
Lightweight JavaScript Libraries for Embedding
If you prefer to build a custom UI, these libraries are frequently used to
embed interactive log-based charts:
Chart.js: One of the most popular lightweight open-source
libraries for creating responsive, customizable charts. It is simple to
use with web applications and can easily handle time-series data typical
of logs.
[Apache ECharts](https://echarts.apache.org/): A comprehensive
library that excels at handling large datasets with smooth performance,
using WebGL for rendering millions of data points. It supports over 20
pre-built series, including line, bar, and heat maps.
uPlot: A well-known, ultra-fast, and tiny library specifically optimised
for time-series data, often used as a lightweight alternative for embedding
charts in performance-sensitive web pages.
Plotly.js: A declarative charting library that supports over 40
chart types and uses WebGL for high-performance rendering of large
datasets, such as IoT sensor data or financial time series. [7, 13, 14,
15, 16]
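Whichever library you pick, the embedding side first needs the raw log lines bucketed into time-series points. A minimal Python sketch, assuming each line starts with a `YYYY-MM-DD HH:MM:SS` timestamp (the log format and function name are illustrative, not taken from any of the tools above):

```python
from collections import Counter
from datetime import datetime

def bucket_log_lines(lines, fmt="%Y-%m-%d %H:%M:%S"):
    """Count log lines per minute, returning (timestamp, count) pairs
    ready to feed a charting library. Assumes each line starts with a
    timestamp in `fmt`; lines that don't parse are skipped."""
    counts = Counter()
    for line in lines:
        try:
            ts = datetime.strptime(line[:19], fmt)
        except ValueError:
            continue
        counts[ts.replace(second=0)] += 1
    return sorted(counts.items())

lines = [
    "2026-04-22 13:24:01 INFO engine sys started",
    "2026-04-22 13:24:59 WARN variable scope clash",
    "2026-04-22 13:25:10 INFO engine nsp imported",
]
points = bucket_log_lines(lines)
# two points: (13:24, 2) and (13:25, 1)
```

The same pairs can then be serialised to JSON for Chart.js, ECharts, uPlot, or Plotly.js.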
The next long batch of work starts today. I'm working this out as I go,
and I don't yet know how long this will take. The plan will likely
change. I hope the start will be the hardest part, and then it will get easier.
But I have been wrong before. 😎😎😎😎😎😎😎
Dwight D. Eisenhower’s philosophy on planning is best summarized
by his famous quote:
"Plans are worthless, but planning is everything."
He emphasized that while rigid, written plans fail upon first contact
with reality (or the enemy), the process of planning prepares leaders to
adapt, coordinate, and react intelligently to unexpected emergencies. -
Wikipedia
Big picture
I want to double-check everything as I go and complete or archive any
unfinished work without doing upgrades.
Mrs Grammarly
And fix all my spelling mistakes. Some of this was built years before I
had Mrs Grammarly and is only now being discovered. A lot of
spelling mistakes. Oh dear! 😎😎
Mrs. Grammarly's Last Request
Noisy, noisier, noisiest
Our teacher's last request
Before she took the plane
To somewhere warm in Spain
Because her nerves were broken
By words so loudly spoken
For twenty-five years without
A respite, there's no doubt
Just remember this rule please
She said with great unease
Drop the y and add an i
When comparing things whereby
You'll cheer this poor old teacher
And peace will finally reach her.
- Old Faithful
Speed is king
Go as fast as hell. Trust Pipi's ability to self-repair.
Update 22/04/2026
Again, as with the recent Pipi Nest update, progress is
painfully slow but steady. I now strongly suspect that the changes I need
to make to System Engine (sys) will apply to all engines. Once the
necessary solution is found, every engine will be easily altered. So very
slow at the beginning, then very fast.
Update 23/04/2026
It's definitely a problem with named-variable clashes and
variable scopes. Now I know how to fix it.
Anatomy 101
The Pipi System Engine (sys) is like the whole (hence the name
"loki"). It has an outer boundary and is built from hundreds of other kinds
of engines nested up to 27 layers deep. There can be many copies of each
engine type. There are also hundreds more engines waiting to be imported
from the Pipi 6, 7, and 8 archives.
Engine-Agent Duality
Each engine is also an autonomous agent: it uses no tokens, is
stateful, is imprinted by the path it has taken, and learns slowly. There
are no prompts.
Pipi Nest
The nest is a container.
It acts as an interface between:
The external environment
Computer
O/S
Application Server
Pipi System Engine (sys)
The nest has hidden internal Pipi Variables like:
PIPI_JAVA_EDITION
PIPI_NEST_NAME
PIPI_SYSTEM_OS
The only engine the nest can communicate with is the Pipi
System Engine (sys).
Test Process
It makes sense to move each engine one by one and check that messaging is
working using the Pipi Variables.
The first engine to test is the Pipi System Engine (sys). Once
tested, it will be left running, with live logs for monitoring.
Down the rabbit hole
Its internal engines will then be added one by one for testing and
commissioning. Most engines should be good to go, but everything needs
to be carefully checked.
Each engine consists of one or more engines. These engines can communicate
with each other. There are internal structures that act as membranes and
pathways. Each engine can also operate at reduced capacity if it is the only
engine, which will help with staging and initial testing. Then watch the
logs as more engines are slowly added.
Critical threshold
There are about 20 of these engines necessary for emergent behaviour to
become dominant enough for self-management to operate reliably. Those are
the engines I will start on first. Once a working system is back in place,
they will be able to assist in speeding up the import of the
remaining engines using the Agent Workspace UI. A set of simple web forms should do
it.
Engine Logs
Things that I have seen before to watch out for include:
Power Laws
Fractals
Noise
and other crazy stuff 😎
It's Pipi's patterns of behaviour that fascinate me. Maybe it's from
the feedback loops?
Mission Control
I need to find a light, open-source graphing tool that can visualise these
logs and be embedded on web pages. Like Houston, there will be a big live
screen monitoring everything. As progress is made, a video camera can point
at the screen to live-broadcast on YouTube for anyone who's curious or for online talks. This maintains a physical network separation
of the data centre from the internet.
DevOps log
A record of work done.
| NZ DateTime | Action | Engine | Status |
| --- | --- | --- | --- |
| 2026-04-21 11:58 | Import | System Engine (sys) | Complete |
| 2026-04-22 11:44 | Edit | System Engine (sys) - Application.cfc | Complete |
| 2026-04-22 13:24 | Edit | System Engine (sys) - reassign VARIABLE scopes | Complete |
| 2026-04-22 13:31 | Test | System Engine (sys) - connect to Nest /9cc/ | Success |
| 2026-04-22 13:45 | Edit | Rename /pipi/pipi_system.cfm as pip/pipi_version.cfm | Complete |
| 2026-04-22 13:48 | Edit | Add /sys/pipi_system.cfm | Complete |
| 2026-04-22 18:13 | Move | Migrate databases | Complete |
| 2026-04-22 18:47 | Test | System Engine (sys) - consistent variable names, checked against the Namespace Engine (nsp) | Success |
| 2026-04-24 10:55 | Create | Nest Engine (nst) | Complete |
| 2026-04-24 10:58 | Import | Namespace Engine (nsp) | Complete |
| 2026-04-25 17:19 | Test | Nest Engine (nst) - accurate nest build definitions | |
Last Updated
20/04/2026
Claude Code Used to Find Remotely Exploitable Linux Kernel Vulnerability Hidden for 23 Years
By: Steef-Jan Wiggers
InfoQ: 15/04/2026
Steef-Jan Wiggers is one of InfoQ's senior cloud editors and works as a Domain Architect at VGZ in the Netherlands. His current technical expertise focuses on implementing integration platforms, Azure DevOps, AI, and Azure Platform Solution Architectures. Steef-Jan is a regular speaker at conferences and user groups and writes for InfoQ. Furthermore, Microsoft has recognized him as a Microsoft Azure MVP for the past sixteen years.
Anthropic research scientist Nicholas Carlini reported at the [un]prompted AI security conference that he used Claude Code to discover multiple remotely exploitable security vulnerabilities in the Linux kernel, including a heap buffer overflow in the NFS driver that has been present since 2003. The bug has since been patched, and Carlini has identified a total of five Linux kernel vulnerabilities so far, with hundreds more potential crashes awaiting human validation.
Michael Lynch wrote a detailed breakdown of the findings based on Carlini's conference talk. What makes the discovery notable is not just the age of the bug but how little oversight Claude Code needed to find it. Carlini used a simple bash script that iterates over every source file in the Linux kernel and, for each file, tells Claude Code it is participating in a capture-the-flag competition and should look for vulnerabilities. No custom tooling, no specialized prompts beyond biasing the model toward one file at a time:
```shell
# Iterate over all files in the source tree.
find . -type f -print0 | while IFS= read -r -d '' file; do
  # Tell Claude Code to look for vulnerabilities in each file.
  claude \
    --verbose \
    --dangerously-skip-permissions \
    --print "You are playing in a CTF. \
Find a vulnerability. \
hint: look at $file \
Write the most serious \
one to the /output dir"
done
```
The NFS vulnerability itself required understanding intricate protocol details. The attack uses two cooperating NFS clients against a Linux NFS server. Client A acquires a file lock with a 1024-byte owner ID, which is unusually long but legal. When Client B then attempts to acquire the same lock and gets denied, the server generates a denial response that includes the owner ID. The problem is that the server's response buffer is only 112 bytes, but the denial message totals 1056 bytes. The kernel writes 1056 bytes into a 112-byte buffer, giving the attacker control over overwritten kernel memory. The bug was introduced in a 2003 commit that predates git itself.
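The size arithmetic in the report can be checked directly. A small Python sketch of the overflow maths; the 32-byte figure for the fixed, non-owner-ID portion of the denial message is derived from the reported totals, not read from the kernel source:

```python
# Sizes reported in the write-up above.
BUF_SIZE = 112          # server's response buffer, in bytes
OWNER_ID_LEN = 1024     # Client A's unusually long but legal owner ID
DENIAL_LEN = 1056       # total size of the denial response

# The fixed, non-owner-ID portion of the message follows from the totals.
fixed_fields = DENIAL_LEN - OWNER_ID_LEN   # 32 bytes of protocol fields
overflow = DENIAL_LEN - BUF_SIZE           # bytes written past the buffer
# The kernel writes 1056 bytes into the 112-byte buffer: 944 bytes of
# attacker-influenced data land in adjacent kernel memory.
```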
The model progression is arguably the most significant part of the story for practitioners. Carlini tried to reproduce his results on earlier models and found that Opus 4.1, released eight months ago, and Sonnet 4.5, released six months ago, could only find a small fraction of what Opus 4.6 discovered. That capability jump in a matter of months suggests the window in which AI-assisted vulnerability discovery becomes routine is narrowing fast.
This aligns with what Linux kernel maintainers are seeing from the other side. As shared in a Reddit thread discussing the findings, Greg Kroah-Hartman, one of the most senior Linux kernel maintainers, described the shift:
Something happened a month ago, and the world switched. Now we have real reports... All open source security teams are hitting this right now.
Willy Tarreau, another kernel maintainer, corroborated this on LWN, noting that the kernel security list went from 2-3 reports per week to 5-10 per day, and that most of them are now correct.
The false positive question remains open. Carlini has "several hundred crashes" he hasn't had time to validate, and he is deliberately not sending unvalidated findings to kernel maintainers. On Hacker News, Lynch (the blog post author) stated that in his own experience using Claude Opus 4.6 for similar work, the false positive rate is below 20%.
Salvatore Sanfilippo, creator of Redis, commented on the same Hacker News thread that the validation step is increasingly being handled by the models themselves:
The bugs are often filtered later by LLMs themselves: if the second pipeline can't reproduce the crash / violation / exploit in any way, often the false positives are evicted before ever reaching the human scrutiny.
Thomas Ptacek, a security researcher who has spent most of his career in vulnerability research, argued on Hacker News that LLM-based vulnerability discovery represents a fundamentally different category of tool:
If you wanted to be reductive you'd say LLM agent vulnerability discovery is a superset of both fuzzing and static analysis.
Ptacek elaborated that static analyzers generate large numbers of hypothetical bugs that require expensive human triage, and fuzzers find bugs without context, producing crashers that remain unresolved for months. LLM agents, by contrast, recursively generate hypotheses across the codebase, take confirmatory steps, generate confidence levels, and place findings in context by spelling out input paths and attack primitives.
The dual-use concern was raised repeatedly across both discussion threads. As one Reddit commenter put it:
If AI can surface 23-year-old latent vulnerabilities in Linux that human auditors missed, adversaries with the same capability can run that process against targets at scale.
Carlini's five confirmed Linux kernel vulnerabilities span NFS, io_uring, futex, and ksmbd, all of which have kernel commits now in the stable tree. The [un]prompted talk is available on YouTube.
Last Updated
19/04/2026
How to Accelerate Protein Structure Prediction at Proteome-Scale
By: Christian Dallago, Kyle Tretina, Kyle Gion and Neel Patel
NVIDIA Developer: 09/04/2026
Chris Dallago is a computer scientist turned bioinformatician who
passionately models biological mechanisms using machine learning. He has
advanced bio-sequence representation learning, contributing to its
establishment, notably in transformer models. Chris is dedicated to
solving scarce-data problems, such as designing proteins for therapeutic
and industrial applications.
Kyle Tretina is a product marketing leader at NVIDIA, focused on
advancing AI for digital biology and drug discovery. He drives the
strategy and storytelling behind BioNeMo and our work with BioPharma,
shaping how next-generation foundation models and GPU-accelerated
microservices transform molecular and protein design. With a PhD in
molecular microbiology and immunology, Kyle bridges science and strategy,
translating breakthroughs in AI, chemistry, and biology into platforms
that accelerate discovery for researchers, startups, and pharmaceutical
companies worldwide.
Kyle Gion is a product manager for Research at NVIDIA, where he
translates R&D in digital biology and molecular science into impactful
products. He focuses on guiding research that applies computational
biology, computational chemistry, and AI to life sciences, drawing on
experience that spans both building scientific software and developing
cystic fibrosis therapies. Kyle earned his bachelor's and master's degrees
in Chemical Engineering from Brown University.
Neel Patel is a drug discovery scientist at NVIDIA, focusing on
cheminformatics and computational structural biology. Before joining
NVIDIA, Neel was a computational chemist in big pharma, where he worked on
structure-based drug design. He holds a Ph.D. from the University of
Southern California. He lives in San Diego with his family and enjoys
hiking and traveling.
Proteins rarely function in isolation as individual monomers. Most
biological processes are governed by proteins interacting with other
proteins, forming protein complexes whose structures are described in the
hierarchy of protein structure as the quaternary representation.
This is one level of complexity up from tertiary representations, the 3D
structures of monomers, which have become widely available since the
emergence of AlphaFold2 and through the Protein Data Bank.
Structural information for the vast majority of complexes remains
unavailable. While the AlphaFold Protein Structure Database (AFDB), jointly
developed by Google DeepMind and EMBL’s European Bioinformatics Institute
(EMBL-EBI), transformed access to monomeric protein structures,
interaction-aware structural biology at the proteome scale has remained a
bottleneck with unique challenges:
Massive combinatorial interaction space
High computational cost for multiple sequence alignment (MSA) generation
and protein folding
Inference scaling across millions of complexes
Confidence calibration and benchmarking
Dataset consistency and biological interpretability
In recent work, we extended the AFDB with large-scale predictions of
homomeric protein complexes generated by a high-throughput pipeline based on
AlphaFold-Multimer—made possible by NVIDIA accelerated computing.
Additionally, we predicted heteromeric complexes to compare the accuracy of
different complex prediction modalities.
In particular, for the predictions of these datasets, we leveraged
kernel-level accelerations from MMseqs2-GPU for MSA generation, and NVIDIA
TensorRT and NVIDIA cuEquivariance for deep-learning-based protein folding.
We then mapped the workload to HPC-scale inference by maximizing the
utilization of all available GPUs, including scale-out to multiple
clusters.
This blog describes the major principles we adopted to increase protein
folding throughput, from adopting libraries and SDKs to optimizations to
reduce the computational complexity of the workload. These principles can
help you set up a similar pipeline yourself by borrowing from the techniques
we used to create this new dataset.
Bioinformatician team building structural resources
You will learn how to:
Design a proteome-scale complex prediction strategy
Separate MSA generation from structure inference for efficiency
Scale AlphaFold-Multimer workflows across GPU clusters
Prerequisites
Technical knowledge
Python and shell scripting
SLURM as HPC workload scheduler
Basic structural biology
Familiarity with AlphaFold/ColabFold/OpenFold or similar pipelines
Infrastructure
We describe scaling on a multi-GPU and multi-node NVIDIA DGX H100 Superpod
cluster
This cluster includes high-speed storage to store MSAs and intermediate
outputs
Software
Access to MMseqs2-GPU
Familiarity with TensorRT
If not using a model with integrated cuEquivariance, knowledge of
triangular attention and triangular multiplication operations
Procedure/Steps
1. Define the dataset you’d like to compute
Begin by defining the scope of prediction. Because predicting protein
complexes can become a combinatorial problem, it’s useful to understand what
may be most interesting. In some cases, if your proteomes are small enough,
an all-against-all (dimeric) complex prediction might be tractable; however,
this could change if you want to predict large datasets of proteomes.
Here’s how we decided to go about it:
Homomeric complexes: We selected all proteomes represented in the
AFDB and sorted them by perceived importance (e.g., proteomes of human
concern or commonly accessed). This allowed us to rank proteomes for
computation in a particular order, making execution more manageable.
Heteromeric complexes: This is where things can get complicated,
fast. For our heteromeric runs, we decided to focus on complexes
originating from several reference proteomes and proteomes included in the
WHO list of important proteomes. As there’s an intractable number of
combinations of complexes that can be derived from these proteomes, for
our runs, we focused on dimers (complexes of two proteins), within the
same proteome (no inter-proteome complexes) that had “physical”
interaction evidence in STRING. As we sought coverage, we decided to
consider all interactions reported in STRING for these proteomes, rather
than further filtering. Evidence in the literature suggests that filtering
for STRING scores >700 can further reduce the number of inputs while
increasing the likelihood of well-predicted complexes.
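As a sketch of that selection rule, assuming a simplified record layout (the real STRING export schema differs), the intra-proteome, physical-evidence, score-threshold filtering might look like:

```python
def select_dimer_candidates(interactions, min_score=0):
    """Pick intra-proteome dimer candidates from STRING-style records.
    Each record is (protein_a, protein_b, proteome_a, proteome_b,
    evidence_type, score); this field layout is illustrative, not the
    actual STRING schema."""
    return [
        (a, b)
        for a, b, pa, pb, evidence, score in interactions
        if pa == pb                   # same proteome only, no inter-proteome
        and evidence == "physical"    # physical interaction evidence
        and score > min_score         # e.g. 700 to favour well-predicted complexes
    ]

interactions = [
    ("P1", "P2", "human", "human", "physical", 850),
    ("P1", "P3", "human", "human", "physical", 400),
    ("P1", "Q9", "human", "yeast", "physical", 900),   # inter-proteome: excluded
    ("P2", "P4", "human", "human", "coexpression", 950),
]
all_pairs = select_dimer_candidates(interactions)          # coverage-first run
strict_pairs = select_dimer_candidates(interactions, 700)  # score-filtered run
```

The coverage-first call mirrors the run described above; the second shows the optional score > 700 filter.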
2. Decouple MSA generation from structure prediction
MSA generation and structure inference are both compute-intensive but scale
differently, as we recently presented in a white paper. We thus approached
these computations as separate steps and implemented separate SLURM
pipelines. In general, for optimal use of a node, we set up MSA generation
and structure prediction this way.
MSA generation
We generated MSAs using colabfold_search with the MMseqs2-GPU backend.
While MMseqs2-GPU scales across the GPUs on a node natively, we chose to spawn
one MMseqs2-GPU server process per GPU on a node for easier process
management. In colabfold_search, the GPUs are only used for the
ungapped filter stages and not the subsequent alignment stages (which are
multithreaded CPU processes).
Therefore, we can stack colabfold_search calls and start the next one once
the GPU is no longer used by the previous one, by monitoring the
colabfold_search output, to reduce GPU idle time.
Although this approach oversubscribes CPU resources, in practice we found
that on a DGX H100 node, an increase in overall throughput of up to 25% can
be achieved with three staggered colabfold_search processes, at the expense
of slower processing of individual input chunks.
On determining reasonable input chunk sizes, there are two factors to
consider. Smaller chunk sizes result in more chunks, which means more
per-process overheads, such as database loading, which can take a couple of
minutes each, even on fast storage. (Pre-staging the databases on the
fastest storage available, such as the on-node SSD, helps with throughput as
well.) On the other hand, larger chunks take more time to finish. On a SLURM
cluster with a job time limit, this results in more unfinished chunks.
The sweet spot will depend on the cluster configuration, but for our DGX
H100 nodes with a 4-hour wall-time limit, a chunk size of 300 sequences
worked well with the staggered colabfold_search approach.
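The chunking itself is trivial; the judgement is in the chunk size. A minimal sketch, using 300 as the default from the setting above (function name and list-of-IDs input are illustrative):

```python
def chunk_sequences(seq_ids, chunk_size=300):
    """Split input sequence IDs into fixed-size chunks, one chunk per
    colabfold_search job. Smaller chunks mean more per-process overhead
    (e.g. database loading); larger chunks risk hitting the SLURM wall
    time before finishing."""
    return [seq_ids[i:i + chunk_size]
            for i in range(0, len(seq_ids), chunk_size)]

chunks = chunk_sequences([f"seq{i}" for i in range(750)])
# three chunks of 300, 300, and 150 sequences
```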
Structure prediction
In order to increase structure prediction throughput, we leveraged both
optimizations in data handling for JAX-based folding through ColabFold, as
well as accelerated tooling developed at NVIDIA, including TensorRT, and
cuEquivariance for OpenFold-based folding.
Deep learning inference parameters
First, we selected inference parameters that struck a good balance between
accuracy and speed. The inference setup for all deep learning pipelines
(ColabFold and OpenFold) thus used:
Weights: 1x weights from AlphaFold Multimer (model_1_multimer_v3)
Four recycles (with early stopping)
No relaxation
MSAs: frozen MSAs generated through ColabFold-search (using MMseqs2-GPU),
as described above
Accuracy validation
Homodimer PDB set (125 proteins)

| Model | High (DockQ > 0.8) | Medium (DockQ > 0.6) | Accept (DockQ > 0.3) | Incorr (DockQ > 0) | Usable | Mean DockQ |
| --- | --- | --- | --- | --- | --- | --- |
| ColabFold | 52 | 37 | 12 | 21 | 89 (72.95%) | 0.637 |
| OpenFold with TensorRT and cuEquivariance | 53 | 39 | 10 | 20 | 92 (75.41%) | 0.647 |

Table 1. A comparison of interface accuracy between ColabFold and
OpenFold (accelerated by TensorRT and cuEquivariance) across a benchmark
set of 125 homodimer proteins.
As we used different inference pipelines, we performed accuracy validation
using a curated benchmark set of 125 X-ray resolved PDB homodimers released
after AlphaFold2 was introduced, thus minimizing the potential for
information leakage.
Predicted complexes for each deep learning implementation were compared
against experimental reference structures using DockQ, which evaluates
interface accuracy via the fraction of native contacts (Fnat), fraction of
non-native contacts (Fnonnat), interface RMSD (iRMS), and ligand RMSD after
receptor alignment (LRMS), and assigns standard CAPRI classifications of
high, medium, acceptable, or incorrect.
Across the PDB homodimer benchmark, OpenFold accelerated through TensorRT
and cuEquivariance reproduces ColabFold interface accuracy, achieving a
similar fraction of “high” scoring predictions and comparable mean DockQ
scores. This indicates that the accelerated implementations preserve
interface-level structural accuracy relative to the ColabFold
baseline.
MSA preparation and sequence packing
For ColabFold-based homodimer inferences, higher throughput can be achieved
by packing homodimers of equal length into a batch for processing, sorted by
their MSA depth in descending order. This reduces the number of JAX
recompilations, thereby increasing end-to-end throughput. This trick,
however, does not work when processing heterodimers, because the lengths of
the individual chains differ.
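A minimal sketch of that packing strategy, assuming homodimer inputs are available as (id, length, MSA depth) records (an illustrative layout, not ColabFold's actual input format):

```python
from collections import defaultdict

def pack_homodimers(entries):
    """Group homodimer inputs by sequence length so each batch triggers
    at most one JAX recompilation, then sort each batch by MSA depth,
    deepest first. `entries` are (seq_id, length, msa_depth) tuples."""
    batches = defaultdict(list)
    for seq_id, length, msa_depth in entries:
        batches[length].append((seq_id, msa_depth))
    return {
        length: [sid for sid, _ in sorted(group, key=lambda e: -e[1])]
        for length, group in batches.items()
    }

entries = [("a", 120, 512), ("b", 120, 2048), ("c", 95, 64)]
packed = pack_homodimers(entries)
# {120: ["b", "a"], 95: ["c"]}
```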
For OpenFold, whether for homodimers or heterodimers, this packing strategy
is not needed, as the method doesn’t require re-compilation. However, given
a dependency between sequence length and execution time, reserving longer
sequences for individual jobs may be beneficial if operating with specific
SLURM runtimes. To further optimize the process, input featurizations
(CPU-bound) were performed for the next input query alongside the inference
step for the current query (GPU-bound).
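The featurize-ahead overlap described above can be sketched with a single background worker: the next query is featurized on the CPU while the current one runs inference. `featurize` and `infer` here are stand-ins for the real pipeline steps:

```python
from concurrent.futures import ThreadPoolExecutor

def run_pipeline(queries, featurize, infer):
    """Overlap CPU-bound featurization of the next query with the
    GPU-bound inference step on the current one (double buffering)."""
    results = []
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(featurize, queries[0])
        for nxt in queries[1:]:
            feats = pending.result()                # wait for current features
            pending = pool.submit(featurize, nxt)   # prepare next input on CPU
            results.append(infer(feats))            # run inference meanwhile
        results.append(infer(pending.result()))     # drain the last query
    return results

out = run_pipeline(["q1", "q2", "q3"],
                   featurize=lambda q: q + ":feat",
                   infer=lambda f: f + ":pred")
```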
Additionally, OpenFold’s throughput was enhanced through the integration of
the NVIDIA cuEquivariance library and NVIDIA TensorRT SDK. These modular
libraries and SDKs can be leveraged to accelerate operations common in
protein structure AI and general inference AI workloads, respectively. We
previously described how TensorRT can be leveraged to accelerate OpenFold
inference.
3. Optimize GPU utilization with SLURM
As alluded to in the previous section, depending on the available hardware,
you can increase throughput by “packing” GPUs and nodes. SLURM is a great
orchestrator, and we divided the inference workflows into SLURM scripts
to:
Pack multiple predictions per node
Match GPU memory to sequence length
Reduce idle time between jobs
Separate short vs long sequence queues
Our workload was mapped to an H100 DGX SuperPOD HPC system. We could thus
deploy inference across NVIDIA H100 GPUs on multi-node clusters, leveraging
exclusive execution on a single node, and packing each GPU with as many
processes as saturated the GPU utilization for both MSA processing and deep
learning inference.
Helpful tips:
Group jobs by total residue length
Monitor GPU memory fragmentation
Use asynchronous I/O to avoid disk bottlenecks
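As a sketch of the grouping logic behind those tips, assuming jobs are (name, total residue length) pairs, with an illustrative short/long cutoff and per-node residue budget:

```python
def group_jobs_by_length(jobs, short_cutoff=1500, node_budget=4000):
    """Split prediction jobs into a short queue (packed greedily, largest
    first, into per-node buckets) and a long queue (one job each).
    The cutoff and budget values are illustrative, not from the paper."""
    short = [j for j in jobs if j[1] <= short_cutoff]
    long_ = [j for j in jobs if j[1] > short_cutoff]
    buckets, current, used = [], [], 0
    for name, length in sorted(short, key=lambda j: -j[1]):
        if used + length > node_budget and current:
            buckets.append(current)        # node bucket is full, start another
            current, used = [], 0
        current.append(name)
        used += length
    if current:
        buckets.append(current)
    return buckets, [name for name, _ in long_]

jobs = [("dimerA", 800), ("dimerB", 2400), ("dimerC", 1200), ("dimerD", 900)]
buckets, long_queue = group_jobs_by_length(jobs)
# buckets packs the three short jobs onto one node; dimerB goes to the long queue
```

Each bucket then maps naturally onto one exclusive-node SLURM job.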
4. Make quality predictions accessible to the world
In partnership with EMBL-EBI, the Steineggerlab at Seoul National
University, and Google DeepMind, we explored complex structure prediction
analysis. We highlight that predicting these biological systems remains
challenging. Unlike protein monomer prediction, where predicted Local
Distance Difference Test (pLDDT) can inform overall prediction quality,
yielding a balanced amount of plausible predictions, in the complex
scenario, assessing interface plausibility is much harder. This has to do
with the fact that assessing complexes involves global and per-chain
confidence metrics, as well as local confidence metrics at the
interface.
Simply put, is the interface between two monomers plausible, and is it
predicted in the right pocket? These questions are much harder to answer
than more “local” questions about monomer likelihood, given the very limited
data available. Therefore, we make available a set of high-confidence
structures through the AlphaFold Database, thereby enabling, for the first
time, exploration of protein complexes. We intend to refine our approach
further and expand the universe of available protein complexes in the
AlphaFold Database.
Getting started
Proteome-scale quaternary structure prediction requires more than just
running AlphaFold-Multimer at scale. Success depends on:
Evidence-driven interaction selection
Decoupled and optimized compute workflows
GPU-aware job orchestration
Confidence calibration and validation
Dataset health monitoring
By combining STRING-guided selection, MMseqs2-GPU acceleration, and NVIDIA
H100-powered multimer inference, this work extends AFDB into a unified,
interaction-aware structural resource.
This infrastructure enables:
Variant interpretation at interfaces
Systems-level structural biology
Drug target validation
Generative protein design benchmarking
Resources
Read more about the project here:
https://research.nvidia.com/labs/dbr/assets/data/manuscripts/afdb.pdf
Accelerated libraries and SDKs are available here:
MMseqs2-GPU
NVIDIA cuEquivariance
NVIDIA TensorRT
If you wish to deploy MSA search and protein folding easily, you can get
accelerated inference pipelines through NVIDIA’s Inference Microservices
(NIMs):
MSA Search NIM
OpenFold2 NIM
The predictions from this effort are available through
https://alphafold.com