Alchemy Neural Network Builder

Epoch

Dataset

Select a dataset to train on.

Features

Select which features you want to feed into the neural network as inputs.

Click anywhere to edit.
Weight/Bias is 0.2.
This is the output from one neuron. Hover to see it larger.
The outputs are mixed with varying weights, shown by the thickness of the lines.

Output

Test loss
Training loss
Colors show data, neuron, and weight values.

Alchemy - Explainable Artificial Intelligence

By Theta Diagnostics

In order for Artificial Intelligence (AI) to be used more prolifically throughout healthcare, it must be made more transparent.

That is why Theta Diagnostics developed Alchemy: an AI platform designed from the ground up to be transparent and understandable, helping foster explainability as artificial intelligence evolves.

Ten years ago, machine learning (ML) was generally unimodal: models ingested one type of data and performed pattern recognition or prediction, and they required hundreds of thousands of training datapoints to achieve robust performance. Advances in AI research have since opened the door to more complex applications. Today's applications demand recognizing patterns across multiple modalities using increasingly large models. While these models have improved in performance, they have declined in explainability and transparency. Transparency in AI is key to building trust and enabling collaboration with humans in highly regulated fields like healthcare.

Many technology companies are focused on creating smarter, larger, and more capable models, but not more explainable ones. Their customers are not (yet) strongly focused on transparency because they are drawn instead to raw performance. This neglect of model interpretability creates a problem in healthcare, where there is strong interest in AI but a lack of trust for precisely this reason. With physician-level performance come human-level expectations about the explainability and transparency of decisions, regardless of whether the decision maker is an AI or a human. At Theta, our mantra with Alchemy is that AI should be able to explain its decisions just as a human can.

Here, we have provided you with a few toy datasets that demonstrate the transparency of Alchemy. You can select a "classification" or "regression" task, configure the neural network, and watch how each configuration change influences the training result. The intent here is to show how neural networks can be made inherently transparent through the use of Alchemy.
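To make the mechanics concrete, here is a minimal sketch of what the demo does under the hood: a small neural network with one hidden layer trained by gradient descent on a generated "two circles" classification set. The dataset shape, layer sizes, and learning rate are illustrative assumptions for this sketch, not Alchemy's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "two circles" data: inner disc = class 0, outer ring = class 1.
n = 200
radius = np.concatenate([rng.uniform(0.0, 1.0, n), rng.uniform(2.0, 3.0, n)])
angle = rng.uniform(0.0, 2 * np.pi, 2 * n)
X = np.stack([radius * np.cos(angle), radius * np.sin(angle)], axis=1)
y = np.concatenate([np.zeros(n), np.ones(n)])

# One hidden layer (8 tanh units), sigmoid output, logistic loss.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.5

def forward(X):
    h = np.tanh(X @ W1 + b1)               # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))   # predicted probability of class 1
    return h, p.ravel()

losses = []
for epoch in range(300):
    h, p = forward(X)
    losses.append(-np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9)))
    # Manual backpropagation: compute all gradients before updating weights.
    dz2 = (p - y)[:, None] / len(y)
    dW2 = h.T @ dz2;  db2 = dz2.sum(0)
    dz1 = (dz2 @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dz1;  db1 = dz1.sum(0)
    W2 -= lr * dW2;   b2 -= lr * db2
    W1 -= lr * dW1;   b1 -= lr * db1
```

Each pass through the loop corresponds to one "epoch" in the UI, and the per-epoch `losses` values are what the training-loss curve plots.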

Note that training occurs locally in your browser, not in Theta's cloud. Training data for each of the toy datasets is also generated within the browser to respect client privacy.
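In-browser data generation can be sketched as follows: the client synthesizes labelled points locally instead of downloading them, so nothing leaves the machine. The demo itself runs in JavaScript; this Python sketch with a hypothetical two-cluster generator is only illustrative of the pattern, not Alchemy's actual generator.

```python
import numpy as np

def make_two_clusters(n_per_class=100, seed=0):
    """Generate two Gaussian blobs, labelled 0 and 1, as a toy classification set."""
    rng = np.random.default_rng(seed)
    a = rng.normal(loc=(-2.0, -2.0), scale=1.0, size=(n_per_class, 2))
    b = rng.normal(loc=(2.0, 2.0), scale=1.0, size=(n_per_class, 2))
    X = np.concatenate([a, b])
    y = np.concatenate([np.zeros(n_per_class), np.ones(n_per_class)])
    return X, y

# Everything is created client-side; a fresh seed yields a fresh dataset.
X, y = make_two_clusters()
```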