RunnableAnalysis class

An easier way to replay your analysis on new data


The RunnableAnalysis class is returned when you specify return_type='analysis':

analysis = spreadsheet(return_type='analysis')

Why Use the RunnableAnalysis class?

Mito is built as a tool for automation. When you make edits in the Mitosheet, it generates code that can replay those edits on new datasets. To make that automation easier in your dashboard app, you can use the RunnableAnalysis class.

First, to help you rerun the analysis on new data, the RunnableAnalysis class gives you access to the parameters: the things you can change when re-running the analysis. Currently, parameters are either import or export locations.

Then, when you're ready to re-run your analysis, the RunnableAnalysis.run() function lets you overwrite those parameters with new data. For example, you can apply the same set of edits to two different CSV files.

To see a fully executable example, see the Example Usage section below.

API

get_param_metadata(param_type: Literal['import', 'export'])

Use get_param_metadata to access all of the parameters that you can override in your analysis. You can also filter for imports or exports if you only want to override one of those types.

This is useful for displaying inputs on a dashboard that are then used when rerunning the analysis.

The return type of this function is a list of ParamMetadata objects. They'll look like this:

class ParamMetadata(TypedDict):
    type: ParamType
    subtype: ParamSubtype
    required: bool
    name: str
    original_value: Optional[str]

required

Some parameters are marked as required, meaning they are required arguments to the run function. This happens when a dataframe was passed as a positional argument to the spreadsheet function: its value isn't stored in the ParamMetadata, so you must supply one when rerunning.

name

This is the name of the variable for this parameter in the generated code. It can be used for display, but its main use is passing the parameter to the run function as a keyword argument.

original_value

This is the value that was originally used for this parameter when the analysis was created. The run function defaults to this value if you don't pass the parameter.
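As a sketch of how these fields fit together (using hypothetical, hard-coded ParamMetadata-shaped dicts rather than a live analysis), you can split parameters into those run() requires and those with stored defaults:

```python
# Hypothetical ParamMetadata-shaped dicts; a real list would come from
# analysis.get_param_metadata()
params = [
    {'type': 'import', 'subtype': 'import_dataframe', 'required': True,
     'name': 'df', 'original_value': None},
    {'type': 'import', 'subtype': 'file_name_import_csv', 'required': False,
     'name': 'file_name_import_csv_0', 'original_value': 'old_data.csv'},
]

# Required params have no stored value, so run() needs them passed explicitly
must_provide = [p['name'] for p in params if p['required']]

# Optional params fall back to original_value unless you override them
defaults = {p['name']: p['original_value'] for p in params if not p['required']}
```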

Type/Subtype

The ParamType and ParamSubtype types describe how the parameter is used. The type of a parameter is either 'import' or 'export', and the subtype describes whether the file is a CSV, an Excel file, or was passed in another way. The types are defined as:

ParamType = Literal[
    'import',
    'export'
]

ParamSubtype = Literal[
    'import_dataframe',
    'file_name_export_excel',
    'file_name_export_csv',
    'file_name_import_excel',
    'file_name_import_csv',
    'all' # This represents all of the above
]
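The subtype tells you which kind of input to collect when rerunning. As a sketch (this helper is an assumption that mirrors the Example Usage later on this page, not part of the Mito API, though the subtype strings are the ones defined above):

```python
# Hypothetical helper mapping a ParamSubtype to the kind of dashboard input
# it needs when rerunning an analysis
def widget_for(subtype: str) -> str:
    if subtype in ('file_name_import_csv', 'file_name_import_excel'):
        return 'file_uploader'  # let the user upload a replacement file
    if subtype in ('file_name_export_csv', 'file_name_export_excel'):
        return 'text_input'     # let the user type a new export path
    return 'none'               # e.g. import_dataframe, passed positionally
```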

Example Usage

You could use it to display file uploaders for each import in the analysis:

import streamlit as st
from mitosheet.streamlit.v1 import spreadsheet

# Set the streamlit page to wide so you can see the whole spreadsheet
st.set_page_config(layout="wide")

# Create the spreadsheet with return type 'analysis'
analysis = spreadsheet(import_folder='datasets', return_type='analysis')

# Get all of the imports parameters for that analysis
import_params = analysis.get_param_metadata('import')

# Use the parameter metadata to display the params
for param in import_params:
    st.file_uploader(param['name'])

run(*args, **kwargs)

This is the function to call to rerun your analysis with new data. It's designed to let you override the original value of each parameter. However, for imports that were passed as a positional argument to the spreadsheet function, you must pass a value to this function.

The name value in the ParamMetadata should be used as the keyword for that parameter. For example:

import streamlit as st
from mitosheet.streamlit.v1 import spreadsheet

# Set the streamlit page to wide so you can see the whole spreadsheet
st.set_page_config(layout="wide")

# Create the spreadsheet with return type 'analysis'
analysis = spreadsheet(import_folder='datasets', return_type='analysis')

# Get all of the import parameters for that analysis
import_params = analysis.get_param_metadata('import')

print(import_params[0]['name'])
# Output: file_name_import_csv_0

analysis.run(file_name_import_csv_0='/path/to/new/data.csv')

to_json and from_json

For easier storage of analyses, you can use to_json and from_json to store the analysis object. For example:

import streamlit as st
from mitosheet.streamlit.v1 import spreadsheet

# Set the streamlit page to wide so you can see the whole spreadsheet
st.set_page_config(layout="wide")

# Create the spreadsheet with return type 'analysis'
analysis = spreadsheet(return_type='analysis')

analysis_json = analysis.to_json()

# Store analysis_json somewhere. Note that it should be stored securely, as it
# may contain code that edits private data
#############################

# Then, load an analysis from a file:
analysis_file_contents = <load analysis json here>

new_analysis_from_file = RunnableAnalysis.from_json(analysis_file_contents)
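As a sketch of one storage option (assuming to_json returns a plain string; the JSON contents below are a hypothetical stand-in, not real Mito output), you can round-trip the serialized analysis through a file:

```python
import tempfile
from pathlib import Path

# Stand-in for analysis.to_json(); real contents are generated by Mito
analysis_json = '{"analysisName": "example"}'  # hypothetical placeholder

# Write the serialized analysis to disk...
path = Path(tempfile.mkdtemp()) / 'analysis.json'
path.write_text(analysis_json)

# ...and read it back later; RunnableAnalysis.from_json(restored_json)
# would then rebuild the analysis
restored_json = path.read_text()
```

Remember the note above: store this file securely, since it may contain code that edits private data.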

Example Usage

This is an example of using the RunnableAnalysis class from start to finish, including gathering new values for each parameter and creating a button to re-run the analysis on that new data.

If you want to run this code locally, make sure you have a folder called 'datasets' containing the data you want to use, in the directory you start streamlit from.

If you use the Mitosheet to import data from the datasets directory, those imports will appear in the dashboard! Configuring them reruns the analysis on new data.

import streamlit as st
import pandas as pd 
from mitosheet.streamlit.v1 import spreadsheet

# Set the streamlit page to wide so you can see the whole spreadsheet
st.set_page_config(layout="wide")

# Create an empty spreadsheet that can import from the datasets folder
analysis = spreadsheet(
    import_folder='datasets',
    return_type='analysis'
)

# Create an object to store the new values for the parameters
updated_metadata = {}

# Loop through the parameters in the analysis to display imports
for idx, param in enumerate(analysis.get_param_metadata()):
    new_param = None

    # For parameters that are exports, display a text input
    if param['subtype'] in ['file_name_export_excel', 'file_name_export_csv']:
        new_param = st.text_input(param['name'], value=param['original_value'], key=idx)
        
    # For parameters that are file imports, display a file uploader
    elif param['subtype'] in ['file_name_import_excel', 'file_name_import_csv']:
        new_param = st.file_uploader(param['name'], key=idx)
    
    if new_param is not None:
        updated_metadata[param['name']] = new_param

# Show a button to trigger re-running the analysis with the updated_metadata
run = st.button('Run')
if run:
    result = analysis.run(**updated_metadata)
    st.write(result)