Cache slow operations

10x your developer speed by caching expensive function calls.
While working on a Job or a Page, you will likely run the same data fetching and computation steps repeatedly. By applying the `@db.cache` decorator to a function, the result of calling that function is cached to disk and fetched from disk on the next call with the same arguments. If the function's implementation has changed, it will run again the next time it is called. However, be aware that changes to other functions or to global variables are not detected.
```python
import hashlib

import databutton as db
import requests

@db.cache
def fetch_data(n: int) -> bytes:
    # The URL below is an illustrative placeholder; use your own endpoint
    return requests.get(f"https://example.com/data/{n}").content

@db.cache
def compute(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# The second pass through the loop is served from the cache
for _ in range(2):
    for n in (100, 1000):
        print(n, compute(fetch_data(n)))
```
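To make the caching behavior concrete, here is a minimal, hypothetical sketch of a disk cache decorator in the same spirit (the name `cache_to_disk` and the keying scheme are assumptions for illustration; unlike `@db.cache`, this sketch does not invalidate when the function's implementation changes):

```python
import functools
import hashlib
import os
import pickle
import tempfile

CACHE_DIR = tempfile.mkdtemp()

def cache_to_disk(func):
    # Hypothetical stand-in for @db.cache: results are pickled to disk,
    # keyed on the function name plus its pickled arguments.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        raw = pickle.dumps((func.__qualname__, args, sorted(kwargs.items())))
        path = os.path.join(CACHE_DIR, hashlib.sha256(raw).hexdigest())
        if os.path.exists(path):        # cache hit: load the stored result
            with open(path, "rb") as f:
                return pickle.load(f)
        result = func(*args, **kwargs)  # cache miss: compute and store
        with open(path, "wb") as f:
            pickle.dump(result, f)
        return result
    return wrapper

calls = []

@cache_to_disk
def square(n: int) -> int:
    calls.append(n)  # track how many times the body actually runs
    return n * n

print(square(4), square(4))  # both return 16
print(len(calls))            # the body ran only once
```

Note that the key is built from pickled arguments, which is why both inputs and outputs must be picklable.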
Input and output values must be of types supported by Python's `pickle` module for this to work. The cache lives on a temporary disk and is cleared whenever your data app's cloud development environment shuts down.
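As a quick check of the pickle constraint: plain data values round-trip through `pickle` and are safe to cache, while objects such as generators, open files, or sockets are not picklable:

```python
import pickle

# Plain data types round-trip through pickle and are safe to cache
for value in [42, "text", b"raw bytes", {"key": [1, 2.5]}, (True, None)]:
    assert pickle.loads(pickle.dumps(value)) == value

# A generator is one example of a value pickle cannot serialize
try:
    pickle.dumps(x * x for x in range(3))
except TypeError as err:
    print("not picklable:", err)
```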
NB! This is an experimental feature. In particular, there is no cache eviction or garbage collection yet, meaning careless use can fill up the disk in your project's cloud development environment. For now, we recommend using it only to speed up development work, and disabling it before scheduling Jobs and deploying Pages. See the Streamlit documentation to learn how to use Streamlit's native cache, `@st.cache`.