Wednesday, April 2, 2025

The Big Book of LLM

A book by Damien Benveniste of AIEdge. Though a work in progress, chapters 2–4, which are available for preview, are fantastic.

Looking forward to a paperback edition, which I certainly hope to own...

Tuesday, April 1, 2025

Mozilla.ai

Mozilla pedigree, AI focus, open source, developer oriented.

Blueprint Hub: Mozilla.ai's hub of open-source, templatized, customizable AI solutions for developers.

Lumigator: A platform for model evaluation and selection. It consists of a Python FastAPI backend for AI lifecycle management, capturing workflow data useful for evaluation.

Friday, March 28, 2025

Streamlit

Streamlit is a web wrapper for Data Science projects in pure Python. It's a lightweight, simple, rapid prototyping web app framework for sharing scripts.
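
A minimal sketch of what a Streamlit app can look like (the file name, CSV-exploration logic and widget choices here are my own illustration, not from the Streamlit docs). Save it as app.py and run: streamlit run app.py

    import streamlit as st
    import pandas as pd

    st.title("Quick Data Explorer")
    uploaded = st.file_uploader("Upload a CSV file", type="csv")
    if uploaded is not None:
        df = pd.read_csv(uploaded)
        st.write(df.describe())                      # summary statistics rendered as a table
        column = st.selectbox("Column to chart", df.columns)
        st.line_chart(df[column])                    # chart re-renders on every widget interaction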

  • https://streamlit.io/playground
  • https://www.restack.io/docs/streamlit-knowledge-streamlit-vs-flask-vs-django
  • https://docs.streamlit.io/develop/concepts/architecture/architecture
  • https://docs.snowflake.com/en/developer-guide/streamlit/about-streamlit

Saturday, March 15, 2025

Scaling Laws

Quick notes on the Chinchilla scaling law, its limits, and what lies beyond, for deep learning and LLMs.

Factors

  • Model size (N)
  • Dataset size (D)
  • Training Cost (aka Compute) (C)
  • Test Cross-entropy loss (L)

Intuitively,

  • Larger data will need a larger model and incur a higher training cost. In other words, N, D, and C all increase together, though not necessarily linearly; the relationship could be exponential, log-linear, etc.
  • Conversely, Loss is likely to decrease for larger datasets (and larger models and compute), so there is an inverse relationship between L & D (& the rest).
  • Tying them into equations requires some constants (scaling factors, exponents such as alpha and beta), unknown for now (identified empirically later).

Beyond such common-sense intuition, theoretical foundations linking the factors aren't available right now. Perhaps the problem is by nature a hard (NP) one.

The next best thing, then, is to work out the relationships/ bounds empirically: training and measuring existing deep learning models, LLMs, etc. on datasets spanning terabytes to petabytes, with up to trillions of parameters, using compute budgets that cumulatively span years.

Papers by Hestness & Narang, Kaplan, and the Chinchilla team are all attempts along this empirical route, as are more recent papers from Mosaic, DeepSeek, Llama 3, Microsoft, and the MoE line of work, among many others.

Key takeaways:

  • The scale & bounds are getting larger over time.
  • Models from a couple of years back are found to be grossly under-trained in terms of the volume of training data used. They should have been trained on an order of magnitude more data for optimal training, without risk of overfitting.
  • Conversely, the previously used data volumes are better suited to much smaller models (SLMs), with inference capabilities similar to those of the older LLMs.
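
As a rough sketch of what those empirical fits look like: the Chinchilla-style parametric loss is L(N, D) = E + A/N^alpha + B/D^beta, and the compute-optimal split follows roughly 20 training tokens per parameter with C ≈ 6·N·D FLOPs. The constants below are the fits reported in the Chinchilla paper; the function names are my own and the whole thing is an illustration, not a recipe.

    # Illustrative sketch of the Chinchilla parametric fit and compute-optimal split
    def chinchilla_loss(N, D, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
        """Predicted test cross-entropy loss for N parameters trained on D tokens."""
        return E + A / N**alpha + B / D**beta

    def compute_optimal(C, tokens_per_param=20):
        """Roughly compute-optimal N (params) and D (tokens) for a budget of C FLOPs,
        using C ~ 6*N*D and the ~20 tokens-per-parameter rule of thumb."""
        N = (C / (6 * tokens_per_param)) ** 0.5
        return N, tokens_per_param * N

    N, D = compute_optimal(1e21)   # e.g. a ~1e21 FLOP budget
    print(f"params ~ {N:.3g}, tokens ~ {D:.3g}, predicted loss ~ {chinchilla_loss(N, D):.3f}")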

References

  • https://en.wikipedia.org/wiki/Neural_scaling_law
  • https://lifearchitect.ai/chinchilla/
  • https://medium.com/@raniahossam/chinchilla-scaling-laws-for-large-language-models-llms-40c434e4e1c1
  • https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours
  • https://medium.com/nlplanet/two-minutes-nlp-scaling-laws-for-neural-language-models-add6061aece7
  • https://lifearchitect.ai/the-sky-is-bigger/

Friday, February 28, 2025

Diffusion Models

Diffusion

  •     Forward, Backward (Learning), Sampling (Random)    
  •     Continuous Diffusion
  •     VAE, Denoising Autoencoder
  •     Markov Chains
  •     U-Net
  •     DALL-E (OpenAI), Stable Diffusion,
  •     Imagen, Muse, VEO (Google)
  •     LLaDa, Mercury Coder (Inception)

Non-equilibrium Thermodynamics

  •     Langevin dynamics
  •     Thermodynamic Equilibrium - Boltzmann Distribution
  •     Wiener Process - Multidimensional Brownian Motion
  •     Energy Based Models

Gaussian Noise

  •     Denoising
  •     Noise/ Variance Schedule
  •     Derivation by Reparameterization (see the sketch below)
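
A minimal sketch of the closed-form forward noising process behind these terms (the linear beta schedule and numpy are my assumptions here; DDPM itself doesn't mandate either):

    import numpy as np

    T = 1000
    betas = np.linspace(1e-4, 0.02, T)      # noise/ variance schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)         # cumulative product alpha_bar_t

    def q_sample(x0, t, rng=np.random.default_rng()):
        """Sample x_t ~ q(x_t | x_0) via the reparameterization trick:
        x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps, eps ~ N(0, I)."""
        eps = rng.standard_normal(x0.shape)
        return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps, eps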

Variational Inference    

  •     Denoising Diffusion Probabilistic Model (DDPM)
  •     Noise Prediction Networks    
  •     Denoising Diffusion Implicit Model (DDIM)

Loss Functions

  •     Variational Lower Bound (VLB)
  •     Evidence Lower Bound (ELBO)
  •     Kullback-Leibler divergence (KL divergence)
  •     Mean Squared Error (MSE)

Score Based Generative Model

  •     Annealing
  •     Noise conditional score network (NCSN)
  •     Equivalence: DDPM and Score Based Generative Models

Conditional (Guided) Generation

  •     Classifier Guidance    
  •     Classifier Free Guidance (CFG)

Latent Variable Generative Model

  •     Latent Diffusion Model (LDM)
  •     Lower Dimension (Latent) Space

References

  • https://en.wikipedia.org/wiki/Diffusion_model
  • https://www.assemblyai.com/blog/diffusion-models-for-machine-learning-introduction
  • https://www.ibm.com/think/topics/diffusion-models
  • https://hackernoon.com/what-is-a-diffusion-llm-and-why-does-it-matter
  • Large Language Diffusion Models (LLaDA): https://arxiv.org/abs/2502.09992



Sunday, January 26, 2025

Mechanistic Interpretability

  • A clearer, better understanding of how Neural Networks work (white box).
  • Strong grounds for Superposition: n dimensions (neurons) represent more than n features (see the sketch below).
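
A toy numerical sketch of the superposition intuition (my own illustration, not from the papers below): random unit directions in a d-dimensional space stay nearly orthogonal, so many more than d features can be packed into d neurons with limited interference.

    import numpy as np

    rng = np.random.default_rng(0)
    d, m = 64, 512                                   # 64 "neurons", 512 "features"
    W = rng.standard_normal((m, d))
    W /= np.linalg.norm(W, axis=1, keepdims=True)    # one unit direction per feature

    cos = W @ W.T
    np.fill_diagonal(cos, 0.0)
    print(f"max |cosine| between distinct feature directions: {np.abs(cos).max():.2f}")
    # Typically well below 1: far more than d features can share d dimensions with little interference.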

References

  • https://dynalist.io/d/n2ZWtnoYHrU1s4vnFSAQ519J#z=EuO4CLwSIzX7AEZA1ZOsnwwF
  • https://www.neelnanda.io/mechanistic-interpretability/glossary
  • https://transformer-circuits.pub/2022/toy_model/index.html
  • https://www.anthropic.com/research/superposition-memorization-and-double-descent
  • https://transformer-circuits.pub/2023/toy-double-descent/index.html 

Friday, January 24, 2025

State Space Models

  • Vector Space of States (of the System)
  • Alternative to Transformers; the two are reducible to one another (see the sketch below)
 
        (Image source: https://en.wikipedia.org/wiki/State-space_representation)
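
A minimal sketch of the discrete-time state-space recurrence behind these models (the matrices below are arbitrary illustrative values for a tiny single-input, single-output system):

    import numpy as np

    A = np.array([[0.9, 0.1],
                  [0.0, 0.8]])              # state transition
    B = np.array([[0.0], [1.0]])            # input matrix
    C = np.array([[1.0, 0.0]])              # output matrix
    D = np.array([[0.0]])                   # feedthrough

    x = np.zeros((2, 1))                    # hidden state
    ys = []
    for u in [1.0, 0.0, 0.0, 0.0, 0.0]:     # impulse input
        u = np.array([[u]])
        y = C @ x + D @ u                   # y_k = C x_k + D u_k
        x = A @ x + B @ u                   # x_{k+1} = A x_k + B u_k: the state carries context forward, like an RNN
        ys.append(float(y[0, 0]))
    print(ys)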

References

  • https://newsletter.maartengrootendorst.com/p/a-visual-guide-to-mamba-and-state
  • https://huggingface.co/blog/lbourdois/ssm-2022
  • https://huggingface.co/blog/lbourdois/get-on-the-ssm-train
  • https://en.wikipedia.org/wiki/State-space_representation

Monday, January 6, 2025

Spark API Categorization

A way to categorize Spark API features:

  • Flow of data is generally across the category swim lanes, from creation of a new Spark Context, to reading data using I/O, to Filter, Map/ Transform, Reduce/ Agg, etc., and finally Action.
  • Lazy processing up to Transformation.
  • Steps only get executed once an Action is invoked (see the sketch below).
  • Post Actions (Reduce, Collect, etc.) there could again be I/O, thus the reverse flow from Action.
  • Partition is a cross-cutting concern across all layers; I/O, Transformations and Actions could run across all or only a few Partitions.
  • forEach on the Stream could be either at the Transform or the Action level.

The diagram is based on code within various Spark test suites.
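
A minimal PySpark sketch of that flow (the file names and word-count logic are just placeholders): transformations only build a lineage lazily, and nothing runs until an action is invoked.

    from pyspark import SparkContext

    sc = SparkContext(master="local[2]", appName="LazyFlowDemo")   # New Spark Context
    lines = sc.textFile("data.txt")                                # I/O (lazy)
    words = lines.flatMap(lambda line: line.split())               # Transform (lazy)
    pairs = words.map(lambda w: (w, 1))                            # Transform (lazy)
    counts = pairs.reduceByKey(lambda a, b: a + b)                 # still lazy: only a lineage so far
    print(counts.collect())                                        # Action: triggers actual execution
    counts.saveAsTextFile("counts_out")                            # Post-action I/O (the reverse flow)
    sc.stop()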

Thursday, January 2, 2025

Mocked Kinesis (Localstack) with PySpark Streaming

Continuing with the same PySpark (ver 2.1.0, Python 3.5, etc.) setup explained in an earlier post. To connect to the mocked Kinesis stream on Localstack from PySpark, use the kinesis_wordcount_asl.py script located in Spark's external/ (connector/) folder.

(a) Update value of master in kinesis_wordcount_asl.py

Update the value of master (local[n], spark://localhost:7077, etc.) in the SparkContext in kinesis_wordcount_asl.py:
    sc = SparkContext(appName="PythonStreamingKinesisWordCountAsl",master="local[2]")

(b) Add aSpark compiled jars to Spark Driver/ Executor Classpath

As explained in step (III) of an earlier post, to work with Localstack a few changes were made to KinesisReceiver.scala's onStart() to explicitly set the endpoint on the Kinesis, DynamoDB and CloudWatch clients. Accordingly, the compiled aSpark jars with those modifications need to be added to the Spark Driver/ Executor classpath.

    export aSPARK_PROJ_HOME="/Download/Location/aSpark"
    export SPARK_CLASSPATH="${aSPARK_PROJ_HOME}/target/original-aSpark_1.0-2.1.0.jar:${aSPARK_PROJ_HOME}/target/scala-2.11/classes:${aSPARK_PROJ_HOME}/target/scala-2.11/jars/*"

  •  For Spark Standalone mode: "spark.executor.extraClassPath" needs to be set either in spark-defaults.conf or added as a SparkConf to the SparkContext (see (II)(a), and the sketch below).
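
A hedged sketch of the SparkConf route (it reuses the SPARK_CLASSPATH exported above; the master URL and app name are illustrative):

    import os
    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .set("spark.executor.extraClassPath", os.environ["SPARK_CLASSPATH"])
            .set("spark.driver.extraClassPath", os.environ["SPARK_CLASSPATH"]))
    sc = SparkContext(master="spark://localhost:7077",
                      appName="PythonStreamingKinesisWordCountAsl", conf=conf)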

(c) Ensure SPARK_HOME, PYSPARK_PYTHON & PYTHONPATH variables are exported.
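
For example (illustrative paths; the exact py4j zip version depends on the Spark build):

    export SPARK_HOME="/path/to/spark-2.1.0"
    export PYSPARK_PYTHON=python3.5
    export PYTHONPATH="${SPARK_HOME}/python:${SPARK_HOME}/python/lib/py4j-0.10.4-src.zip:${PYTHONPATH}"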

(d) Run kinesis_wordcount_asl

    python3.5 ${SPARK_HOME}/external/kinesis-asl/src/main/python/examples/streaming/kinesis_wordcount_asl.py SampleKinesisApplication myFirstStream http://localhost:4566/ us-east-1

    aws  --endpoint-url=http://localhost:4566 kinesis put-record --stream-name myFirstStream --partition-key 123 --data "testdata abcd"

  • Counts of the words streamed (put) will show up on the kinesis_wordcount_asl console.
 

Wednesday, January 1, 2025

Spark Streaming with Kinesis mocked on Localstack

In this post we get a Spark streaming application working with an AWS Kinesis stream, a mocked version of Kinesis running locally on Localstack. In earlier posts we explained how to get Localstack running and how to bring up various AWS services on Localstack. The client connections to the AWS services (Localstack) are made using the AWS CLI and AWS Java SDK v1.

Environment: This set-up continues on Ubuntu 20.04, with Java 8, Maven 3.6x, Docker 24.0x, Python 3.5, PySpark/ Spark 2.1.0, Localstack 3.8.1, and AWS Java SDK v1 (ver. 1.12.778).

Once the Localstack installation is done, steps to follow are:

(I) Start Localstack
    # Start locally
    localstack start

    That should get Localstack running on: http://localhost:4566

(II) Check Kinesis services from CLI on Localstack

    # List Streams
    aws --endpoint-url=http://localhost:4566 kinesis list-streams

    # Create Stream
    aws --endpoint-url=http://localhost:4566 kinesis create-stream --stream-name myFirstStream --shard-count 1

    # List Streams
    aws --endpoint-url=http://localhost:4566 kinesis list-streams

    # describe-stream-summary
    aws --endpoint-url=http://localhost:4566 kinesis describe-stream-summary --stream-name myFirstStream

    # Put Record
    aws  --endpoint-url=http://localhost:4566 kinesis put-record --stream-name myFirstStream --partition-key 123 --data "testdata abcd"
    aws  --endpoint-url=http://localhost:4566 kinesis put-record --stream-name myFirstStream --partition-key 123 --data "testdata efgh"

(III) Connect to Kinesis from Spark Streaming

    # Build
    mvn install -DskipTests=true -Dcheckstyle.skip

    # Run JavaKinesisWordCountASL with Localstack
    JavaKinesisWordCountASL SampleKinesisApplication myFirstStream http://localhost:4566/

(IV) Add Data to Localstack Kinesis & View Counts on Console
    a) Put Record from cli
    aws  --endpoint-url=http://localhost:4566 kinesis put-record --stream-name myFirstStream --partition-key 123 --data "testdata abcd"
    aws  --endpoint-url=http://localhost:4566 kinesis put-record --stream-name myFirstStream --partition-key 123 --data "testdata efgh"

    b) Alternatively, put records from a Java Kinesis application:
    Download, build & run AmazonKinesisRecordProducerSample.java
    
    c) Now check the output console of JavaKinesisWordCountASL run in step (III) above. Counts of the words streamed from Localstack Kinesis will be displayed on the console.