  1. @David Burnett

  2. @Fil Maksimovic

  3. Daniel Finell

  4. @Titan Yuan


  • Discussed potential projects with Daniel

    • Zeroing in on demonstrating an efficient fixed-point neural-network implementation that fits in 64 kB of data memory. Include the ability to speed the CPU up and slow it down as processing demand varies.

    • Can we scale the processing to accommodate less memory (cutting leakage) and lower peak power (enabling continuous, but slow, processing when connected to energy-scavenging sources)?

    • What NN task should serve as a demonstrator?

    • @Titan Yuan may have a fixed-point implementation somewhere

    • tinyML is also a good project to look at: https://tinyml.mit.edu/

  • @Titan Yuan is on his way to the ARPA-E Summit

  • @Fil Maksimovic: we need a small meeting to determine the scope of the SCuM workshop

    • Tentatively set for 9 am Pacific this Thursday