
This experiment was done on a Minisforum MS-01 with an Intel Core i5-12600, using:

  • 2x Intel Optane M10 16GB, one connected via Thunderbolt, the other via M.2 Gen3 x2 (~1.175M IOPS each)
  • 1x Intel Optane M10 32GB, connected via M.2 Gen4 x4 (~1.3M IOPS)
  • 1x Intel Optane M10 64GB, connected via a PCIe port riser (~1.45M IOPS)
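Assuming the per-device figures above are reached independently, the aggregate ceiling of the setup can be estimated with a quick sum (the ~5.1M total is my own arithmetic from the listed numbers, not a measured result):

```python
# Per-device IOPS ceilings from the list above, in millions:
# 2x 16GB Optane M10, 1x 32GB, 1x 64GB.
devices = [1.175, 1.175, 1.3, 1.45]

# Naive aggregate: assumes every device can saturate at the same time.
total_miops = sum(devices)
print(f"aggregate ceiling: ~{total_miops:.2f}M IOPS")  # ~5.10M IOPS
```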

Notes:

  • The CPU has performance and efficiency cores
  • Core 15 is an efficiency core; it cannot go beyond 3.3 GHz, whereas the others can reach 4.5 GHz
  • bdevperf is running on core 15
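Running the benchmark on a single known core, as bdevperf is pinned to core 15 here, comes down to setting CPU affinity (SPDK applications take a core mask on the command line; the snippet below is a generic Linux sketch, not the bdevperf mechanism itself, and uses core 0 so it runs on any machine):

```python
import os

def pin_to_core(core: int) -> set[int]:
    """Pin the current process to a single CPU core and return the
    resulting affinity set (Linux only)."""
    os.sched_setaffinity(0, {core})  # 0 = the current process
    return os.sched_getaffinity(0)

# On the MS-01 above this would be core 15 (the efficiency core);
# core 0 is used here so the sketch runs anywhere.
print(pin_to_core(0))
```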

Observations:

  • Frequency scaling bumps the efficiency core up to 3.3 GHz
  • The setup is bottlenecked by not having enough devices, not by the CPU; it can still do more IOPS

Future Work:

  • Add core-isolation (no interrupts on IO-cores), for reduced variation
  • There is room to add another ~1.17M IOPS M.2 device via Thunderbolt; I don't expect that to hit the CPU wall, but it might
  • Experiment with downclocking, scaling down to 1.0, 1.5, 2.0, 2.4, and 2.7 GHz. Since the CPU is not the bottleneck on this system at 3.3 GHz, this can simulate having a slower CPU. It is especially interesting to see what is possible in the 1-2 GHz range, since that is the speed most GPUs are clocked at, depending on boost/sustained load etc.
  • Also, pinning the frequency to match high-core-count CPUs like the EPYCs would provide insight into how performance scales out across multiple cores
  • This should also be redone without any form of turbo boost, since turbo gives misleading numbers when scaling out; e.g. 1 core at 4.5 GHz will probably do much better than multiple cores at 2 GHz
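Assuming per-IO CPU cost stays constant, IOPS would scale roughly linearly with clock once the CPU is the bottleneck; that is exactly the assumption the downclocking runs would test, not a measured result, but it gives a first-order projection for the planned frequencies:

```python
def scaled_iops(measured_miops: float, measured_ghz: float, target_ghz: float) -> float:
    """Linear-scaling estimate of CPU-bound IOPS at a different clock.

    Assumes per-IO CPU cost is constant, so throughput scales with
    frequency; real results will deviate (e.g. memory latency does
    not scale with core clock).
    """
    return measured_miops * (target_ghz / measured_ghz)

# Hypothetical baseline: ~5.1M IOPS at 3.3 GHz, projected down to
# the clocks listed above, including the GPU-like 1-2 GHz range.
for ghz in (1.0, 1.5, 2.0, 2.4, 2.7):
    print(f"{ghz:.1f} GHz -> ~{scaled_iops(5.1, 3.3, ghz):.2f}M IOPS")
```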
