Your NeuroGC experiment, which uses an LSTM to predict and manage garbage-collection timing from high-level process metrics, is quite interesting. Here's a summary and some insights from your findings:
Summary
- Objective: Learn application behavior and influence GC timing using only high-level process metrics.
- Approach:
  - Trained an LSTM model on datasets generated by load testing with Locust under both low and high loads.
  - Evaluated the system running with and without NeuroGC (i.e., with and without predictions influencing garbage collection).
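The article does not include the training pipeline itself, but the step of turning sampled process metrics into LSTM training examples can be sketched roughly as below. The metric fields and the `WINDOW` size are illustrative assumptions, not taken from NeuroGC's actual code:

```python
# Sketch: turn a stream of high-level process metrics into (window, target)
# pairs for sequence-model training. The feature layout and window size are
# hypothetical; the original NeuroGC feature set is not shown in the article.
WINDOW = 8  # number of past samples the model sees per prediction

def make_windows(samples, window=WINDOW):
    """Slide a fixed-size window over metric samples.

    samples: list of feature vectors, e.g. [cpu_pct, rss_mb, disk_io, net_io]
    returns: list of (input_window, next_sample) training pairs
    """
    return [
        (samples[i : i + window], samples[i + window])
        for i in range(len(samples) - window)
    ]

# Example with synthetic metrics: 12 samples of 4 features each
stream = [[float(t), 100.0 + t, t * 2.0, t * 3.0] for t in range(12)]
pairs = make_windows(stream)
```

Each pair asks the model to predict the next metric sample from the previous eight, which is the usual supervised framing for this kind of sequence model.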
Key Findings
- Light Load:
  - Training: `locust -f locustfile.py --headless -u 100 -r 10 --run-time 1m`
  - Evaluation: Same as training.
  - Results showed some reduction in disk and network pressure under medium load, indicating that the model learned to influence GC timing effectively.
- High Load:
  - Training: `locust -f locustfile.py --headless -u 500 -r 10 --run-time 1m`
  - Evaluation: Same as training.
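The summary does not show how the predictions actually influence the collector. One plausible mechanism in CPython, sketched here purely as an assumption, is to disable automatic collection and trigger `gc.collect()` only when the model forecasts a quiet period; `predict_quiet` below is a hypothetical stand-in for the LSTM's output, not NeuroGC's real interface:

```python
import gc
import time

def run_gc_controller(predict_quiet, steps, check_interval=0.0):
    """Gate collection timing on a model signal.

    predict_quiet: callable returning True when the model expects a lull
                   (hypothetical stand-in for the LSTM prediction)
    steps: number of control-loop iterations to run
    """
    collections = 0
    gc.disable()  # take over timing from the automatic collector
    try:
        for _ in range(steps):
            if predict_quiet():
                gc.collect()  # collect only during predicted lulls
                collections += 1
            time.sleep(check_interval)
    finally:
        gc.enable()  # always restore automatic collection on exit
    return collections

# Example: a trivially "always quiet" predictor collects once per step
done = run_gc_controller(lambda: True, steps=3)
```

Shifting collections into predicted lulls is consistent with the observed reduction in disk and network pressure, since GC pauses then compete less with I/O-heavy request handling.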
Read the full article at Towards AI - Medium