diff --git a/src/SUMMARY.md b/src/SUMMARY.md
index e3e4733c..29b8ef2e 100644
--- a/src/SUMMARY.md
+++ b/src/SUMMARY.md
@@ -37,6 +37,7 @@
  - [James Jin](design_notebooks/2024fall/gj2148.md)
  - [James Xie](design_notebooks/2024fall/hx2227.md)
  - [Joshua Cho](design_notebooks/2024fall/jsc9820.md)
+ - [Kyle Liu](design_notebooks/2024fall/kl4402.md)
  - [Noah Mays-Smith](design_notebooks/2024fall/nm4207.md)
  - [Ruichan Gao](design_notebooks/2024fall/rg4238.md)
  - [Uma Nachiappan](design_notebooks/2024fall/un2021.md)
diff --git a/src/design_notebooks/2024fall/00_toc.md b/src/design_notebooks/2024fall/00_toc.md
index 06b39b81..b9850d51 100644
--- a/src/design_notebooks/2024fall/00_toc.md
+++ b/src/design_notebooks/2024fall/00_toc.md
@@ -17,6 +17,8 @@
 
 * [Joshua Cho](jsc9820.md)
 
+* [Kyle Liu](kl4402.md)
+
 * [Noah Mays-Smith](nm4207.md)
 
 * [Ruichan Gao](rg4238.md)
diff --git a/src/design_notebooks/2024fall/kl4402.md b/src/design_notebooks/2024fall/kl4402.md
new file mode 100644
index 00000000..5a2537ed
--- /dev/null
+++ b/src/design_notebooks/2024fall/kl4402.md
@@ -0,0 +1,33 @@
+## Week of 9 September 2024
+
+Project Work:
+ * The VIP processor design team kicked off the project with a meeting to establish a shared vision. Team members discussed high-level goals, including performance targets, power consumption, and integration challenges. By the end of the week, a list of requirements had been drafted to serve as a roadmap for the project.
+
+ * To make sure no aspect was overlooked, the team reached out to stakeholders such as software engineers and system architects. These discussions clarified expectations and surfaced issues that could arise during development, and the insights were documented to refine the initial project scope.
+
+ * A timeline was drafted outlining key milestones and deliverables, with deadlines for the design, implementation, and testing phases. With this groundwork in place, the team moved into design with a structured plan.
+
+## Week of 16 September 2024
+
+ * In the second week, the team explored architectural options for the L1 instruction cache, analyzing direct-mapped, set-associative, and fully associative configurations. Each option was evaluated for its impact on access latency, implementation complexity, and power consumption.
+
+ * Brainstorming sessions were held to weigh the pros and cons of each architectural choice, and a diverse range of ideas surfaced. Ultimately, the team selected a hybrid approach that balanced performance against practical implementation constraints.
+
+ * Prototypes of the candidate configurations were modeled with simulation tools to estimate hit rates and latency, and the results drove discussion of further refinements and optimizations. A minimal sketch of this kind of trace-driven comparison appears below.
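+
+The snippet below is not the team's actual simulation tooling; it is a minimal trace-driven sketch in Python of the kind of hit-rate comparison described above. The cache capacity, line size, and synthetic instruction-fetch trace are illustrative assumptions.
+
+```python
+# Minimal trace-driven model for comparing L1 instruction-cache configurations.
+# The 32 KiB capacity, 64 B lines, and synthetic trace are illustrative only.
+import random
+
+LINE_BYTES = 64
+CACHE_BYTES = 32 * 1024
+
+def simulate(trace, ways):
+    """Return the hit rate of an LRU cache with the given associativity."""
+    num_sets = CACHE_BYTES // (LINE_BYTES * ways)
+    sets = [[] for _ in range(num_sets)]   # each set holds tags, MRU last
+    hits = 0
+    for addr in trace:
+        line = addr // LINE_BYTES
+        idx, tag = line % num_sets, line // num_sets
+        tags = sets[idx]
+        if tag in tags:
+            hits += 1
+            tags.remove(tag)     # refresh: re-appended below as most recent
+        elif len(tags) == ways:
+            tags.pop(0)          # evict the least recently used tag
+        tags.append(tag)
+    return hits / len(trace)
+
+# Synthetic fetch trace: mostly sequential, with occasional taken branches.
+random.seed(0)
+trace, pc = [], 0x1000
+for _ in range(200_000):
+    trace.append(pc)
+    pc = random.randrange(0, 1 << 20) & ~3 if random.random() < 0.02 else pc + 4
+
+for ways in (1, 4, CACHE_BYTES // LINE_BYTES):  # direct-mapped, 4-way, fully associative
+    print(f"{ways:>4}-way: hit rate = {simulate(trace, ways):.4f}")
+```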
+
+## Week of 23 September 2024
+
+ * During the third week, the team transitioned from design to implementation and began coding the cache logic. Each member was assigned specific components, such as the cache controller and the data paths, so work could proceed in parallel and progress more quickly.
+
+ * Integration of the cache with the existing processor pipeline began, with a focus on correct data flow. Initial integration tests verified that the cache operated correctly within the broader system, and the few integration issues that early testing revealed were promptly addressed, setting the stage for smoother subsequent phases.
+
+ * Regular check-ins were held to monitor progress and resolve technical challenges as they arose, so problems were identified and fixed quickly. The week closed with the foundational elements of the cache in place.
+
+## Week of 30 September 2024
+
+ * The fourth week centered on performance analysis, using simulation tools to evaluate the cache under a range of workloads. Key metrics such as hit rate and access latency were analyzed to identify bottlenecks; even small optimizations at this level can translate into noticeable processor-wide gains.
+
+ * Based on the analysis, the team adjusted the cache's replacement policy and prefetching algorithm to raise the hit rate and reduce average access time. Iterative testing showed a measurable improvement over the baseline configuration; a sketch of this kind of policy comparison appears at the end of this entry.
+
+ * As the week progressed, the team discussed what the findings implied for the overall processor design, with regular updates keeping everyone aligned on the direction of the refinements. By the end of the week, the team had a clearer, data-backed path forward.
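+
+As with the earlier sketch, the snippet below is not the team's actual tooling; it is a toy Python model showing how a replacement policy and a next-line prefetcher can be compared on the same trace. The FIFO/LRU choice, the prefetcher, and the synthetic trace are illustrative assumptions.
+
+```python
+# Toy comparison of replacement policy and next-line prefetching on a
+# 4-way, 32 KiB L1 instruction-cache model. Parameters are illustrative only.
+import random
+from collections import deque
+
+LINE_BYTES, WAYS, NUM_SETS = 64, 4, 128   # 128 sets * 4 ways * 64 B = 32 KiB
+
+def run(trace, policy="lru", prefetch=False):
+    sets = [deque() for _ in range(NUM_SETS)]
+    hits = 0
+
+    def touch(line, demand):
+        nonlocal hits
+        idx, tag = line % NUM_SETS, line // NUM_SETS
+        s = sets[idx]
+        if tag in s:
+            if demand:
+                hits += 1
+            if policy == "lru":
+                s.remove(tag)    # refresh recency on a hit
+                s.append(tag)
+            return
+        if len(s) == WAYS:
+            s.popleft()          # FIFO: oldest insertion; LRU: least recently used
+        s.append(tag)
+
+    for addr in trace:
+        line = addr // LINE_BYTES
+        touch(line, demand=True)
+        if prefetch:
+            touch(line + 1, demand=False)   # next-line prefetch, not counted as a hit
+    return hits / len(trace)
+
+# Same style of synthetic fetch trace as in the earlier sketch.
+random.seed(0)
+trace, pc = [], 0x1000
+for _ in range(200_000):
+    trace.append(pc)
+    pc = random.randrange(0, 1 << 20) & ~3 if random.random() < 0.02 else pc + 4
+
+for policy in ("fifo", "lru"):
+    for prefetch in (False, True):
+        print(f"{policy:>4}, prefetch={prefetch}: hit rate = {run(trace, policy, prefetch):.4f}")
+```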