Predictive SoC Floorplanning Using Artificial Intelligence

What you’ll learn:

  • Pain points of the existing floorplanning process.
  • How artificial intelligence can reduce the time this process takes from weeks to just hours.
  • How the same methodology could be extended to improve other hardware design processes.

Artificial intelligence (AI) has revolutionized many markets, including manufacturing, pharmaceuticals, and aerospace, but hardware systems remain one area that has seen little major AI investment or innovation to date.

While many machine-learning (ML) applications are possible across the end-to-end lifecycle of system-on-chip (SoC) production, this article focuses on the floorplanning phase of the SoC lifecycle, one of the most time-, cost-, and human-resource-intensive steps. Specifically, we'll evaluate the effectiveness of using ML and optimization models to dramatically reduce the investment required in this phase.

Floorplanning

A semiconductor chip consists of billions of transistors. Floorplanning deals with placing these transistors, along with other necessary components such as clocks and power rails, on the die. Their locations are optimized to achieve smaller chip size, better performance, avoidance of timing violations, and easier wire routing. This crucial step in the design flow requires a gate-level netlist, constraints, a technology library, a timing library, I/O information, and more, as defined in Figure 1.

Floorplanning typically takes several weeks to complete. Machine learning can potentially perform the same task in hours, helping bring semiconductor chips to market sooner and freeing up engineers to focus on more complex work.

Machine Learning

Machine learning is a type of artificial intelligence that learns patterns and insights from data and applies that learning to make accurate predictions. Several steps in the ML process are needed for floorplan optimization.

Data collection

Inputs required for floorplanning, such as the gate-level netlist, constraints, technology library, and I/O information, are collected from silicon-proven chips.

Data pre-processing

After data collection, the steps to train an ML model begin. The first is getting the data into the right format for training, known as data pre-processing. It includes several stages, such as data filtering, data-quality checks, data transformation, normalization, and standardization.
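To make the pre-processing stages concrete, here's a minimal sketch of filtering and standardizing netlist-derived block features. The feature names and values are illustrative assumptions, not part of any real flow:

```python
# Minimal sketch of pre-processing: filtering, a simple quality check,
# and standardization of netlist-derived features (names are illustrative).

def standardize(values):
    """Scale a feature to zero mean and unit variance (z-score)."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = var ** 0.5 or 1.0          # guard against zero variance
    return [(v - mean) / std for v in values]

# Hypothetical block features extracted from a gate-level netlist:
# each row is (cell_count, estimated_area_um2).
raw_blocks = [(1200, 450.0), (80, 30.0), (560, 210.0)]

# Filter out rows that fail a basic quality check (non-positive values).
clean = [b for b in raw_blocks if all(x > 0 for x in b)]

# Standardize each feature column independently so no single feature
# dominates training purely because of its scale.
cell_counts = standardize([b[0] for b in clean])
areas = standardize([b[1] for b in clean])
```

A production pipeline would typically use a library such as pandas or scikit-learn for these steps, but the transformation itself is this simple.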

Model training

Once data preparation is complete, the next step is to train an ML model. The goal is to predict the next component to place on the chip while optimizing for power, performance, and area (PPA). Reinforcement learning can be used to achieve this goal: it takes an iterative approach, rewarding placements that improve PPA while penalizing suggestions that worsen it.
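One way such a reward signal could be shaped is as a weighted improvement over a baseline across the three PPA metrics. The metric names, baseline values, and equal weights below are assumptions for illustration only:

```python
# Illustrative reward for a placement step: better (lower) power, delay,
# and area estimates yield a higher reward; worse estimates are penalized.

def ppa_reward(power_mw, delay_ns, area_mm2,
               baseline=(100.0, 2.0, 10.0),
               weights=(1.0, 1.0, 1.0)):
    """Return a scalar reward: positive if PPA improves on the baseline."""
    reward = 0.0
    for metric, base, w in zip((power_mw, delay_ns, area_mm2), baseline, weights):
        # Relative improvement over the baseline, weighted per metric.
        reward += w * (base - metric) / base
    return reward

# A placement that improves all three metrics earns a positive reward.
good = ppa_reward(power_mw=90.0, delay_ns=1.8, area_mm2=9.5)
# A placement that worsens them is penalized with a negative reward.
bad = ppa_reward(power_mw=120.0, delay_ns=2.4, area_mm2=11.0)
```

The weights let designers trade off the three objectives, e.g., emphasizing area for a cost-sensitive chip.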

Model testing and deployment

After training, the next step is to test the model's performance on unseen chip blocks to validate the effectiveness of its predictions. If engineers verify that the results are satisfactory, the model is ready for deployment. Block placements predicted this way will be faster and more efficient to produce than with a traditional approach.

Additional optimization of block placement

The process can stop at the previous stage. However, the entire chip block placement can be optimized further using mixed-integer-programming (MIP) based techniques. The algorithm is given the objective of refining the ML-model-generated floorplan to further improve PPA while working within the specified design constraints defined in the data section.

The advantage of using MIP is its ability to generate optimized solutions for different scenarios, which helps significantly when scaling the process for faster design. A step-by-step view of the entire process is shown in Figure 2.

Algorithms

Reinforcement learning

Reinforcement learning is a type of ML in which an agent takes actions and learns through trial and error. This is achieved by rewarding actions that lead to desired behaviors while penalizing undesirable ones.

Although there are many types of reinforcement-learning algorithms, a commonly used method is Q-learning (equation defined in Fig. 3). Here, the agent starts with no policy (a reinforcement-learning policy is a mapping from the current environmental observation to a probability distribution over actions), leading to self-directed exploration of the environment.
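A tabular sketch of the Q-learning update on a toy "place the next block" task looks like the following. The states, actions, and rewards are stand-ins, not a real floorplanner:

```python
import random
from collections import defaultdict

# Tabular Q-learning sketch: the agent learns which of three candidate
# slot positions to pick for a block, purely from reward feedback.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration
ACTIONS = [0, 1, 2]                      # e.g., three candidate slot positions
Q = defaultdict(float)                   # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy: explore randomly, otherwise exploit the best-known action."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Simulated episodes: action 1 represents a good placement and is rewarded;
# the other actions are lightly penalized.
random.seed(0)
for _ in range(500):
    s = "block_0"
    a = choose_action(s)
    r = 1.0 if a == 1 else -0.1
    update(s, a, r, "block_1")
```

After these episodes, the Q-value for action 1 dominates, so the greedy policy reliably picks the rewarded placement, which is exactly the behavior the floorplanning agent needs at full scale (with far richer states and rewards).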

MIP Optimization

Mixed-integer programming is an optimization technique used to solve large, complex problems. It can minimize or maximize an objective within defined constraints.

Example of MIP objective and constraints definition:
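The toy formulation below shows what such a definition could look like for two blocks on a small grid. The block names, die size, and wirelength proxy are assumptions; a real flow would hand this formulation to a MIP solver (e.g., CBC or Gurobi), so a brute-force search stands in for the solver here:

```python
from itertools import product

# Toy mixed-integer formulation: integer (x, y) origins for two blocks,
# an objective minimizing the wire distance between block centers, and
# constraints keeping blocks on the die and non-overlapping.

DIE_W, DIE_H = 6, 6
BLOCKS = {"cpu": (3, 2), "cache": (2, 2)}   # (width, height), illustrative

def overlaps(p1, s1, p2, s2):
    """Constraint helper: True if two axis-aligned rectangles overlap."""
    return not (p1[0] + s1[0] <= p2[0] or p2[0] + s2[0] <= p1[0] or
                p1[1] + s1[1] <= p2[1] or p2[1] + s2[1] <= p1[1])

def objective(p1, s1, p2, s2):
    """Manhattan distance between block centers (a proxy for wirelength)."""
    c1 = (p1[0] + s1[0] / 2, p1[1] + s1[1] / 2)
    c2 = (p2[0] + s2[0] / 2, p2[1] + s2[1] / 2)
    return abs(c1[0] - c2[0]) + abs(c1[1] - c2[1])

(w1, h1), (w2, h2) = BLOCKS["cpu"], BLOCKS["cache"]
best = None
for x1, y1, x2, y2 in product(range(DIE_W), range(DIE_H), repeat=2):
    # Constraints: both blocks must fit inside the die outline...
    if x1 + w1 > DIE_W or y1 + h1 > DIE_H:
        continue
    if x2 + w2 > DIE_W or y2 + h2 > DIE_H:
        continue
    # ...and must not overlap each other.
    if overlaps((x1, y1), (w1, h1), (x2, y2), (w2, h2)):
        continue
    cost = objective((x1, y1), (w1, h1), (x2, y2), (w2, h2))
    if best is None or cost < best[0]:
        best = (cost, (x1, y1), (x2, y2))
```

A real chip has thousands of blocks and far richer constraints (timing, power rails, routing congestion), which is precisely why a dedicated MIP solver replaces the exhaustive search.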

Value of Optimization

Using optimization techniques to overcome process bottlenecks and create an efficient system is not an alien concept. It has been applied successfully across various industries for decades, and its impact is especially visible in supply-chain management, a market worth tens of billions of dollars.

Optimizing supply-chain management with AI ensures efficient manufacturing, distribution, and inventory placement at minimum cost. This became especially apparent during COVID, when supply chains were massively disrupted. Companies that had adopted supply-chain optimization were not only spared the harshest impacts, but many were even able to thrive. Meanwhile, companies that failed to do so suffered billions of dollars in losses and still haven't recovered.

Be wary

AI is indeed powerful, but its predictions should not be accepted blindly; they must be validated by human engineers. Feedback should be provided to ML models that output erroneous floorplans, whether they violate constraints or are simply suboptimal. With consistent feedback, the model improves over time, but the hardware industry should factor in this initial overhead.

Conclusion

There are many other pragmatic applications of AI (machine learning, deep learning, etc.) to synthesize, analyze, simulate, deploy, and launch effective solutions throughout the hardware lifecycle, with multibillion-dollar impact potential. This article has only scratched the surface by looking at one of them.

As in the software industry, hardware industry leaders should work cohesively to realize AI's full potential in this domain. As a first step, we suggest funding dedicated research in AI and hardware design to build an innovation roadmap for both the near and far future.