The beauty of computational sciences lies in their power to transform abstract logic into tangible solutions, revealing patterns, simulating complexities, and driving innovation across disciplines
To get suggestions on how to configure an HEDT (High-End Desktop), do not hesitate to reach out to me at MPA@pharmakoi.com or leave a message on this blog.
Massimiliano
AutoDock-GPU is a GPU-accelerated version of AutoDock, one of the most widely used molecular docking programs. Molecular docking is a computational method used to predict how a small molecule (like a drug candidate) binds to a target protein.
AutoDock-GPU speeds up the process by parallelizing computations, allowing thousands of ligand conformations to be tested rapidly. It's vital for virtual screening, where millions of compounds may be docked in silico to find the most promising drug leads.
One of the core operations in AutoDock-GPU is computing the scoring function, which estimates how well a ligand binds to a receptor. This involves many mathematical reductions (summations across arrays/vectors of energy terms).
In the original implementation, these reductions were performed with conventional parallel-reduction techniques on the GPU's general-purpose compute cores.
Those routines were not tuned for newer GPU architectures, particularly NVIDIA's Tensor Cores, which execute fused matrix-multiply-accumulate (MMA) operations at very high throughput.
So while AutoDock-GPU was already fast, its scoring-function reductions became a weak link as GPUs with tensor-computation capabilities grew more powerful.
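To picture the baseline, here is a minimal CUDA sketch of the kind of conventional reduction such scoring code typically relies on; this is my own illustration of the general pattern, not AutoDock-GPU's actual source. Each thread accumulates part of the energy terms, and a warp-level shuffle collapses the partial sums on the regular compute cores.

```cuda
// Conventional sum reduction on general-purpose GPU cores (illustrative sketch).
#include <cuda_runtime.h>

__inline__ __device__ float warp_reduce_sum(float val) {
    // Fold the 32 lanes of a warp down to lane 0 with shuffle intrinsics.
    for (int offset = 16; offset > 0; offset >>= 1)
        val += __shfl_down_sync(0xffffffff, val, offset);
    return val;
}

__global__ void sum_energy_terms(const float *terms, float *total, int n) {
    float partial = 0.0f;
    // Grid-stride loop: each thread accumulates its share of the energy terms.
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += blockDim.x * gridDim.x)
        partial += terms[i];
    partial = warp_reduce_sum(partial);
    if ((threadIdx.x & 31) == 0)   // one atomic add per warp
        atomicAdd(total, partial);
}
```

This style of reduction works well, but it keeps the Tensor Cores completely idle.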
The researchers, Gabin Schieffer and Ivy Peng, introduced a new way to perform the sum reduction over 4-element float vectors by translating it into a matrix multiplication that Tensor Cores can execute extremely quickly. In short, they:

- Reformulated the reduction as a matrix operation compatible with NVIDIA's Tensor Core acceleration hardware.
- Integrated this optimized reduction back into the AutoDock-GPU codebase.
This is clever because Tensor Cores are typically used for deep learning operations (e.g., matrix-heavy tasks in neural networks). Using them to accelerate classical computational chemistry workflows is innovative and non-trivial.
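To make the mapping concrete, here is a minimal CUDA sketch (again my own illustration, not the authors' code) of how row-wise sums can be expressed as a single Tensor Core matrix multiply with the WMMA API: the energy terms are packed, zero-padded, into a 16×16 half-precision tile A, and multiplying A by a matrix of ones produces every row sum in one MMA instruction. The actual paper has to handle precision and data layout far more carefully; this only shows the idea.

```cuda
// Illustrative only: reduce each row of a 16x16 tile with one Tensor Core MMA.
// Requires an NVIDIA GPU with compute capability >= 7.0 (compile with -arch=sm_70).
#include <mma.h>
#include <cuda_fp16.h>

using namespace nvcuda;

// Launch with a single warp, e.g. row_sums_tensor_core<<<1, 32>>>(dA, dC);
__global__ void row_sums_tensor_core(const half *A, float *C) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> ones_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);
    wmma::fill_fragment(ones_frag, __float2half(1.0f)); // B = matrix of ones
    wmma::load_matrix_sync(a_frag, A, 16);              // A holds the (zero-padded) energy terms
    wmma::mma_sync(c_frag, a_frag, ones_frag, c_frag);  // C[i][j] = sum_k A[i][k] * 1
    wmma::store_matrix_sync(C, c_frag, 16, wmma::mem_row_major);
    // Every element of row i in C now equals the sum of row i of A,
    // so C[i*16] is the reduced value for the i-th packed vector.
}
```

The design point is that one MMA instruction replaces an entire loop of shuffle-and-add steps, which is exactly the trade the authors exploit.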
The researchers tested the modified AutoDock-GPU with the new reduction method on a variety of chemical complexes across three GPU models:

- Performance of the reduction operation itself improved by a factor of 4× to 7×.
- Overall docking time improved by 27% on average, which is substantial given that docking is the core loop of virtual screening.
This optimization makes the whole drug discovery pipeline significantly faster, especially when screening thousands to millions of compounds.
Faster Drug Discovery: Time is critical in drug development (think of pandemic response). A 27% reduction in docking time can shave weeks off a months-long virtual-screening campaign, and the savings compound every time the screen is repeated.
Efficient GPU Utilization: Maximizing the use of GPU capabilities (like Tensor Cores) means you get more performance without additional hardware investment.
Cross-disciplinary Innovation: This work is a beautiful example of cross-pollination between AI hardware and computational chemistry, pushing the limits of both.
Feature | Description |
---|---|
Problem | AutoDock-GPU's scoring function reduction was not optimized for modern GPU hardware |
Solution | Reformulate 4-element vector reductions using Tensor Core-friendly matrix operations |
Technology | Used NVIDIA Tensor Cores (originally designed for AI) to accelerate docking |
Results | 4–7× speedup on reduction, 27% overall docking time improvement |
Impact | Faster and more efficient virtual screening in drug discovery workflows |
Perugia, April 10th, 2025
Zen 4 and Zen 5 architectures use TSMC’s advanced 5nm and 4nm nodes.
AMD continues to lead in multi-core efficiency and performance per watt.
The chiplet design allows AMD to scale performance well across product lines (Ryzen 5 to Ryzen 9 and Threadripper).
Intel’s 13th Gen (Raptor Lake) and 14th Gen (Raptor Lake Refresh) desktop chips, along with the Core Ultra (Meteor Lake) mobile chips, use a hybrid architecture with Performance (P) and Efficiency (E) cores.
Intel is transitioning to Intel 4 and Intel 3 nodes (7nm-class), improving efficiency and integrated GPU power.
Integrated Foveros 3D stacking in Meteor Lake improves on-chip communication and modularity.
🆚 Verdict: AMD leads in node maturity and thermal efficiency, while Intel pushes boundaries with hybrid and 3D chip designs.
Intel Core i9-14900K remains the king of high-FPS gaming, especially in titles optimized for high clock speeds and fewer threads.
Ryzen 7 7800X3D is the gaming darling for eSports and AAA titles thanks to its massive L3 cache via 3D V-Cache.
AMD's Ryzen 9 7950X and Threadripper CPUs dominate in content creation, video rendering, and multithreaded tasks.
Intel's chips hold their ground with higher clock speeds, making them great for single-threaded workloads and certain DAW/audio tasks.
🆚 Verdict: AMD wins in productivity-heavy and multithreaded environments, while Intel still shines in raw gaming and single-core scenarios.
AMD Ryzen 7000 and 8000 series CPUs show excellent performance-per-watt, often requiring less cooling and drawing less power under load.
Intel’s 13th/14th Gen CPUs are more power-hungry, especially under full load, which can lead to higher thermal output and the need for beefier cooling solutions.
🆚 Verdict: AMD offers better efficiency and cooler operation, making them ideal for compact or silent builds.
The AM5 socket supports DDR5 and PCIe 5.0, and AMD has committed to supporting AM5 until at least 2026.
Great for future upgrades without replacing your motherboard.
Intel’s LGA 1700 ends with 14th Gen; Arrow Lake (the Core Ultra 200 desktop series) moves to LGA 1851, meaning a platform switch is required.
Intel is faster with new features, but less stable in long-term socket compatibility.
🆚 Verdict: AMD wins in long-term upgradeability; Intel offers cutting-edge features at the cost of platform churn.
Intel’s Core Ultra (Meteor Lake) CPUs include powerful Arc iGPUs and neural processing units (NPUs) optimized for AI tasks and video processing.
AMD’s Ryzen 8000 APUs with RDNA 3 iGPUs also bring solid integrated graphics, with AI capabilities expanding in the Ryzen AI series.
🆚 Verdict: Intel takes the edge in AI workloads and iGPU performance, but AMD is closing the gap.
AMD often offers better value at the mid-range (Ryzen 5 and 7), especially for multitasking and light gaming builds.
Intel still aggressively prices its chips, especially in entry-level Core i5 models, which perform well for budget-conscious gamers.
🆚 Verdict: AMD leads in overall value and efficiency; Intel counters with aggressive pricing and high-end gaming chops.
Use Case | Recommended CPU Family |
---|---|
High-End Gaming | Intel Core i7/i9 (14th Gen) |
Content Creation / Productivity | AMD Ryzen 9 / Threadripper |
Budget Builds | AMD Ryzen 5 or Intel Core i5 |
Future Upgrade Path | AMD AM5 platform |
AI / Multimedia | Intel Core Ultra (Meteor Lake) |
Ultimately, the best CPU depends on your specific needs—gaming, content creation, power efficiency, or future upgrade paths. As of 2025, AMD remains a dominant force in multithreading and efficiency, while Intel maintains leadership in gaming and AI integration.
Perugia, April 9th, 2025
AutoDock Vina: A Comprehensive Overview
AutoDock Vina is a widely used molecular docking software designed for predicting the binding affinity and binding poses of small molecules (ligands) with target proteins (receptors). It is an improved version of the original AutoDock software and is known for its enhanced accuracy and significantly faster performance.
AutoDock Vina is particularly popular in the fields of drug discovery, computational chemistry, and structural biology. It is open-source and developed by The Scripps Research Institute.
High Speed and Accuracy: Vina is substantially faster than AutoDock 4 while matching or improving the accuracy of binding-mode prediction.
Simple and Automated Workflow: grid maps and most search parameters are handled internally, so the user only needs to supply the receptor, the ligand, and the search box.
Flexible Ligand and Receptor Docking: ligands are treated as fully flexible, and selected receptor side chains can also be made flexible.
Multi-Core CPU Support: the conformational search is parallelized across CPU cores, cutting run times on modern workstations.
Energy-Based Scoring Function: an empirical scoring function estimates the binding affinity of each pose in kcal/mol.
Wide Compatibility: it uses the PDBQT format and integrates with tools such as AutoDockTools and Open Babel on Linux, macOS, and Windows.
AutoDock Vina performs molecular docking by following these steps:
Protein and Ligand Preparation: the receptor and ligand are converted to PDBQT format (hydrogens added, charges and rotatable bonds assigned), typically with AutoDockTools or Open Babel.
Defining the Search Space (Grid Box): the user specifies the center coordinates and dimensions of a box enclosing the presumed binding site.
Docking Process: Vina samples ligand conformations and orientations inside the box and scores each pose with its scoring function.
Result Analysis: the output PDBQT file contains the ranked poses with predicted binding affinities (kcal/mol), ready for inspection in viewers such as PyMOL or UCSF Chimera.
Drug Discovery: identifying and optimizing lead compounds against protein targets.
Enzyme Inhibitor Design: predicting how candidate inhibitors occupy and block an enzyme's active site.
Protein-Ligand Interaction Studies: exploring binding modes and the key contacts that stabilize a complex.
Virtual Screening: docking large compound libraries to prioritize the most promising candidates for experimental testing.
Feature | AutoDock Vina | AutoDock 4 |
---|---|---|
Speed | Faster | Slower |
Scoring Function | Empirical | Semi-empirical force field |
Ease of Use | Easier | More complex |
Multi-threading | Yes | No |
Flexible Receptor | Limited | More control |
vina --receptor protein.pdbqt --ligand ligand.pdbqt --center_x 10 --center_y 20 --center_z 15 --size_x 20 --size_y 20 --size_z 20 --out output.pdbqt
This command specifies the receptor and ligand input files (in PDBQT format), the center of the search box (center_x, center_y, center_z) and its dimensions in Ångströms (size_x, size_y, size_z), and the output file where the docked poses will be written.
AutoDock Vina is a powerful, free, and efficient docking tool widely used in computational drug discovery. Its ease of use, speed, and improved scoring function make it a preferred choice over AutoDock 4 for many researchers.
To download it, visit:
https://vina.scripps.edu/
To get a consultancy on your new docking project, please contact me at MPA@pharmakoi.com
Enjoy!!
Mass
The Observer Corner:
Today we dive into the ASUS TUF Gaming B850-PLUS WIFI, which in my personal opinion offers one of the best price/performance ratios among current motherboards.
The ASUS TUF Gaming B850-PLUS WIFI motherboard is an ATX board designed for AMD Ryzen 9000, 8000, and 7000 series processors. It features PCIe 5.0 x16 support, Wi-Fi 7, and Realtek 2.5Gb Ethernet, making it ideal for gaming and high-performance computing.
This motherboard includes ASUS TUF PROTECTION, Q-Design features for easy installation, and Aura Sync RGB headers for customization.
Take a look at the link below for more details:
https://dlcdnets.asus.com/pub/ASUS/mb/SocketAM5/TUF_GAMING_B850-PLUS_WIFI/E25809_TUF_GAMING_B850-PLUS_WIFI_UM_V2_WEB.pdf?model=TUF%20GAMING%20B850-PLUS%20WIFI
Enjoy!!
Massimiliano
Perugia, March 15th, 2025
Perugia, March 9th, 2025
The latest trends in GPU technology for fluid simulation highlight significant advancements in performance, scalability, and cost efficiency.
GPU Acceleration in Computational Fluid Dynamics (CFD)
GPUs are now an essential tool in CFD, drastically reducing simulation times. Tasks that once took an entire day on CPU servers can now be completed in just over an hour using multiple high-performance GPUs. This acceleration benefits industries such as aerospace, automotive, and pharmaceuticals, where fluid dynamics simulations play a critical role in research and development.
Scalability and Multi-GPU Configurations
Multi-GPU setups are becoming more prevalent, offering improved computational power and efficiency. FluidX3D, for example, has demonstrated a system combining Intel and NVIDIA GPUs to maximize performance while keeping costs lower than high-end single-GPU solutions. The ability to integrate GPUs from different vendors allows for more flexible and cost-effective simulation environments.
Optimized GPU Selection for Specific Workloads
Choosing the right GPU depends on the simulation requirements. Consumer-grade GPUs like the RTX 4090 are excellent for single-precision workloads, providing high performance at a lower cost. On the other hand, enterprise GPUs such as the NVIDIA H100 and A100 excel in handling double-precision and memory-intensive tasks, making them more suitable for large-scale and highly detailed simulations.
Cloud and Hybrid Deployments
Many CFD software providers, including industry leaders like Ansys and Siemens, are optimizing their tools for GPU acceleration in both on-premise and cloud-based environments. Cloud solutions powered by high-performance GPUs enable scalable, on-demand simulations, reducing infrastructure costs and increasing accessibility for researchers and engineers.
Expansion of Competition in High-Performance CFD
AMD is making strides in the high-performance computing space with its Instinct MI300X GPU, which is specifically designed to handle computationally heavy simulations. This competition provides more options for researchers and engineers, challenging NVIDIA’s dominance in the field and fostering further innovation.
Overall, GPUs are transforming fluid simulation by making it faster, more efficient, and more scalable. With continued advancements in hardware and software optimization, the future of CFD looks increasingly driven by high-performance GPU computing.
Interested in a custom-built workstation?
Send your inquiry to MPA@pharmakoi.com indicating the overall performance you are looking for (TFLOPS, etc.) and you will receive a free quote for a proposed configuration.
Abstract: The rapid expansion of chemical and pharmaceutical literature presents both an opportunity and a challenge: while vast amounts of...