More information will be added later.
April 14, morning. Meeting room: Multidisciplinary Building 124
https://meeting.tencent.com/dm/haRz08kM4rJ7
#Tencent Meeting: 569-2881-3028
April 14, afternoon. Meeting room: Multidisciplinary Building 122
https://meeting.tencent.com/dm/haRz08kM4rJ7
#Tencent Meeting: 569-2881-3028
April 15, morning, session 1. Meeting room: Multidisciplinary Building 122
https://meeting.tencent.com/dm/sFBp0M90Mz9C
#Tencent Meeting: 717-346-245
April 15, morning, session 2. Meeting room: Multidisciplinary Building 228
https://meeting.tencent.com/dm/HDS2NKxYI0Gj
#Tencent Meeting: 511-900-966
April 15, afternoon. Meeting room: Multidisciplinary Building 124
https://meeting.tencent.com/dm/haRz08kM4rJ7
#Tencent Meeting: 569-2881-3028
April 16, morning, session 1. Meeting room: Multidisciplinary Building 122
https://meeting.tencent.com/dm/tXHCYc42y0Nc
#Tencent Meeting: 634-362-574
April 16, morning, session 2. Meeting room: Multidisciplinary Building 228
https://meeting.tencent.com/dm/jwC9XBrOmDnU
#Tencent Meeting: 222-289-556
Meeting room: Multidisciplinary Building 122
https://meeting.tencent.com/dm/sFBp0M90Mz9C
Meeting room: Multidisciplinary Building 228
https://meeting.tencent.com/dm/HDS2NKxYI0Gj
The Super Tau Charm Facility (STCF), a new-generation high-luminosity collider experiment, places higher demands on the generation of large-scale Monte Carlo (MC) samples. MC simulation, especially for the electromagnetic calorimeter (ECAL), requires substantial computational resources. The traditional Geant4 approach simulates every secondary particle and interaction of the electromagnetic shower in the ECAL. With the development of machine learning, however, a generative adversarial network (GAN) can generate information such as energy deposition maps directly from the particles' injection conditions. Applying a GAN to ECAL fast simulation in the STCF experiment can significantly reduce the computational cost while maintaining high accuracy.
This talk presents the motivation and methodology of developing and optimizing a GAN for ECAL fast simulation in STCF, and compares Geant4 and the GAN in terms of simulation results and computational efficiency.
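As a toy illustration of the idea, the sketch below maps a noise vector plus injection conditions to a small energy-deposition map. Everything here is an assumption for illustration: pure NumPy, made-up layer sizes, and untrained random weights; a real generator would be trained adversarially against a discriminator on Geant4 showers.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def generator(z, cond, params):
    """Forward pass of a toy conditional generator.

    z      : (latent_dim,) noise vector
    cond   : (cond_dim,) injection conditions, e.g. [energy_GeV, theta, phi]
    params : weight matrices (random here; learned in a real GAN)
    Returns an 8x8 non-negative "energy deposition map" whose total is
    tied to the incident energy.
    """
    x = np.concatenate([z, cond])
    h = relu(params["W1"] @ x + params["b1"])
    raw = relu(params["W2"] @ h + params["b2"]).reshape(8, 8)
    # Normalise so the deposited energy sums to the incident energy
    return raw / (raw.sum() + 1e-12) * cond[0]

latent_dim, cond_dim, hidden = 16, 3, 64
params = {
    "W1": rng.normal(0, 0.1, (hidden, latent_dim + cond_dim)),
    "b1": np.zeros(hidden),
    "W2": rng.normal(0, 0.1, (64, hidden)),
    "b2": np.zeros(64),
}

cond = np.array([1.5, 0.2, 0.0])         # a 1.5 GeV photon at small angles
emap = generator(rng.normal(size=latent_dim), cond, params)
print(emap.shape, round(emap.sum(), 3))  # (8, 8) 1.5
```

One draw of `z` per shower gives the event-to-event fluctuations that a single deterministic network could not produce, which is the core reason a GAN (rather than a plain regressor) fits this task.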
Meeting room: Multidisciplinary Building 122
https://meeting.tencent.com/dm/sFBp0M90Mz9C
High-speed in-situ X-ray imaging has been widely used to detect internal defects in additive manufacturing, providing an effective means of observing defect evolution in real time during the build process. However, the technique produces massive volumes of image data; manually analysing defect evolution across an entire dataset is not only time-consuming and labour-intensive but also prone to subjective error, severely limiting the efficiency and accuracy of defect detection. To address this problem, this study applies deep learning to the intelligent detection and quantitative analysis of cracks formed during the additive manufacturing of aluminium alloys. Because cracks occupy only a tiny fraction of the frame and the data have a low signal-to-noise ratio, direct segmentation is badly disturbed by background noise. We therefore propose a two-step "detect first, then segment" scheme: the YOLO algorithm first detects cracks and determines their region of interest (ROI); the ROI is then cropped and fed into our purpose-built Crack-UNet++ model for precise crack segmentation; finally, parameters such as crack length and width are quantified from the segmentation results. The approach markedly improves the efficiency and accuracy of crack detection, overcoming the high time cost and subjectivity of manual analysis, and provides technical support for quality control and process optimisation of additively manufactured aluminium-alloy parts. The talk will cover the model construction, experimental validation, and application prospects in detail.
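The final quantification step can be sketched as follows. This is a simplified geometric proxy on an invented synthetic mask, not the study's actual measurement code; the real analysis may well use skeletonisation or other morphological measures.

```python
import numpy as np

def quantify_crack(mask, pixel_size_um=1.0):
    """Estimate crack length and mean width from a binary segmentation mask.

    Assumption: the crack runs roughly along its bounding box's long axis,
    so length = long-axis extent and mean width = area / length.
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return 0.0, 0.0
    extent_y = ys.max() - ys.min() + 1
    extent_x = xs.max() - xs.min() + 1
    length_px = max(extent_y, extent_x)
    width_px = ys.size / length_px          # pixel area / length
    return length_px * pixel_size_um, width_px * pixel_size_um

# Synthetic test case: a 3-pixel-wide horizontal crack, 40 px long
mask = np.zeros((64, 64), dtype=bool)
mask[30:33, 10:50] = True
length, width = quantify_crack(mask, pixel_size_um=2.0)
print(length, width)  # 80.0 6.0
```

In the proposed pipeline this function would run on the Crack-UNet++ output inside the YOLO-selected ROI, frame by frame, yielding a time series of crack dimensions.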
Although argyrodite-type Li₆PS₅Cl (LPSC) is regarded as an ideal solid electrolyte for all-solid-state lithium-metal batteries, its experimentally measured room-temperature ionic conductivity (~10⁻⁵ S·cm⁻¹) falls far short of theoretical expectations, and it suffers from severe interfacial instability against the lithium-metal anode, greatly limiting practical application. Aliovalent doping is an effective way to raise the bulk conductivity, but usually at the cost of interfacial stability. How to achieve high bulk conductivity and high interfacial stability simultaneously through a synergistic doping strategy is therefore a core open problem in the field. To address this challenge, this study proposes a cation-anion co-doping strategy. Using machine-learning molecular dynamics (MLP-MD) based on deep potentials (DeePMD-kit), we construct high-accuracy many-body potentials. The approach overcomes the limits that conventional first-principles calculations place on simulation time and system size, allowing us to resolve, on the nanosecond timescale, both the bulk ion-transport mechanism and the dynamic evolution of the interface, and providing key theoretical insight and a rational design paradigm for next-generation solid electrolytes combining high energy density and high safety.
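A standard post-processing step for such MLP-MD trajectories is extracting the Li-ion diffusivity from the mean-squared displacement and converting it to a conductivity via the Nernst-Einstein relation. The sketch below illustrates only that bookkeeping; the trajectory is a synthetic random walk, and the step size, carrier density, and box volume are invented placeholders, not LPSC results.

```python
import numpy as np

def ionic_conductivity(positions, dt_ps, n_carriers, volume_cm3, T=300.0):
    """Nernst-Einstein conductivity estimate from an MD trajectory.

    positions : (n_frames, n_atoms, 3) unwrapped Li coordinates in cm
    dt_ps     : time between frames in ps
    """
    kB = 1.380649e-23                               # J/K
    e = 1.602176634e-19                             # C
    disp = positions - positions[0]
    msd = (disp ** 2).sum(axis=2).mean(axis=1)      # cm^2, averaged over ions
    t = np.arange(len(msd)) * dt_ps * 1e-12         # s
    # Tracer diffusivity from the slope of MSD(t): MSD = 6 D t
    D = np.polyfit(t[1:], msd[1:], 1)[0] / 6.0      # cm^2/s
    n = n_carriers / volume_cm3                     # carriers per cm^3
    return n * e ** 2 * D / (kB * T)                # S/cm

# Synthetic random walk standing in for a Li sub-lattice trajectory
rng = np.random.default_rng(1)
steps = rng.normal(0.0, 1e-10, size=(2000, 64, 3))  # cm per 0.1 ps frame
traj = np.cumsum(steps, axis=0)
sigma = ionic_conductivity(traj, dt_ps=0.1, n_carriers=64,
                           volume_cm3=64 / 2.4e22)  # assumed Li density
print(f"{sigma:.2e} S/cm")
```

In a real analysis the slope would be fitted only in the diffusive (linear) regime of the MSD, and correlated ion motion would make the true conductivity deviate from this ideal Nernst-Einstein estimate.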
X-ray absorption spectroscopy, and in particular the near-edge structure, is highly sensitive to the local coordination environment and electronic structure of the absorbing atom, making it an important probe of local structure in complex materials. To meet the need for fast "structure-to-spectrum" prediction, this work proposes a method for local-environment representation and absorption-spectrum prediction based on machine-learned interatomic potentials.
Unlike approaches that use a graph neural network to predict spectra end-to-end directly from the atomic structure, we first use a pretrained interatomic-potential model to extract a high-dimensional representation of the local environment around the absorbing site, and then build the spectrum-prediction model on top of it, thereby encoding local geometry, near-neighbour interactions, and chemical environment more fully. Taking systems relevant to lithium-sulfur batteries as a representative application, we compare this method with a graph-neural-network baseline on the structure-to-spectrum task. The results show that, on these systems, the approach based on machine-learned interatomic-potential local representations achieves better prediction accuracy.
Overall, machine-learned interatomic potentials can serve not only as models of energies and forces but also as effective local-structure descriptors for specific systems. Looking ahead, the method is expected to be integrated into our in-house XAS3DLive online analysis platform, combining with its existing interactive 3D structure editing, fast XANES calculation, and spectrum-fitting functions to support more efficient structure-spectrum prediction and fitting.
Meeting room: Multidisciplinary Building 228
https://meeting.tencent.com/dm/HDS2NKxYI0Gj
In recent years, with the development of artificial intelligence, machine learning (ML), and graph neural networks (GNNs) in particular, has provided a new approach to track reconstruction with notable results. However, the field still faces a key bottleneck: public datasets and evaluation metrics are not yet standardised. Existing datasets are mostly designed for the high-pile-up, high-multiplicity environments of hadron colliders, which differ substantially from the low-background, low-multiplicity, high-precision requirements of $\tau$-charm experiments. Dedicated datasets and metrics for $\tau$-charm physics have therefore long been missing, severely limiting fair comparison and rapid iteration of ML methods.
To address this, this work builds a complete pipeline for dataset production and track-reconstruction performance analysis. Monte Carlo (MC) simulation data are first generated with single-particle and multi-particle generators, and a preprocessing chain constructs the ML tracking dataset; the ML track-finding results are then integrated into the BOSS framework, either by external import or by in-line inference, for subsequent track fitting; finally, a performance-analysis algorithm is developed to evaluate and visualise the reconstruction results.
Within this framework, we release an MC simulation dataset based on the Multilayer Drift Chamber (MDC) of the Beijing Spectrometer III (BESIII) at the Beijing Electron Positron Collider II (BEPCII). It contains single-track and double-track events, the latter split into ordinary and close-by double tracks, and covers different sample types, input features, multi-level label definitions, and dataset-size splits, with an official access channel provided. For standardised, comparable evaluation, a dedicated set of metrics tailored to the core needs of the tracking task is adopted, covering track efficiency with and without fitting, clone-track rate, fake-track rate, and track-parameter resolution.
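A minimal version of the hit-majority matching that typically underlies such metrics might look like this. The purity cut and matching rule below are illustrative assumptions, not the dataset's official definitions.

```python
from collections import Counter

def tracking_metrics(found_tracks, truth_ids, purity_cut=0.75):
    """Toy track-to-particle matching and metric computation.

    found_tracks : list of hit-index lists, one per found track
    truth_ids    : truth particle id for every hit index
    Returns (efficiency, clone_rate, fake_rate).
    """
    matched = []                      # truth particle matched by each track
    for hits in found_tracks:
        owner, n = Counter(truth_ids[h] for h in hits).most_common(1)[0]
        matched.append(owner if n / len(hits) >= purity_cut else None)

    real = [m for m in matched if m is not None]
    n_truth = len(set(truth_ids))
    efficiency = len(set(real)) / n_truth            # truth particles found
    clone_rate = (len(real) - len(set(real))) / len(found_tracks)
    fake_rate = matched.count(None) / len(found_tracks)
    return efficiency, clone_rate, fake_rate

# Two truth particles; tracks 0 and 1 both match particle 0 (one is a
# clone), track 2 mixes both particles and fails the purity cut (a fake).
truth = [0, 0, 0, 0, 1, 1, 1, 1]
found = [[0, 1, 2], [0, 1, 3], [2, 3, 4, 5]]
print(tracking_metrics(found, truth))  # efficiency 0.5, clone 1/3, fake 1/3
```

Fitted-track versions of the same metrics would additionally compare the fitted helix parameters against truth to obtain the parameter resolutions.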
This talk presents a machine learning–based approach for automated defect detection in CMS HGCAL assembly. A hybrid framework combining supervised object detection (YOLO) and unsupervised anomaly detection (PatchCore) is developed to identify defects such as glue leakage and abnormal wire bonding. The method achieves strong performance in controlled conditions and demonstrates the advantage of combining known-pattern recognition with anomaly detection. Challenges such as rare defects and domain shift across production batches are also discussed.
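One plausible way to combine the two branches is a simple decision-fusion rule: YOLO flags known defect classes, while the PatchCore anomaly score catches patterns absent from the labelled training set. The thresholds and routing logic below are assumptions for illustration, not the talk's exact decision logic.

```python
def fuse_decisions(yolo_dets, patchcore_score, score_thresh=0.5):
    """Fuse supervised detections with an unsupervised anomaly score.

    yolo_dets       : list of (class_name, confidence) detections
    patchcore_score : image-level anomaly score in [0, 1]
    """
    # Confidence floor for accepting a YOLO detection (illustrative value)
    known_defect = any(conf >= 0.25 for _, conf in yolo_dets)
    anomalous = patchcore_score >= score_thresh
    if known_defect:
        return "reject: known defect (e.g. glue leakage, bad wire bond)"
    if anomalous:
        return "review: unknown anomaly, route to operator"
    return "accept"

print(fuse_decisions([("glue_leakage", 0.91)], 0.8))
print(fuse_decisions([], 0.7))   # anomaly only: flagged for human review
print(fuse_decisions([], 0.1))   # clean module
```

Routing unknown anomalies to an operator rather than auto-rejecting them is one way to handle the rare-defect and batch-to-batch domain-shift issues the abstract mentions.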
Meeting room: Multidisciplinary Building 124
https://meeting.tencent.com/dm/haRz08kM4rJ7
This talk reports the latest progress in applying artificial intelligence at the China Spallation Neutron Source (CSNS). Dr.sai Rongzai Agent V1 has been released, supporting GSAS-based refinement of neutron powder-diffraction data; the framework upgrade has now been completed and the SasView single-model fitting tool integrated, with V2 to be released shortly. We have also built an intelligent neutron-data analysis platform on which our in-house AI tools are centrally deployed, so that users can log in to process and analyse experimental data online. In addition, for Bragg-edge analysis, a dedicated database has been built and the AI model reaches 97% accuracy in predicting the key parameters; the next step is to incorporate real experimental conditions and predict the distribution of residual stress inside materials.
Multidisciplinary Building 122
https://meeting.tencent.com/dm/tXHCYc42y0Nc
Modern large-scale physics facilities place unprecedented demands on system stability and fault tolerance. After the BEPCII-U upgrade, the collision energy reaches 2.35 GeV, the number of superconducting cavities doubles, and the RF system becomes far more highly integrated. Fault rates and fault-type complexity rise accordingly, and manual diagnosis based on expert experience can no longer meet the efficiency requirements.
To address these challenges, we are developing an AI-based intelligent fault-diagnosis system that enables rapid fault localisation and early warning, which is of significant engineering value for ensuring smooth operation of the experiment.
This talk presents the development status, core functions, and future plans of HEPSBot, an intelligent assistant built for the High Energy Photon Source (HEPS). As an intelligent service assistant focused on the core experimental needs of HEPS users, HEPSBot spans the full experimental workflow and covers all user scenarios, providing integrated, intelligent, end-to-end support services.
We present NRS_Agent, a multi-agent data analysis system for Nuclear Resonant Scattering (NRS) experiments, developed on the Dr. Sai framework. The system is designed to accelerate and standardize end-to-end analysis workflows for both coherent nuclear resonant scattering and nuclear resonant inelastic X-ray scattering, reducing manual effort and operator-dependent variability. Its primary application is rapid preparation and execution of reliable analysis runs, including automated generation and validation of required parameter sets. A key advantage is high-throughput, parallel preparation of multiple analysis configurations, enabling users to explore alternative physical models and fitting assumptions efficiently. The platform provides broad parameter control, improving reproducibility and facilitating consistent comparisons across samples or measurement conditions. It also supports robust conversion of detector data to analysis-ready formats, enabling faster turnaround from raw measurements to interpretable results. Integrated file inspection and quality checks promote early detection of formatting and metadata issues, minimizing failed runs and wasted beamtime. Automated visualization of analysis outputs enhances transparent reporting and iterative refinement during analysis. Overall, NRS_Agent targets time savings, scalability, and reproducible decision-making for experimentalists working with NRS datasets. The project is approaching initial user acceptance testing, and we discuss current capabilities and near-term deployment in experimental workflows.
Multidisciplinary Building 228
https://meeting.tencent.com/dm/jwC9XBrOmDnU
Variational quantum algorithms (VQAs) are a leading strategy for achieving practical utility on near-term quantum devices. However, the no-cloning theorem in quantum mechanics precludes standard backpropagation, leading to prohibitive quantum resource costs when applying VQAs to large-scale tasks. To address this challenge, we reformulate the training dynamics of VQAs as a nonlinear partial differential equation and propose a novel protocol that leverages physics-informed neural networks (PINNs) to model this dynamical system efficiently. Given a small amount of training trajectory data collected from quantum devices, our protocol predicts the parameter updates of VQAs over multiple iterations on the classical side, dramatically reducing quantum resource costs. Through systematic numerical experiments, we demonstrate that our method achieves up to a 30x speedup compared to conventional methods and reduces quantum resource costs by as much as 90% for tasks involving up to 40 qubits, including ground state preparation of different quantum systems, while maintaining competitive accuracy. Our approach complements existing techniques aimed at improving the efficiency of VQAs and further strengthens their potential for practical applications.
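To make the resource argument concrete: without backpropagation, VQA gradients are typically obtained with the parameter-shift rule, which costs two extra circuit executions per parameter per optimisation step; this is the per-iteration overhead that predicting updates classically can amortise. A minimal single-parameter sketch (the circuit is simulated analytically here; on hardware each call would consume fresh shots):

```python
import numpy as np

def expval(theta):
    """<Z> after RY(theta)|0>, a stand-in for a circuit run on hardware.

    Analytically this is cos(theta), which lets us check the gradient.
    """
    return np.cos(theta)

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    """Parameter-shift rule: the exact gradient of a circuit expectation
    from two shifted evaluations, since intermediate quantum states cannot
    be cached for backpropagation (no-cloning)."""
    return (f(theta + shift) - f(theta - shift)) / 2.0

theta = 0.7
g = parameter_shift_grad(expval, theta)
print(g, -np.sin(theta))   # the two agree: d/dtheta cos(theta) = -sin(theta)
```

For a circuit with M parameters this scales as 2M executions per gradient step, so skipping even a fraction of the quantum-side iterations translates directly into large resource savings.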
The CEPC is a proposed high luminosity e+e− collider designed for precision measurements of the Higgs, W, and Z bosons. Its reference detector incorporates a long bar crystal ECAL, which employs long, narrow crystal bars arranged in orthogonal layers to deliver fine 3D shower imaging and excellent compatibility with Particle Flow reconstruction. [1]
For CEPC physics analyses, large volumes of simulated data are essential. Calorimeter simulation is by far the most CPU intensive component of the CEPC detector simulation, accounting for roughly 80% of the total simulation budget. Consequently, the development of fast simulation techniques is a critical R&D priority.
Our work is inspired by CERN’s CaloDiT-2 [2], which develops a fast simulation framework based on the Diffusion Transformer (DiT). Our implementation, named Voxel Diffusion Transformer for Calorimeter (VoDiT4CAL), is built using PyTorch [3] and Lightning [4]. Building on the design principles of CaloDiT-2, VoDiT4CAL introduces two key enhancements:
Enhanced Local Spatial Modelling: VoDiT4CAL incorporates a PixelDiT [5] layer to better capture local spatial correlations, which reduces the DiT depth and significantly lowers computational cost.
Enhanced Energy Modelling: VoDiT4CAL adds an energy-prediction head and dynamically redistributes energy across voxels.
Testing shows that VoDiT4CAL accurately reproduces key photon shower distributions across incident energies from 0.25 GeV to 100 GeV, meeting CEPC physics precision requirements. This contribution also presents a detailed report on distillation (for accelerating inference) [6], its impact on physics performance, and the practical speedup achieved after integrating VoDiT4CAL into the official CEPC software framework.
[1] Souvik Priyam Adhya et al. "CEPC Technical Design Report - Reference Detector". In: (Oct. 2025). arXiv: 2510.05260 [hep-ex].
[2] Piyush Raikwar et al. "A Generalisable Generative Model for Multi-Detector Calorimeter Simulation". In: (Sept. 2025). arXiv: 2509.07700 [physics.ins-det].
[3] Adam Paszke et al. "PyTorch: An Imperative Style, High-Performance Deep Learning Library". In: Advances in Neural Information Processing Systems 32. Curran Associates, Inc., 2019, pp. 8024-8035. URL: http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf.
[4] William Falcon and the PyTorch Lightning team. PyTorch Lightning. 2024. DOI: 10.5281/zenodo.13254264. URL: https://doi.org/10.5281/zenodo.13254264.
[5] Yongsheng Yu et al. "PixelDiT: Pixel Diffusion Transformers for Image Generation". In: arXiv preprint arXiv:2511.20645 (2025).
[6] Kaiwen Zheng et al. "Large Scale Diffusion Distillation via Score-Regularized Continuous-Time Consistency". In: arXiv abs/2510.08431 (2025). URL: https://api.semanticscholar.org/CorpusID:281950486.
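The energy-modelling enhancement can be illustrated with a toy version of an energy-prediction head with dynamic redistribution. The names, the softmax normalisation, and the grid size below are assumptions for illustration, not VoDiT4CAL's actual architecture: the network outputs per-voxel logits plus a scalar total energy, and the final shower is the normalised shape scaled to that total.

```python
import numpy as np

def redistribute_energy(voxel_logits, predicted_total_gev):
    """Scale a softmax-normalised voxel shape to a predicted total energy.

    Decoupling "how much energy" (a scalar head) from "where it goes"
    (the per-voxel shape) guarantees the generated shower sums exactly
    to the predicted total.
    """
    z = voxel_logits - voxel_logits.max()   # numerically stable softmax
    shape = np.exp(z) / np.exp(z).sum()
    return shape * predicted_total_gev

rng = np.random.default_rng(3)
logits = rng.normal(size=(5, 5, 5))                 # toy 5x5x5 voxel grid
shower = redistribute_energy(logits, predicted_total_gev=20.0)
print(shower.shape, round(shower.sum(), 6))         # (5, 5, 5) 20.0
```

Whatever the exact head design, constraining the voxel sum this way targets the total-energy response directly, which diffusion models trained purely on per-voxel losses tend to get only approximately right.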
Multidisciplinary Building 122
https://meeting.tencent.com/dm/tXHCYc42y0Nc