AI Inference Framework Add-on Introduction AI Inference Framework is a cloud native add-on for full lifecycle management of AI models.
Q: Does IVM Integrate AI Capabilities to Provide AI Services Externally? A: IVM can not only integrate AI capabilities, but also transparently transmit data to ISVs through APIs for closed-loop management of intelligent applications.
AI Inference Gateway Add-on With the rapid development of large language models (LLMs) and AI inference services, cloud native AI teams struggle with increasingly complex inference traffic management.
Full-Stack AI What Are the Requirements for Building an AI Cluster in the Full-Stack AI Scenario?
Kunpeng AI Inference-accelerated ECSs Kunpeng AI inference-accelerated ECSs are designed to accelerate AI inference workloads. These ECSs are equipped with Ascend AI Processors and the Ascend AI Software Stack.
Handling AI Inspection Appeals After inspectors perform AI inspections, they can review the AI inspection appeals submitted by agents and update inspection results.
Cloud Native AI Add-ons CCE AI Suite (NVIDIA GPU) CCE AI Suite (Ascend NPU)
AI Feature Functions db4ai_predict_by_bool(text, VARIADIC "any") Description: Performs inference using a model whose return value is of the Boolean type. This is an internal function; you are advised to use the PREDICT BY syntax for inference instead.
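For example, a minimal sketch of the recommended PREDICT BY usage, assuming a DB4AI-enabled database; the model name price_model, the table houses, and the columns size and price are hypothetical placeholders:
-- Train a model once, then run inference with PREDICT BY.
CREATE MODEL price_model USING logistic_regression FEATURES size TARGET price < 100000 FROM houses WITH batch_size=64;
-- Inference: returns a Boolean prediction per row.
SELECT id, PREDICT BY price_model (FEATURES size) AS prediction FROM houses;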
Supported AI Engines for ModelArts Inference If you import a model from a template or OBS to create an AI application, the following AI engines and versions are supported.
Return type: text. ai_watchdog_detection_warnings() Description: Obtains the risk alarm information of the AI watchdog. The SYSADMIN or MONADMIN permission is required.
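For example, a minimal sketch, assuming the statement is executed by a user who holds the SYSADMIN or MONADMIN permission:
-- List current AI watchdog risk alarms.
SELECT ai_watchdog_detection_warnings();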
Supported AI Engines for Inference If you import a preset image from a template or OBS to create a model, you can select the AI engines and versions in the table below.
AI Feature Functions gs_index_advise(text) Description: Recommends an index for a single query statement. Parameter: SQL statement string. Return type: record. hypopg_create_index(text) Description: Creates a virtual index.
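For example, a minimal sketch assuming a table t1 with a column c1 (both hypothetical placeholders):
-- Recommend an index for a single query statement.
SELECT * FROM gs_index_advise('SELECT * FROM t1 WHERE c1 = 100');
-- Create a virtual index that the optimizer can consider without physically building it.
SELECT * FROM hypopg_create_index('CREATE INDEX ON t1(c1)');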
Value range: N/A. Example Requests: Web AI search Q&A.
Service Deliverables (AI Platform Implementation Service). Service: AI Platform Implementation Service - Basic. Deliverables: AI Training Platform Delivery Function List, AI Inference Platform Delivery Function List, AI Platform Function Recommendation Report.
Optimizing AI Performance Practice: Improving model performance through training optimization. Parameter tuning policy: Adjust parameters such as flash attention, the parallel splitting policy, the micro-batch size, and the recomputation policy.
File from JupyterLab to a Local PC Using MindInsight Visualization Jobs in JupyterLab Using TensorBoard Visualization Jobs in JupyterLab Previewing Markdown, PDF, and Math Formula in JupyterLab
Obtaining the Preset AI Frameworks Supported by a Training Job Function This API queries the list of preset AI frameworks supported by the current system. Use it when you need to know which preset AI frameworks are available.
Release History Table 8 AI Data Acceleration Engine add-on: Add-on Version 1.0.5; Supported Cluster Versions v1.28, v1.29, v1.30, v1.31; New Feature: CCE standard and Turbo clusters support the AI Data Acceleration Engine add-on; Community Version 1.0.5.