Lenovo has introduced the ThinkEdge SE100, an entry-level AI inferencing server designed to make edge AI affordable for enterprises as well as small and medium-sized businesses.
AI systems are not usually associated with being small and compact; they are typically large, fully loaded servers packed with memory, GPUs, and CPUs. But the SE100 is built for inferencing, the less compute-intensive portion of AI processing, Lenovo said. GPUs are considered overkill for inferencing, and a number of startups are making small PC cards with an inferencing chip on them instead of the more power-hungry CPU and GPU.
This design brings AI to the data rather than the other way around. Instead of sending the data to the cloud or a data center to be processed, edge computing uses devices located at the data source, reducing latency and the amount of data sent up to the cloud for processing, Lenovo said.