Back in November, AWS announced the $249 DeepLens at its re:Invent conference, a camera designed specifically for developers who want to build and prototype vision-centric machine learning models. The company started taking pre-orders for the DeepLens a few months ago, but now the camera is actually shipping to developers.
The DeepLens is essentially a small Ubuntu- and Intel Atom-based computer with a built-in camera that is powerful enough to run and evaluate visual machine learning models. The device reportedly delivers around 160 GFLOPS of performance.
The hardware has all of the usual I/O ports (think USB 2.0, micro HDMI and audio out) to let developers build prototype applications, no matter whether they are toy apps or industrial applications. The 4-megapixel camera isn't anything to write home about, but it is perfectly adequate for most use cases. DeepLens also integrates deeply with the rest of AWS's services.
Those integrations are also what make it relatively easy to get started with the camera. If all a developer wants to do is run one of the pre-built samples that AWS provides, it should take less than ten minutes to set up the DeepLens and deploy one of these models to the camera.
The project templates reportedly include an object detection model that can distinguish between twenty objects, a style transfer example that renders the camera image in the style of van Gogh, a face detection model, a model that can distinguish between cats and dogs, and one that can recognize about thirty different actions.
These samples, however, are just the starting point. According to the DeepLens team, developers who are new to machine learning can easily take these pre-built templates and extend them. That's because a DeepLens project typically consists of two parts: the model itself, and a Lambda function that runs instances of the model and lets the developer act on the model's output.
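To make that two-part structure concrete, here is a minimal sketch of how such a project fits together. This is not the actual DeepLens SDK; all function names here are hypothetical stand-ins, and on the real device the frame capture and inference calls come from AWS's own on-device libraries.

```python
# Hypothetical sketch of a DeepLens-style project: a model that
# scores a camera frame, and a Lambda-style handler that acts on
# the model's output. Names are illustrative, not the real SDK.

def run_model(frame):
    """Stand-in for on-device inference: returns label -> probability.

    A real project would hand the camera frame to the model deployed
    on the device and get back per-class confidence scores.
    """
    return {"cat": 0.82, "dog": 0.15, "background": 0.03}

def handler(frame, threshold=0.5):
    """Lambda-style function: inspect the model output and act on it."""
    scores = run_model(frame)
    # Pick the most confident label from the model's output.
    label, prob = max(scores.items(), key=lambda kv: kv[1])
    if prob >= threshold:
        # This is where a developer would add custom behavior:
        # publish a message, save the frame, trigger another service...
        return {"label": label, "probability": prob}
    return {"label": None, "probability": prob}
```

The point of the split is that the model stays fixed while the Lambda function is where a developer customizes what actually happens when, say, a cat walks into the frame.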
Developers can now buy the DeepLens on Amazon. It's a bit pricey, but it remains one of the easiest ways to get started with building machine learning-powered applications.