In this thesis we have developed a hierarchical automation system consisting of a central server and connected clients. The two operate in close cooperation, constantly exchanging requests that carry the latest states of the connected devices. These devices can be managed through the user interface or by voice control on the client side. From the recognized speech we can control specific connected devices and receive a response in the form of synthesized speech. The final product is composed of a web application and a client application that can connect different types of external devices or sensors into a unified network. For the initial test of the client application we chose the Raspberry Pi computer, which allows external devices to be attached to its built-in GPIO pins. These devices can be switched manually via the user interface or scheduled through the web application, where the user sets the time at which the application should automatically activate the selected devices. For the voice control interface we used the Google Speech Recognition and Speech Synthesis internet services. Based on the content of the recognized speech, we can manage the connected devices or forward the text to the Wolfram Alpha service, which tries to find an appropriate answer that is then played back on the client as synthesized speech through the connected speakers.
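The abstract does not name the exact libraries used on the client, so the following is only a minimal sketch of the described pipeline (recognize speech with Google, switch a GPIO device or query Wolfram Alpha, answer with synthesized speech), assuming the common Python packages speech_recognition, wolframalpha, gTTS and RPi.GPIO; the pin number LAMP_PIN, the phrases "light on"/"light off", the placeholder Wolfram Alpha app ID and the mpg123 player are all hypothetical choices, not the thesis implementation.

```python
import os
import speech_recognition as sr   # assumed wrapper for Google Speech Recognition
import wolframalpha               # assumed client for the Wolfram Alpha service
from gtts import gTTS             # assumed wrapper for Google speech synthesis

try:
    import RPi.GPIO as GPIO       # present on the Raspberry Pi client
except ImportError:
    GPIO = None                   # lets the sketch run off-device without pins

LAMP_PIN = 17                     # hypothetical GPIO pin of a connected device
WOLFRAM_APP_ID = "YOUR-APP-ID"    # placeholder credential, not from the thesis


def setup_gpio():
    """Configure the hypothetical output pin on the Raspberry Pi."""
    if GPIO:
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(LAMP_PIN, GPIO.OUT)


def listen():
    """Record one utterance and return the transcript from the Google Web Speech API."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)


def speak(text):
    """Synthesize the answer with Google TTS and play it on the client's speakers."""
    gTTS(text=text, lang="en").save("answer.mp3")
    os.system("mpg123 answer.mp3")          # assumes the mpg123 player is installed


def handle(command):
    """Switch a device for known phrases, otherwise forward the text to Wolfram Alpha."""
    phrase = command.lower()
    if "light on" in phrase:
        if GPIO:
            GPIO.output(LAMP_PIN, GPIO.HIGH)
        speak("The light is now on.")
    elif "light off" in phrase:
        if GPIO:
            GPIO.output(LAMP_PIN, GPIO.LOW)
        speak("The light is now off.")
    else:
        client = wolframalpha.Client(WOLFRAM_APP_ID)
        result = client.query(command)
        speak(next(result.results).text)    # first result pod as the spoken answer


if __name__ == "__main__":
    setup_gpio()
    handle(listen())
```

In the described system this logic would run on the client, while the server keeps the authoritative device states and the schedules set through the web application.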