Superior solutions for the IoT

DEMM Engineering & Manufacturing – Front Page


As information technology and automation technology continue to converge, cloud-based communication and data services are increasingly used in industrial automation projects. Beyond the scope of conventional control tasks, applications such as big data, data mining and condition or power monitoring enable the implementation of superior, forward-looking automation solutions. New hardware and software products from Beckhoff for Industry 4.0 and IoT ensure the simplest possible implementation of such advanced solutions.

Industry 4.0 and Internet of Things (IoT) strategies place strict requirements on the networking and communication capabilities of devices and services. Viewed in terms of the traditional communication pyramid, large quantities of data must be exchanged between field-level sensors and higher-level layers in these implementations. However, horizontal communication between PLC control systems also plays a critical role in modern production facilities. PC-based control technologies provide universal capabilities for horizontal communication and have become an essential part of present-day automation projects exactly for this reason. With the new TwinCAT IoT solution, the widely used TwinCAT 3 engineering and control software provides the ideal foundational technology for Industry 4.0 concepts and IoT communication. Moreover, new IoT-compatible I/O components from Beckhoff enable easy-to-configure, seamless integration into public and private cloud applications.


Industry 4.0 and Internet of Things (IoT) applications do not start with the underlying technology. In reality, the work begins much earlier than this. When implementing IoT projects, it is critically important to first examine the corporate business objectives, establishing the benefits the company stands to gain from such projects. From an automation provider's perspective, two distinct categories of customers can be defined: machine manufacturers and their end customers – in other words, the end users of the automated machines.

In the manufacturing sector in particular, there is an obvious interest in reducing in-house production costs, both through efficient and reliable production control and by reducing the number of rejects produced. The traditional machine manufacturer pursues very similar objectives, and above all is interested in reducing the cost of the machine while maintaining or even increasing production quality. Optimising the machine's energy consumption and production cycles, as well as enabling predictive maintenance and fault diagnostics, can also be rewarding goals. The last two points in particular offer the machine manufacturer a solid basis for establishing services that can be offered to end customers as an additional revenue stream. Ultimately, both customer categories want to make the machine or product more attractive and to increase its competitiveness in the marketplace.


The process data generated during production provides a foundation for creating added value and for achieving the above-mentioned business objectives. This includes the machine values that are recorded by a sensor and transmitted via a fieldbus to the PLC. This data can be analysed directly on the controller to monitor the status of a system using the condition monitoring libraries integrated in the TwinCAT 3 automation software, thereby reducing downtime and maintenance costs.
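The underlying idea of such condition monitoring can be sketched outside of TwinCAT as well. The following Python fragment is an illustrative stand-in, not the TwinCAT condition monitoring API; the threshold values are hypothetical. It classifies a window of vibration samples by its RMS level, which is one of the standard condition indicators for rotating machinery:

```python
import math

def rms(samples):
    """Root-mean-square level of one window of a vibration signal."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def check_bearing(samples, warn_level=0.8, alarm_level=1.2):
    """Classify a signal window against hypothetical RMS thresholds."""
    level = rms(samples)
    if level >= alarm_level:
        return "alarm"
    if level >= warn_level:
        return "warning"
    return "ok"

# A quiet signal stays well below the warning threshold.
print(check_bearing([0.1, -0.2, 0.15, -0.1]))  # → ok
```

In a real controller this evaluation would run cyclically on each new window of fieldbus data, so that a drift from "ok" towards "alarm" can trigger maintenance before a failure occurs.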

However, where there are several distributed controllers in production areas, it may not be sufficient to analyse data from a single controller. The aggregated data from multiple or even all controllers in a production system or a specific machine type is often needed to perform sufficient data analysis and make an accurate analytical statement about the overall system. However, the corresponding IT infrastructure is required for this purpose. Previous implementations focussed on the use of a central server system within the machine or corporate network that was equipped with data storage, often in the form of a database system. This allowed analysis software to access the aggregated data directly in the database in order to perform corresponding evaluations (Figure 1).
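This classic architecture can be sketched in a few lines. The following Python example uses an in-memory SQLite database as a hypothetical stand-in for the central server; the controller names and temperature values are invented for illustration. Several controllers write into one shared store, and an analysis query then runs over the aggregated data:

```python
import sqlite3

# Shared in-memory database stands in for the central server system.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (controller TEXT, temp REAL)")

# Several distributed controllers report into the same store.
for controller, temp in [("plc1", 61.0), ("plc2", 64.5), ("plc1", 63.5)]:
    db.execute("INSERT INTO readings VALUES (?, ?)", (controller, temp))

# Analysis software evaluates the aggregated data across all controllers.
(avg,) = db.execute("SELECT AVG(temp) FROM readings").fetchone()
print(round(avg, 1))  # → 63.0
```

The weaknesses discussed next follow directly from this picture: every additional controller adds load to the one central database, and the analysis is only as scalable as that single server.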

Although such an approach to realising data aggregation and analysis in production facilities certainly worked well, it presented a number of problems at the same time, since the required IT infrastructure had to be made available first. The fact that this gives rise to high hardware and software costs for the corresponding server system can be seen right away. However, the personnel costs should also not be overlooked: because of the increasing complexity involved in networking production systems, especially with large numbers of distributed production locations, skilled personnel are necessary to perform the implementation successfully in the first place. To complicate matters, the scalability of such a solution is very low. Ultimately, the physical limits of the server system are reached at some point, be it the amount of memory available, the CPU power, or the performance and memory size required for analyses. This often resulted in more extensive manual conversion work if systems had to be supplemented with new machines or controllers. At the end of the day, the central server system had to grow alongside in order to capably handle and process the additional data volume.


Cloud-based communication and data services now avoid the aforementioned disadvantages by providing the user with an abstract view of the underlying hardware and software systems. "Abstract" in this context means that a user does not have to give any thought to the respective server system when using a service. Rather, only the use of the respective services has to be considered. All maintenance and update work on the IT infrastructure is performed by the provider of the cloud system. Such cloud systems can be divided into public and private clouds.

So-called public cloud service providers, such as Microsoft Azure or Amazon Web Services (AWS), provide users with a range of services from their own data centres. This starts with virtual machines, where the user has control of the operating system and the applications installed on it, and extends to abstracted communication and data services, which can be integrated by the user into an application. The latter, for example, also includes access to machine learning algorithms, which can make predictions and perform classifications regarding specific data states on the basis of certain machine and production information. The algorithms obtain the necessary content with the aid of the communication services.
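As an illustration of the kind of classification such a service performs, the following Python sketch assigns a machine-state sample to the nearest of two class centroids. The feature names, centroid values and class labels are hypothetical; real cloud machine-learning services use far richer models, but the principle of mapping machine data to a predicted state is the same:

```python
def nearest_centroid(sample, centroids):
    """Assign a feature vector to the closest class centroid
    (a toy stand-in for a cloud-hosted classification service)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Hypothetical centroids learned from historical machine data:
# features are (vibration RMS, motor temperature in degrees C).
centroids = {"healthy": (0.2, 60.0), "worn_bearing": (1.1, 78.0)}

print(nearest_centroid((1.0, 75.0), centroids))  # → worn_bearing
```

In a cloud deployment, the sample would arrive at the service through the communication layer described above rather than as a local function argument.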

Such communication services are usually based on communication protocols that follow the publish/subscribe principle. This offers definite advantages through the resulting decoupling of all applications that communicate with one another.
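The decoupling that publish/subscribe provides can be illustrated with a minimal in-process broker. Real IoT deployments use a protocol such as MQTT with a cloud-hosted broker; this Python sketch only shows the principle that publishers and subscribers address named topics, never each other:

```python
from collections import defaultdict

class Broker:
    """Minimal in-process message broker. Publishers and subscribers
    are decoupled: each side knows only the topic name."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, payload):
        for callback in self._subs[topic]:
            callback(payload)

broker = Broker()
received = []
broker.subscribe("machine1/temperature", received.append)  # analysis side
broker.publish("machine1/temperature", 63.5)               # controller side
print(received)  # → [63.5]
```

Because neither side holds a reference to the other, controllers and analysis applications can be added, replaced or taken offline independently, which is exactly the flexibility that distributed production systems need.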

