The proposed research will develop novel distributed machine learning techniques for stable resource allocation and improved traffic estimation in networks. Networks are becoming increasingly complex, and user demand is growing along several dimensions, including the traditional demands for higher capacity and lower delay as well as improvements in Quality of Experience (QoE). Backhauling the multiplexed demand over core networks calls for accurate traffic estimation. At the same time, controlling resource allocation based on such predictions requires stable and robust solutions.
The goal of this research project is to identify ways to apply machine learning technology to help communication network operators cope with the vast amounts of data they must process to understand the health of their networks and to quickly resolve problems.
Ultra-reliable and low-latency communication is an increasingly important aspect of future wireless communications. Specifically, in the context of mission-critical communications for large-scale networks of sensors and actuators in automated and/or remote-control applications, low-latency wireless communication with a high level of determinism is a vital element. The key performance indicators for such use cases stand in sharp contrast to those of current broadband communications: latency and reliability are paramount, while lower data rates can be tolerated.
Cyber threats targeting companies, industries and governments continue to evolve. As defense systems strengthen, threat actors develop new tactics, techniques and procedures to breach security perimeters. Generally, perimeter security is enforced by multiple intrusion prevention and detection tools responsible for providing proactive, real-time and operational insights for the detection, prevention and mitigation of potential threatening activities on the monitored system.