Security and Trust Start with Hardware and Information

Over the past two decades, through successive waves of technology, the growing maturity of AI, the growth of IoT, Blockchain and Quantum, we are consistently drawn back to the fundamental concept of Trust. It is the through-line of our conversation on cybersecurity, our latest developments and our push for improved resiliency and safety as we connect digitally. Whether we discuss Web3, Quantum Computing, Identity and Access or Blockchain, to name a few, trust pervades the conversation because it is the fundamental basis of how we connect before we even begin to communicate. Yet defining what Trust is, and importantly what Zero Trust is, has become a source of confusion in many ways.

John Kindervag’s Zero Trust framework, as extended into the NIST Zero Trust Architecture, assumes that no device, identity, system or user is trusted by default. We therefore need to continuously validate the identity of the account or device, the execution of the connection, and that the connection remains secure; after all, things change, and often in real time. As these changes occur, we need to keep monitoring and managing that trusted relationship even after the baseline connection is established through the initial authentication and authorisation.
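To make that continuous validation concrete, here is a minimal Python sketch. The names (Session, risk_score, still_trusted) and the thresholds are illustrative assumptions, not any vendor’s implementation; the point is simply that trust is re-evaluated on every check and can be revoked mid-session.

```python
from dataclasses import dataclass
import time

# Hypothetical session record; the field names are illustrative,
# not taken from any specific ZTNA product.
@dataclass
class Session:
    user_id: str
    device_id: str
    risk_score: float     # 0.0 (healthy) .. 1.0 (hostile), fed by telemetry
    last_verified: float  # epoch seconds of the last successful check

RISK_THRESHOLD = 0.6      # assumed cut-off for posture degradation
REVERIFY_INTERVAL = 300   # assumed max seconds between forced re-checks

def still_trusted(session: Session) -> bool:
    """Zero Trust grants no standing trust: every check can revoke access."""
    now = time.time()
    if session.risk_score >= RISK_THRESHOLD:
        return False  # device or account posture degraded mid-session
    if now - session.last_verified > REVERIFY_INTERVAL:
        return False  # verification is stale: force re-authentication
    return True
```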

If we look at the tools we use to ensure the security and resilience of our data and communications, or more specifically at their marketing, we are inundated by confusion about what Zero Trust actually is and how it is managed and operated in the real world. AI and Behavioural Analytics feature in the conversation, as do Identity Governance and many other messages. Yet these are only part of what true security is, and they tend to underplay the importance of hardware, cryptography and adaptable management capabilities as integral parts of that conversation.

Let’s unpack.

We have known throughout history that any single defence tends to be a minor speed bump for an attacker. Success lies in orchestrating how all defences play together to drive the outcome we really want: safety. Assets are valuable to us, but we don’t have to protect all of them the same way; what works for one is not necessarily effective for another. Generally speaking, the most effective defences for safeguarding data in the market right now layer protections up and down the stack: they divert attacks into sandboxes, or they logically segment endpoint hardware so the attacker cannot use their capabilities effectively. By applying preventative controls that segment assets and resources away from rogue processes and bad application code, they effectively sandbox misbehaving processes, resource misuse and applications trying to perform unauthorised actions.
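As a sketch of that default-deny segmentation idea, consider the Python snippet below. The segment names, actions and authorise helper are hypothetical, chosen only to show how an allowlist policy contains a rogue process to its segment.

```python
# Illustrative policy table: which actions each workload segment may perform.
# Segment and action names are hypothetical.
SEGMENT_POLICY = {
    "payments":  {"read_db", "write_db"},
    "telemetry": {"read_db", "send_metrics"},
    "untrusted": set(),  # default-deny: sandbox everything
}

def authorise(segment: str, action: str) -> bool:
    """Default-deny enforcement: anything not explicitly allowed is blocked."""
    return action in SEGMENT_POLICY.get(segment, set())

# A rogue process in the telemetry segment trying to write to the database
# is simply refused, containing the blast radius.
assert authorise("telemetry", "send_metrics") is True
assert authorise("telemetry", "write_db") is False
```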

Hardware is that integral element, one that has historically been unmanaged, non-resilient and accessible to well-resourced attackers. Now there is a growing industry push for hardware to be more connected, more secure and more resilient. Naturally, that means hardware needs to be managed, monitored and updateable, even at the processor and sub-controller level. How else will we respond to zero-day and APT threats as they evolve towards “softer” targets such as the CPU, board, wireless card or memory? Keep in mind that we are constantly creating those softer targets: by defending our assets better, we force the attacker to pivot towards whatever we forgot to defend in our rush to plug higher-priority holes in our security.
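One building block for managing hardware at that level is firmware attestation: measuring what is actually running and comparing it against a known-good manifest. The minimal sketch below uses SHA-256 as a stand-in for a TPM/PCR-style measurement; the component names and manifest contents are hypothetical.

```python
import hashlib

def measure(firmware_image: bytes) -> str:
    """SHA-256 digest standing in for a TPM/PCR-style measurement."""
    return hashlib.sha256(firmware_image).hexdigest()

# Hypothetical vendor manifest: component -> known-good firmware digest.
KNOWN_GOOD = {"wifi-card": measure(b"vendor-signed firmware v2.1.7")}

def attest(component: str, firmware_image: bytes) -> bool:
    """Flag a component whose measured firmware differs from the manifest."""
    expected = KNOWN_GOOD.get(component)
    return expected is not None and measure(firmware_image) == expected

# A tampered image fails attestation, so the device can be quarantined.
assert attest("wifi-card", b"vendor-signed firmware v2.1.7")
assert not attest("wifi-card", b"tampered firmware")
```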

With that continually increasing and evolving attack surface against hardware, spurred on by the growing number of IoT, Endpoint, Edge and AI devices and their increasing compute capability, we need a better method of aggregating data so we can manage these assets at a much more granular level.

By aggregating data, we have an opportunity to process it into better information that drives actionable insights at scale, informing our understanding of use, our ability to stay compliant and, of course, our defences.
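A toy Python example of that data-to-insight pipeline: raw telemetry events are rolled up per device, and a simple threshold turns the aggregate into an actionable decision. The event names, the threshold and the flag_devices helper are all illustrative assumptions.

```python
from collections import Counter

# Hypothetical raw telemetry: (device_id, event) pairs from many sources.
events = [
    ("edge-007", "auth_fail"), ("edge-007", "auth_fail"),
    ("edge-007", "auth_fail"), ("cam-112", "fw_update"),
]

AUTH_FAIL_LIMIT = 2  # illustrative threshold

def aggregate(events):
    """Roll raw events up into per-device counts: data becomes information."""
    counts: dict[str, Counter] = {}
    for device, event in events:
        counts.setdefault(device, Counter())[event] += 1
    return counts

def flag_devices(counts):
    """Turn information into an actionable insight: which devices to isolate."""
    return [d for d, c in counts.items() if c["auth_fail"] > AUTH_FAIL_LIMIT]

print(flag_devices(aggregate(events)))  # ['edge-007']
```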

As we look for comprehensive and scalable security, tools like BlockAPT are there to defend systems and information by enforcing quantum-resilient protection and aggregating data from other security devices. These tools work in concert with ZTNA from companies like Cyolo when we access our vital data, or with GarbleCloud for quantum-resilient data protection. When they are superpowered by the secure data processing and persistent compliance mechanisms developed by LinkTempo, which address informational needs that go well beyond the security tools alone, they combine to become our informational footprint.

Konstantin Vilk, CEO and Founder of LinkTempo, is an Information Systems Executive and an experienced CEO, COO, CISO and CTO. He has created and sold award-winning companies in addition to teaching university-level courses in information technology. He is passionate about helping companies create incredible products and has extensive experience in Quantum Computing, Quantum Cryptography, Artificial Intelligence, Cyber Security, Cloud, Innovation, Marketing, Sales, M&A and the implementation of enterprise-wide systems. Prior to founding LinkTempo, Kosta founded QuSecure, an award-winning, fast-growth Quantum Computing Cyber Security company, and Quantum Thought, a premier quantum computing launchpad for the founding generation of quantum computing companies. He also created an award-winning company named I-Span, which became a leading partner in the transformation of corporate systems to the Cloud. After his exit from I-Span he headed Technology and Cyber Security in the financial services sector. He is recognized as one of the top CTOs in the quantum computing field and a thought leader in quantum computing, data management and compliance.