
Molly Presley, SVP Global Marketing, Hammerspace: Momentum with Hyperscale NAS & Data Orchestration

April 2024 by Valentin Jangwa, Global Security Mag

Molly Presley, SVP Global Marketing at Hammerspace, met online alongside A3 Communications to explain Hammerspace's technologies and growth strategy.

Global Security Mag: Thank you for your time. Could you please introduce yourself and tell us about your journey at Hammerspace?

Molly Presley: I have been the SVP of Global Marketing at Hammerspace for just over two years. I was the first marketing employee when I joined, and I had to build the entire team and department from the ground up, which was a fun job. What makes Hammerspace interesting to someone like me, who has been in the file storage space literally for decades, working for DDN (DataDirect Networks), Quantum, and Qumulo, and who had left that space to join a SaaS company, is that Hammerspace called me right away, offering me the great opportunity to work on such an exciting, innovative solution for this market. The way we do things is so incredibly innovative that one of our customers, Blue Origin, said, “That will change forever the way we use our unstructured data.” So it is fun to work for somebody that is not incremental but represents a really big change in the way data can be used and in what companies are doing with it.

Global Security Mag: Can you tell us more about Hammerspace, its technologies and growth, especially regarding what is new?

Molly Presley: The company was founded in 2018 with the mission to redefine how data is used and preserved. The initial $20M was self-funded by David Flynn. In 2021, Data Orchestration and the Global Data Environment were introduced. We closed a $57M Series A in July 2023. In November 2023, Hyperscale NAS (Network Attached Storage) was proven at scale for Gen AI training and inference. A couple of things have been announced in recent months. We released a corporate momentum summary in a press release on February 7th. Even though we are a private company, we do like putting public information out there. By February 2024, we had already achieved 1,000% year-over-year growth in revenue, a 650% year-over-year increase in the capacity under management in the Hammerspace Global Data Environment software, 65% year-over-year growth in signed channel ecosystem partners, and a single-customer deployment that exceeds 100 petabytes of capacity. So we have been growing very quickly. We are not a traditional Venture Capital (VC) technology company; our backers are more organizations looking to hold equity and be part of the company through an Initial Public Offering (IPO). That shows what we have in mind for our company.
We recently added Brian Pawlowski to our leadership team as VP of Performance Engineering. Previously, he was employee number 22 at NetApp, where he served as Chief Architect and CTO; he was also VP and Chief Architect at Pure Storage, CTO at DriveScale, and Chief Development Officer at Quantum. So he is well known in our industry. The reason to bring him into the company is that we do a lot of work in core operating systems, for example Linux, to make the technology more accessible and easier to adopt. Because we are contributing to core industry technology, having folks like Brian is very valuable for our company.
Last year, Hammerspace also acquired a French company called Rozo Systems, and Pierre Evenou, who was its CEO, now runs our French entity as Head of Engineering.
We wanted to bring Rozo Systems' talented team, including its engineers, into our company. We were also interested in its Mojette Transform, a patented erasure coding technology optimized for high-performance workloads, for our Data Orchestration system. We have already integrated the Rozo technology into the Hammerspace environment and portfolio, along with its customers. We will be announcing more about the other product integration pieces in the coming months.
There is a shift occurring in the way data is being used.

If you think about unstructured data and you look at a system like S3 Object Storage, over the last 10 or 20 years we really focused on how to build the capacity to store this quickly growing data, and to do it in a way that lets you access it, with metadata so you know what you have. What has shifted is that organizations are now trying to take that metadata and put it to work in computational models. I am talking about enterprises, or enterprises leaning on Service Providers.

They are starting to run into specific requirements and pain points based on the systems they are using in their IT environments.
When you are training and tuning models, whether it is a Generative AI model or a traditional AI data analytics application, those applications have been designed to run on NFS data interfaces. So there is a problem: the data sits in S3 Object Storage, whether from a private Object Storage vendor or a Public Cloud S3 provider, while a file interface is what the applications want. Then, a lot of those applications have been created around data silos; the way the industry grew up was to have one fast solution for high-performance data, low-cost Object storage for the other data, and maybe something else for user home directories, so the data is very siloed. When you are trying to load it into an analytics tool or a Gen AI data strategy, unifying that data is extremely difficult. And then the last piece is simply performance capability.
What they are finding is that existing scale-up NAS architectures (NetApp, Qumulo…) cannot really keep up with the compute environment or help them achieve what they would like to.

To summarize, there are many challenges for the Next Data Cycle:
- Training and tuning effective models for business value, where models require a standard NFS data interface.
- Unstructured data for deep learning trapped in silos, with the difficulty of accessing and unifying data sources.
- Performance to keep GPUs utilized wherever they are, when existing NAS and Object storage were not designed for large-scale compute performance.
So there was a need for a new storage architecture, and this is where Hammerspace comes in with Hyperscale NAS, a fundamentally different NAS architecture urgently needed to power AI initiatives and the GPU computing boom.
Hammerspace is the first to use Parallel NFS (pNFS) with Flex Files driven by metadata services.
Hammerspace can make any storage work with GPUDirect.
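The pNFS pattern described here separates the control path from the data path: a metadata service hands the client a "layout" describing where a file's segments live, and the client then reads directly from the data servers. The toy model below illustrates that idea only; every class and name is invented for illustration and is not the Hammerspace or NFS API.

```python
# Toy model of the pNFS Flex Files pattern: a metadata service returns a
# layout mapping a file's byte ranges to data servers; the client performs
# I/O directly against those servers, keeping metadata out of the data path.
# All names are illustrative, not real Hammerspace or NFS interfaces.
from dataclasses import dataclass

@dataclass
class LayoutSegment:
    offset: int        # byte offset within the file
    length: int        # segment length in bytes
    data_server: str   # name of the server holding this segment

class MetadataService:
    """Out-of-band control plane: knows where every segment lives."""
    def __init__(self):
        self._layouts = {}

    def add_file(self, path, segments):
        self._layouts[path] = segments

    def get_layout(self, path):
        # A real pNFS MDS grants a layout via LAYOUTGET; here we just look it up.
        return self._layouts[path]

class Client:
    """Data plane: reads each segment directly from its data server."""
    def __init__(self, mds, servers):
        self.mds = mds
        self.servers = servers  # server name -> {path: bytes}

    def read(self, path):
        data = b""
        for seg in self.mds.get_layout(path):
            blob = self.servers[seg.data_server][path]
            data += blob[seg.offset:seg.offset + seg.length]
        return data

# Usage: two data servers hold replicas; the layout stripes reads across them.
mds = MetadataService()
mds.add_file("/data/model.bin",
             [LayoutSegment(0, 6, "ds1"), LayoutSegment(6, 5, "ds2")])
servers = {"ds1": {"/data/model.bin": b"hello world"},
           "ds2": {"/data/model.bin": b"hello world"}}
client = Client(mds, servers)
assert client.read("/data/model.bin") == b"hello world"
```

The key property the sketch captures is that once the layout is granted, the metadata service is no longer on the read path.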

Global Security Mag: Can you tell us about Hammerspace’s Unstructured Data Orchestration solution?

Molly Presley: Yes, we have been focused largely on Orchestration since last year. The idea is unifying data silos. Let's say you have a Cloud instance and Object Storage data. The first thing our Orchestration does is lay our file system on top: the Global File System sits across those systems, and we ingest their metadata into our file system. Then the Orchestration policies are used to place the data where it needs to be for your business objectives. The Hyperscale NAS technology we talked about is used in Orchestration because we use the existing storage systems or existing storage hardware, raising the metadata into our system, and the Orchestration places the data where it needs to be. Those business rules may concern latency, performance, or locality for GDPR; there are a lot of different reasons a customer might want the data in a specific spot. The Orchestration automates the process so that when your data is placed on storage, it meets your security and data privacy rules as well as your performance and cost objectives. That is one piece of it. The Orchestration is also used to make the data local to the compute, or to whatever compute environment you would like to use.
We have moved the intelligence (metadata updates, objectives, policies) out of band.
Isolating that intelligence from the data path makes the data path extremely fast.
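Policy-driven placement of the kind described above can be sketched as a small rule engine: each file's metadata is checked against objectives (locality for GDPR, latency for hot data, and so on) to pick a compliant volume. The policy and attribute names below are invented for illustration and are not Hammerspace's actual objective language.

```python
# Minimal sketch of policy-driven data placement, in the spirit of the
# orchestration described in the interview. All attribute and policy names
# are illustrative assumptions, not a real product API.

def place(file_meta, volumes, policies):
    """Return the name of the first volume satisfying every policy."""
    for vol in volumes:
        if all(policy(file_meta, vol) for policy in policies):
            return vol["name"]
    return None  # no compliant placement found

# Example objectives: keep EU-tagged data in the EU (a GDPR-style locality
# rule) and keep "hot" data on a low-latency tier.
gdpr = lambda f, v: f.get("region") != "eu" or v["region"] == "eu"
hot_fast = lambda f, v: f.get("temp") != "hot" or v["latency_ms"] <= 1

volumes = [
    {"name": "s3-us", "region": "us", "latency_ms": 40},
    {"name": "nvme-eu", "region": "eu", "latency_ms": 1},
]

print(place({"region": "eu", "temp": "hot"}, volumes, [gdpr, hot_fast]))
# -> nvme-eu
```

An orchestrator would evaluate such rules continuously and move data when placement no longer satisfies the objectives, which is what keeps the policy logic out of the fast data path.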

Global Security Mag: Can you summarize the key Hammerspace’s differentiators?

Molly Presley: Absolutely. Speed: you can scale to any number of GPUs, and you can do it efficiently because you are taking the network intelligence out of band and taking switches, storage nodes, and controller nodes out of the data path. There is a huge amount of infrastructure you no longer need, so there is also an efficiency component: less hardware, better utilization of the GPUs, fewer network ports and switches. Without overstating it, most of our customers save one third of the networking cost, half of the network switches and ports, and half the number of storage nodes needed to reach the same scale.
Historic unstructured data storage is not what Hammerspace does; the use case we are focused on is enterprises that are trying to architect AI infrastructures or analytics infrastructures, or to do large-scale computing.

Global Security Mag: What are your key messages to our readers?

Molly Presley: Generally speaking, as data architectures have become global, with hybrid cloud, remote users, and multiple data centers, we can provide you with a great solution to unify all that data into a single global data file system, making it easy to use your preferred environments, whether data centers or clouds, and giving people who work remotely high-performance access to their data wherever they are or wherever their applications and data centers sit. As you embark on potentially new architectures for AI and other analytics projects, Hammerspace, with its Hyperscale NAS and Data Orchestration solutions, provides a solution that not only empowers your multiple locations and remote users but also brings the performance your organization needs.
