
Category Archives

9 Articles

Dedicated Server or Colocation Server: Which Should Your Company Choose?

With business today conducted almost entirely online, email, website, database, and application hosting servers have become essential infrastructure for companies.
Many of Indonesia's leading hosting providers offer two server hosting options: the Colocation Server and the Dedicated Server. The two are very different from each other, and many of us are unsure which is the better fit for supporting a company's business: colocation or a dedicated server?

To clear up the confusion, PT RackH Lintas Asia, one of Indonesia's leading hosting providers, explains below the differences between colocation servers and dedicated servers in Indonesia.

What is a colocation server?
A colocation server is a service in which server hardware supplied by the customer is housed in a top-tier Indonesian data center.

What are the pros and cons of colocation?
Colocation comes with both advantages and drawbacks.
Advantages of colocation:
– Lower recurring fees than a dedicated server
– Unmetered bandwidth, meaning no quota or usage caps
– A 1 Gbps port, providing fast access from across Indonesia and around the world
– Freedom to choose the exact type and specification of server you want to use
– Freedom to install any OS, software, and applications you need
– A dedicated IP address
– Full control over the system, backups, and server firewall
– 24-hour technical support and service
– Flexibility and scalability in the services you use
– Redundancy and reliability, with a high network uptime guarantee
– Guaranteed, reliable power capacity (PSU/generator backup) for your server

Drawbacks of colocation:
– Hardware failures are your responsibility as the server's owner
– A high upfront cost to purchase the server
– Top-tier Indonesian data centers can be hard to find, since they are concentrated in the capital and other major cities
– The need to employ dedicated IT staff with server management expertise
– Trips from the office to the colocation data center whenever on-site work on the server is required

Now that we know what a colocation server is, let's look at dedicated servers in Indonesia and their pros and cons:

What is a dedicated server?
A dedicated server is a rental service in which server hardware supplied by the hosting provider is housed in a top-tier Indonesian data center.

Advantages of a dedicated server:
– Hardware failures are the responsibility of the hosting provider, as the server's owner
– Assistance installing any OS, software, and applications on the rented server
– Free fully managed server service from the hosting provider
– Unmetered bandwidth, meaning no quota or usage caps
– A 1 Gbps port, providing fast access from across Indonesia and around the world
– A dedicated IP address
– Full control over the system, backups, and server firewall
– 24-hour technical support and service
– Flexibility and scalability in the services you use
– Redundancy and reliability, with a high network uptime guarantee
– Guaranteed, reliable power capacity (PSU/generator backup) for your server

Drawbacks of a dedicated server:
– A higher monthly rental fee than colocation
– The need to employ dedicated IT staff with server management expertise

Now that we have seen the pros and cons of both colocation and dedicated servers in Indonesia,
which hosting service is the better fit for supporting your company's business?
The answer is that each has its own strengths and weaknesses: if avoiding a large upfront outlay is the priority, a dedicated server is the recommended choice, but if budget is not a constraint, colocation is the better option, since the server itself becomes a company asset.

That concludes our overview. For questions, or for affordable Colocation Server or Dedicated Server Hosting services, visit https://www.rackh.com/, call +62 21 3041 9200, or message us on WhatsApp at +62 811 720800.

How to Choose a Colocation Server Provider in Indonesia

What Should You Do Before You Search for a Colocation Server Provider?

It costs a lot to pick a provider, move in and become operational. If you make the wrong choice, the costs of moving out later will be much higher than the costs of moving in.

Before you sign a contract with a provider, you must begin with the end in mind. Here are some questions that will help you determine your colocation requirements:

What do you want to achieve? Understand not only your day-one requirements, but also what you want to achieve in the future (e.g. transitioning to the cloud, going to a high-density environment, using managed services, etc.).

What are your business and IT requirements? For example, determine if you need to replicate and/or archive your data. Do you need additional colocation data center locations for replication and data archiving?

What does your growth plan look like? Your colocation provider should help you scale in space and power, so you can move in different directions in the future.

The average colocation contract runs one to five years, while a wholesale colocation contract ranges between five and 15 years. Even with a short-term contract, you don’t want the costs, hassle and potential downtime of changing data centers every three years. That is why it is vital to choose a provider who not only meets your current needs but can also meet your future requirements.

Here are 10 things to look for in a data center colocation provider:

1. The power density to support both current and future technologies.

Data center power densities have been steadily increasing due to newer technologies, and many clients now require up to 10 kW per cabinet. However, not many colocation providers offer the power densities to support future technologies. A majority of colocation data centers were constructed prior to the density “boom” and can only support an average of 4 kW per cabinet. To support high-density environments, vendors must either distribute the load over a larger footprint or implement a supplemental cooling system. Both approaches address the power density issue, but increase the cost of the contract.
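The footprint cost of the first approach is simple arithmetic. As a rough sketch (the 4 kW and 10 kW figures come from the article; the 100 kW deployment is a hypothetical example):

```python
import math

def cabinets_needed(total_load_kw, per_cabinet_kw):
    """Cabinets required to host a given IT load at a facility's density cap."""
    return math.ceil(total_load_kw / per_cabinet_kw)

# A hypothetical 100 kW deployment sized for modern 10 kW cabinets...
modern = cabinets_needed(100, 10)
# ...spread across a legacy facility capped at 4 kW per cabinet:
legacy = cabinets_needed(100, 4)

print(modern, legacy)  # the same load needs 2.5x the footprint
```

The extra cabinets, cross-connects, and floor space are why the lower-density option still raises the contract price.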

2. Flexible master service agreement (MSA) and service level agreements (SLAs).

Contracts are written to protect the provider. Be proactive in addressing the contractual and SLA items that are critical to your business. Initial discussions will reveal which vendors are willing to include or revise verbiage to better protect the client. One crucial mistake we see is waiting until a vendor has been selected to address contractual or SLA concerns instead of including them in the search criteria.

3. Network carrier redundancy.

A corporation-owned facility is usually limited to a small number of network carriers that provide telecommunications services. Look for colocation facilities that are carrier neutral and have a variety of network carriers that can provide connectivity within the facility. Various network carrier options will allow for a competitive pricing situation to reduce costs and provide the ability to incorporate a redundant vendor network design. It is also important to understand which providers are “lit” within the building and identify others that can provide services through a connection from a carrier hotel. The carrier hotel will increase the provider options to the client, but will add costs associated with the additional cross-connect. Finally, understand the redundancy associated with the routing to and within the colocation facility.

4. High-density environments.

More floor space doesn’t equal a better data center. If you can be efficient and fit the same amount of equipment into a smaller space, you’ll reduce your operating costs. Forsythe analysis shows that non-recurring costs (NRC) can be reduced by approximately $10,000, and monthly recurring costs (MRC) by $3,000, for each cabinet consolidated.

5. The right location.

Many companies are transitioning to “lights out” facilities so they can manage everything remotely. Decide how far you want to be from your data center. Does it need to be located far away from natural disasters such as hurricanes and tornadoes? A “lights out” facility is the most risk-averse and cost-effective option; however, the farther you are from your data center, the greater your networking costs. Choosing a data center close to home makes it easy to respond to issues. For fast disaster recovery, your replication data center should be located no more than 50 to 100 miles from your primary data center.

6. High levels of physical security.

Ideally, you should have multiple levels of physical security both inside and outside the data center. Before you select a colocation provider, ask which perimeter and external areas are covered by cameras. Also understand the vendor’s security procedures and whether you can add your own security cameras within their space.

7. Alignment with disaster recovery and business continuity plans.

The power, cooling and networking in your replication data center should match — or be better than — what you have in your primary data center. This will help you remain up and running during outages. Vendors should also have the technical expertise and capabilities to support clients in developing and testing a disaster recovery plan. Also look for a data center that has workspace your technical team can use for disaster recovery testing and declarations. Many providers don’t offer this feature, or they sell the same workspace to multiple clients and hope they do not all declare a disaster at once.

8. Compliance.

When it comes to compliance, many colocation providers claim a certain Tier of availability and may present themselves as Uptime-certified data centers when they are not. Be wary of false claims and verify your provider’s certification with the Uptime Institute. Clients should also verify that the colocation facility is SSAE 16 compliant and that the provider will support third-party audits at no additional cost.

9. Transitional services.

At a minimum, your provider should offer a comprehensive managed service portfolio, so you won’t need to dispatch your technical staff to flip a switch. Also, it is critical that your provider can evolve along with your IT and business objectives over the duration of the contract. Taking advantage of managed services will free up your team’s time to support business objectives while ensuring that your infrastructure operates smoothly.

10. Future growth.

With technology changing so rapidly, it is impossible to know where you will be in three, five or 10 years. Look for a colocation provider who allows you to expand in areas such as power and space. If you’re considering additional services, ask if your colocation provider will let you modify your master service agreement as your business changes.

Moving to a new web hosting company is quite simple

As your WordPress site grows from a few visits a day to thousands of visits a month and more, the shared hosting account that provides its bandwidth, storage, and processing may be unable to keep up with the increased traffic. Either it will perform poorly for all visitors, with slow page loads and frequent unresponsiveness, or it will perform poorly during periods of peak traffic and may stop working altogether under especially heavy load. No one wants their site to collapse just when it’s at its most popular.

If a WordPress site’s hosting isn’t up to the job, there are two options: upgrade to a more powerful hosting plan with the same hosting provider, or move to a new hosting provider.

If you’re happy with your current host, upgrading is the simplest option, but if you aren’t happy, it’s time to consider making a move. Many bloggers and business site owners stick with their current hosting even when they aren’t happy with the support or the service they receive. They take the “better the devil you know” approach and assume moving to a new web hosting company is complicated. They also worry that it might hurt their SEO ranking or traffic.

In reality, moving to a new web hosting company is quite simple. First you find new hosting and migrate the site itself while the original site is still up and running. Then you change your domain records so that they point to the new site. Then you take the old site down and cancel your original hosting contract.

The only complications occur if you change the architecture of the site or the domain name as you make the change. Even that can be handled with a minimal impact, but I’ll stick to the simple case for this article.

Migrating The Site
Migrating a WordPress site is not difficult. A working WordPress site needs three things: web hosting, a WordPress installation (which is just a collection of files), and a database. Once you have a new web hosting account, you need to move the files into the root of your hosting account and import the site’s database.

If that sounds complicated, you might want to hire a WordPress professional to do it for you, but most decent web hosting companies will help you with the migration; many will do it for free.

If you want to make the move yourself, the best option is to use a plugin like All-in-One WP Migration, which is capable of moving both the files and the database to your new hosting account.

All the plugin does is to copy the files to your new hosting account, export the database from the old site, and import it to the new WordPress installation. The same process can be done manually, but if you haven’t migrated a WordPress site before, I’d advise you to stick with one of the methods I’ve suggested.
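For reference, the manual path the plugin automates boils down to three shell commands: copy the files, export the database, import the database. A minimal sketch that only composes those commands (the hostnames, database name, user, and web-root path below are hypothetical placeholders, not values from this article):

```python
# Compose the three shell commands for a manual WordPress move.
# All hostnames, users, and paths here are illustrative placeholders.

def migration_commands(db_name, db_user, old_host, new_host, wp_root="~/public_html"):
    """Return the copy/export/import steps for a manual migration."""
    return [
        # 1. Copy the WordPress files into the new host's web root.
        f"rsync -az {old_host}:{wp_root}/ {new_host}:{wp_root}/",
        # 2. Export the database from the old site.
        f"mysqldump -u {db_user} -p {db_name} > {db_name}.sql",
        # 3. Import it into the new WordPress installation's database.
        f"mysql -u {db_user} -p {db_name} < {db_name}.sql",
    ]

for cmd in migration_commands("wp_db", "wp_user", "old.example.com", "new.example.com"):
    print(cmd)
```

Running the steps in that order keeps the original site untouched until you are ready to cut over.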

Changing DNS Records
I’ve used the word “migrate,” but in reality the site has been copied. The new site works, but when users type the address into their browser or search for your site, they will end up on the old site.

To direct users to the new site, you need to change your domain name’s records so that they point to the IP of your new hosting account rather than the old one.

There are two organizations involved in making a domain name work: the domain name registrar and the domain name host, although sometimes the same organization offers both services. The domain name registrar keeps the record of who owns the domain. The domain host manages the domain name servers that link your domain to an IP address.

When you change web hosting companies, you need to do one of the following:

1. Change the domain name servers that your domain registrar uses for your domain so they point to your new web hosting company’s DNS servers.
2. If you don’t want to use your web hosting company’s name servers, change the DNS records for your domain with the DNS host you use.

In most cases, you’ll just want to visit your domain name registrar and use their interface to change the DNS server records so they point at your hosting company’s DNS servers. The domain name registrar or your web hosting company will be able to help you do this if you’re having trouble.
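Once the records are updated, you can confirm the cutover has propagated by resolving the domain and comparing the result to the new host's IP. A minimal sketch using Python's standard library (the domain and IP in the comment are placeholders):

```python
import socket

def points_at(domain, expected_ip):
    """True once DNS resolves the domain to the new host's IPv4 address."""
    return socket.gethostbyname(domain) == expected_ip

# After changing the records at your registrar, poll until this returns True:
# points_at("example.com", "203.0.113.10")
print(points_at("localhost", "127.0.0.1"))
```

Note that caching resolvers may keep serving the old IP until the record's TTL expires, so a False result shortly after the change is normal.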

If you want to move your site without making any changes to its information architecture or the domain name, that’s about it. There will be no damage to your site’s search engine optimization, because from Google’s point of view, not much has changed. The content is the same, it’s accessible at the same URLs, and incoming links still work.

Don’t settle for a poor hosting experience. If you’re not happy with your hosting company, moving to a new provider is easier than you think.

Microsoft Donates $465 Million in Cloud Services in 2016

Brought to you by Talkin’ Cloud

Microsoft has made significant headway on its goal to provide $1 billion in cloud services for non-profits and researchers over three years, donating $465 million in cloud services to 71,000 organizations so far in its first year of the effort.

Microsoft Philanthropies was created just over a year ago “to realize the promise and potential of technology for everyone,” according to a blog post this week by Mary Snapp, corporate vice president of Microsoft Philanthropies.

See also: Microsoft’s Philanthropic Arm to Bring Cloud Services to 70,000 Organizations by 2017

Along with donating cloud computing, the efforts of the initiative have included delivering connectivity to remote schools, health clinics and community centers in 11 countries, and in the U.S. specifically, expanding access to computer science education to 225 high schools.

“If there’s a single technology that is making today’s technology-driven change possible, it’s cloud computing. Our ability to work from anywhere, at any time. The emergence of self-driving cars. Individualized medicine based on the analysis of a person’s genetics. All of these things are made possible by the cloud,” Snapp said. “But to realize the full potential of the cloud to create economic opportunity and address the world’s most difficult challenges, the power of cloud computing must be available to nonprofit organizations and researchers, and to individuals who lack affordable broadband access. Therefore, in January of last year, we announced a three-year initiative to donate $1 billion in cloud computing resources to 70,000 nonprofit organizations and 900 university researchers, and to expand broadband access in 15 countries.”

Snapp said that in 2017 Microsoft Philanthropies will continue to drive initiatives in education, increase support for its humanitarian action, and work to make technology more accessible for people with disabilities.

The promises build on a vision laid out by Microsoft general counsel Brad Smith at Microsoft Worldwide Partner Conference (WPC) in July, where he talked about the company’s role in building a “cloud for good.”

In an interview last year with The New York Times, Microsoft said it would not take a tax deduction for its donated cloud services.

Microsoft was named one of the 20 most charitable companies of the Fortune 500 last year, as was Google, which last month committed $11.5 million, split between 10 different causes, to support racial justice.

In January, Google pledged $4 million in donations to the American Civil Liberties Union, Immigrant Legal Resource Center, International Rescue Committee and UNHCR in conjunction with President Donald Trump’s executive order on immigration.

Amazon Says Employee Error Caused Tuesday’s Cloud Outage

(Bloomberg) — Amazon.com Inc. said efforts to fix a bug in its cloud-computing service caused prolonged disruptions Tuesday that affected thousands of websites and apps, from project-management and expense-reporting tools to commuter alerts.

An Amazon Web Services employee working on the issue accidentally switched off more computer servers than intended at 9:37 a.m. Seattle time, resulting in errors that cascaded through the company’s S3 service, Amazon said in a statement Thursday. S3 is used to store data and to serve apps and software downloads for nearly 150,000 sites, including ESPN.com and aol.com, according to SimilarTech.com.

“We are making several changes as a result of this operational event,” Amazon said in a statement. “While removal of capacity is a key operational practice, in this instance, the tool used allowed too much capacity to be removed too quickly. We have modified this tool to remove capacity more slowly and added safeguards to prevent capacity from being removed when it will take any subsystem below its minimum required capacity level.”

AWS is the company’s fastest-growing and most-profitable division, generating $3.5 billion in revenue in the fourth quarter. It’s the biggest public cloud-services provider, with data centers around the world that handle the computing power for many large companies, such as Netflix Inc. and Capital One Corp. Amazon and competitors like Microsoft Corp. and Alphabet Inc.’s Google are growing their cloud businesses as customers find it more efficient to shift their data storage and computer processes to the cloud rather than maintaining those functions on their own. Widespread adoption also increases the likelihood that problems with one service can have sweeping ramifications online.

Facebook and Google to Build Transpacific Submarine Cable

Brought to you by Data Center Knowledge

Facebook has partnered with Google to pay for construction of what will be one of the highest-capacity submarine cable systems stretching across the Pacific Ocean, connecting Los Angeles to Hong Kong.

This is the second such partnership Facebook has entered and yet another example of change in the submarine cable industry, which has traditionally been dominated by consortia of private and government-owned carriers. Operators of mega-scale data centers who deliver internet services to people around the world – companies like Facebook, Google, Microsoft, and Amazon – have reached a point where their global bandwidth needs are so high that it makes more sense for them to fund cable construction projects directly than to buy capacity from carriers.

In May, Facebook announced it had teamed up with Microsoft on a submarine cable across the Atlantic, linking landing stations in Virginia Beach, Virginia, and Bilbao, Spain. The future transatlantic system, called MAREA, will be operated by Telefonica.

Both Europe and Asia Pacific are important markets for the internet and cloud services giants. The Los Angeles-Hong Kong cable will help improve connectivity between both companies’ data centers in the US and Asia.

The cable will be called Pacific Light Cable Network, taking its name from the third partner on the project: Pacific Light Data Communications.

Both MAREA and PLCN systems will be built by TE SubCom, one of the biggest names in the submarine cable industry.

In addition to simply increasing the amount of bandwidth between the US and Asia, the 120Tbps PLCN system will provide greater diversity in transpacific cable routes, Najam Ahmad, director of technical operations at Facebook, wrote in a blog post announcing the project. “Most Pacific subsea cables go from the United States to Japan, and this new direct route will give us more diversity and resiliency in the Pacific,” he explained.

The FASTER cable system, backed by Google and several Asian telecommunications and IT services companies, came online earlier this year. Another big submarine cable project is the New Cross Pacific Cable System, which is backed by Microsoft and a group of Asian telcos. NCP is expected to come online in 2017. Both will land in Oregon on the US side.

Also this year, Amazon Web Services made its first investment in a submarine cable project, agreeing to become the fourth anchor customer necessary to make the planned Hawaiki Submarine Cable between the US, Australia, and New Zealand possible.

One big way in which PLCN and MAREA will be different from traditional transoceanic cable systems is they will be interoperable with a variety of network equipment, rather than being designed to work with a specific set of landing-station technologies, according to Ahmad. Not only will each user be able to choose what optical equipment fits their needs best, they will be able to upgrade that equipment as better technology becomes available.

“This means equipment refreshes can occur as optical technology improves, including taking advantage of advances made during the construction of the system,” he wrote. “When equipment can be replaced by better technology at a quicker pace, costs should go down and bandwidth rates should increase more quickly.”

Many Yahoo users rushed on Friday to close their accounts and change passwords

Many Yahoo users rushed on Friday to close their accounts and change passwords as experts warned that the fallout from one of the largest cyber breaches in history could spill beyond the internet company’s services.

After Yahoo disclosed on Thursday that hackers had stolen the encrypted passwords and personal details of more than 500 million accounts in 2014, thousands of users took to social media to express anger that it had taken the company two years to uncover the data breach.

Several users said they were closing their accounts.

“We’re probably just going to dump Yahoo altogether,” said Rick Hollister, 56, who owns a private investigation firm in Tallahassee, Florida. “They should have been more on top of this.”

Due to the scale of the Yahoo breach, and because users often recycle passwords and security answers across multiple services, cyber security experts warned the impact of the hack could reverberate throughout the internet.

Several users said they were scrambling to change log-in information, not just for Yahoo but for multiple internet accounts with the same passwords. Accounts at banks, retailers and elsewhere could be vulnerable.

“I suppose a hacker could make the connection between my Yahoo and Gmail,” said Scott Braun, 47, who created a Yahoo email when he was setting up a shop on online retailer Etsy. “They both use my first and last name. Not being a hacker, I don’t know what their capabilities are.”

That concern was echoed in Washington. “The seriousness of this breach at Yahoo is huge,” Democratic Senator Mark Warner said Thursday. The company plans to brief Warner next week about the attack, his office said.

Yahoo has said that it believes that the breach was perpetrated by a state-sponsored actor.

SY Lee, a former Department of Homeland Security spokesman, said that would be of particular concern to the intelligence community, given the interest state-sponsored hackers have in compromising employees with security clearances.

The FBI had not issued specific guidance to its employees on handling their personal Yahoo accounts, a spokeswoman said.

British companies BT Group (BT.L) and Sky Plc (SKYB.L), which use Yahoo to host email for some of their broadband customers, said they were communicating with their users.

Yahoo urged users to change their passwords and security questions, but some said it would be easier just to give up their accounts because they rarely use them.

The company has been losing users, traffic and ad revenue in recent years and over the summer agreed to sell its core business for $4.8 billion to Verizon (VZ.N).


Rachel, a 33-year-old from Newcastle, England, who asked Reuters not to use her last name, said she would be shutting down the Yahoo account she opened in 1999.

Furious that the company had not protected its customers’ data better, she said she thought this could be yet another blow for the email service, which has been overtaken in popularity by Google’s Gmail over the last decade.

But Cody Littlewood, who owns a start-up incubator in Miami Beach, was one of several users who said it was precisely because of the decline in the use of Yahoo’s services that they were not worried about the hack.

“Yahoo is only relevant for fantasy football. Worst case scenario, they get into my account and drop Jamaal Charles,” he said, a reference to the star Kansas City running back who regularly tops fantasy football rankings.

(Additional reporting by Dustin Volz; Editing by Cynthia Osterman)

How Facebook Made Its Data Warehouse Faster

Brought to you by Data Center Knowledge

Facebook says it’s time to change the way we compress digital files because the methods used in nearly all compression today were created in the nineties, when both hardware and the problems compression needed to solve were very different.

The company has created a new compression algorithm – addressing a similar problem the fictional startup Pied Piper became famous for solving in the TV series Silicon Valley – which has already resulted in a significant reduction of storage and compute requirements in Facebook data centers.

Facebook open sourced the algorithm, called Zstandard, this week, hoping it will replace Deflate, the core algorithm in Zip, gzip, and zlib – in other words the data compression standard that’s been reigning for more than two decades.

SEE ALSO: What Facebook Has Learned from Regularly Shutting Down Entire Data Centers

Zstandard enables faster compression that results in smaller files and allows for more options when balancing the essential compression tradeoffs of speed versus compression ratio. Another big advantage: it scales. When Deflate was created, the global scale of data center infrastructure at which internet giants like Facebook operate today was unimaginable.
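The speed-versus-ratio tradeoff is easy to see even with Deflate itself, via Python's stdlib zlib bindings. A quick sketch on synthetic data (the input string is made up for illustration):

```python
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 1000

fast = zlib.compress(data, level=1)   # fastest setting, larger output
small = zlib.compress(data, level=9)  # slowest setting, smallest output

# Both decompress to the original; level 9 just spends more CPU to shave bytes.
assert zlib.decompress(fast) == zlib.decompress(small) == data
print(len(data), len(fast), len(small))
```

Zstandard's claim is that it pushes this tradeoff curve outward: at a comparable ratio it runs faster, and at comparable speed it compresses smaller.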

Facebook’s new compression algorithm was created by Yann Collet, who started working on its precursor, the lz4 algorithm, in his spare time while working in the marketing department of the French telco Orange. Last year, he moved from Paris to Menlo Park to work for Facebook.

He has made more than 1,350 commits to lz4 on GitHub and 400 to Finite State Entropy, a method for transforming symbols into bits during compression that is an alternative to the method used in Deflate. Finite State Entropy is instrumental in making Zstandard both faster and able to deliver better compression ratios, according to a blog post by Collet and his colleague Chip Turner, a software engineer at Facebook.

Compression at Exabyte Scale

The new algorithm has been running in production in Facebook data centers for the last six months, Collet said Tuesday during a presentation at the company’s @Scale conference in San Jose. “We don’t have to sacrifice compression,” he said. “We get more compression with speed.”

Facebook uses it to compress data stored in its Exabyte-scale data warehouses. A single Facebook data warehouse spans thousands of server cabinets, according to Collet. At this scale, even a small optimization can turn into a significant reduction in demand.

“One percent of an Exabyte is still a huge amount of infrastructure,” he said. “So we pay attention to any efficiency gain we can get.”

Replacing zlib with Zstandard resulted in six percent storage reduction in Facebook data warehouses, 19 percent reduction in CPU requirements for compression and 40 percent reduction in CPU requirements for decompression, Collet said.

Taking Advantage of Modern Hardware

The processors that were around when Deflate was created did one task after another, in order. Modern CPUs, by contrast, are very good at doing many things at once, and Zstandard takes advantage of this capability, called parallel execution.

Another modern CPU trick it exploits is called branchless design. Put simply, when performing a task depends on the outcome of another task, instead of waiting for that outcome the processor guesses the most likely outcome and performs the task speculatively.

“Today’s CPUs gamble,” Collet and Turner write. “They do so intelligently, thanks to a branch predictor, which tells them in essence the most probable result.”

The problem arises when the guess is wrong, in which case the CPU has to stop all operations that were started speculatively and start from scratch. This restart, called a “pipeline flush,” is very costly in terms of CPU resources. Zstandard is designed specifically with such branchless algorithms in mind.
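The branchless idea can be sketched in any language, even though Python hides the actual CPU behavior: select between two values with arithmetic so there is no conditional jump to predict. This toy example is purely conceptual, not code from Zstandard:

```python
def branchy_min(a, b):
    if a < b:            # a branch the CPU must predict
        return a
    return b

def branchless_min(a, b):
    lt = int(a < b)      # 1 if a < b, else 0
    # Arithmetic selection: no conditional jump, so nothing to mispredict.
    return a * lt + b * (1 - lt)

for a, b in [(3, 7), (7, 3), (5, 5)]:
    assert branchy_min(a, b) == branchless_min(a, b) == min(a, b)
print("ok")
```

In compiled code the same pattern maps to conditional-move or arithmetic instructions, avoiding the pipeline flushes described above.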

It also uses a lot more memory than zlib, which was limited to a 32KB window. Technically, there is no limit to the amount of memory Zstandard can take advantage of, but to make it compatible with lots of different receiving systems, Collet recommends limiting its memory usage to 8MB.

“This is a tuning recommendation, though, not a compression format limitation,” he and Turner write in the blog post.

Dropbox Haunted by 2012 Security Breach, More Than 68M User Logins Leaked

The 2012 breach of credentials from Dropbox involved over 68 million user accounts, according to new reports. Dropbox was not particularly forthcoming in its initial response to the breach, which it turns out must have represented most, if not all, of its user accounts at the time.

Motherboard reported on Tuesday that it had obtained files indicating that the credentials of 68,680,471 accounts had been leaked, and that their legitimacy had been confirmed by “a senior Dropbox employee.” The credentials have also since been verified independently by security researcher Troy Hunt, who found that the breach included email addresses in plain text and encrypted passwords.

SEE ALSO: Dropbox Said to Discuss Possible 2017 IPO in Talks With Advisers

Dropbox said last week in a blog post that users would be forced to reset passwords created before mid-2012 and not changed since. It also reiterated a recommendation that users enable two-step verification, but said that the measures were purely preventative.

Dropbox head of trust and security Patrick Heim said in a statement that there is no indication that any accounts have been accessed, and that the breached passwords were hashed and “salted” with extra characters.

“We can confirm that the scope of the password reset we completed last week did protect all impacted users,” Heim said. “Even if these passwords are cracked, the password reset means they can’t be used to access Dropbox accounts. The reset only affects users who signed up for Dropbox prior to mid-2012 and hadn’t changed their password since.”
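What "hashed and salted" buys users can be sketched briefly. The article does not say which scheme Dropbox used, so the PBKDF2 construction below is just one standard example: a random per-user salt means identical passwords produce different stored hashes, so an attacker cannot crack them all at once with a single precomputed table.

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Salted, deliberately slow password hash (PBKDF2-HMAC-SHA256)."""
    salt = salt or os.urandom(16)   # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

salt1, h1 = hash_password("hunter2")
salt2, h2 = hash_password("hunter2")
assert h1 != h2                     # same password, different stored hashes
# Verification recomputes the hash using the stored salt:
assert hash_password("hunter2", salt1)[1] == h1
print("ok")
```

Combined with the forced reset, this is why Heim could say the leaked records alone can't be used to access Dropbox accounts, though reused passwords on other sites remain at risk.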

Heim also warned users to be aware of the increased risk of phishing or spam, and to change any breached passwords reused on other sites.

Speaking to Cloud Tech, Kaspersky Lab principal security researcher David Emm lauded Dropbox’s preparation, saying that the password encryption, notice and advice to consumers represent a security step beyond purely defensive strategies. But in a report on ZDNet, the author argues Dropbox’s messaging was off and didn’t convey the seriousness of the issue: “Dropbox, like so many other organizations, is presumably worried that users will be scared away by security breaches, so they soften the language. But experience and research show that when it comes to data breaches, owning up actually increases trust.”