May 6, 2022 - 10 Microservice Best Practices

10 Microservice best practices for your projects

The 80/20 rule is all about focusing on the important things while ignoring everything else. The same idea can be applied to microservice deployments, as we discuss below.

1. Improve productivity with Domain-Driven Design (DDD)

Microservices, ideally, should be designed around business capabilities using DDD. It enables high-level functional cohesion and produces loosely coupled services.

There are two phases to every DDD model: strategic and tactical. The strategic phase ensures that design architecture encapsulates business capabilities. The tactical phase, on the other hand, allows the creation of a domain model using different design patterns.

Entities, aggregates, and domain services are some of the design patterns that might help you design loosely coupled microservices.
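As a hypothetical sketch of those tactical patterns (the names below are illustrative, not taken from any of the case studies in this article), an aggregate in Python can group an entity with the invariants it guards:

```python
from dataclasses import dataclass, field
from uuid import UUID, uuid4

# Entity: identity (track_id) matters more than attribute values.
@dataclass
class TrackEntity:
    track_id: UUID
    title: str

# Aggregate root: the only entry point for mutating its entities,
# so the domain invariant (max playlist size) lives in one place.
@dataclass
class Playlist:
    playlist_id: UUID = field(default_factory=uuid4)
    tracks: list = field(default_factory=list)
    max_tracks: int = 100

    def add_track(self, title: str) -> TrackEntity:
        if len(self.tracks) >= self.max_tracks:
            raise ValueError("playlist is full")
        track = TrackEntity(track_id=uuid4(), title=title)
        self.tracks.append(track)
        return track
```

Because callers can only change the playlist through the aggregate root, the invariant cannot be bypassed, which keeps services that use the model loosely coupled from its internals.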

Learn how SoundCloud reduced release cycle time with DDD.

SoundCloud’s service architecture followed the Backend for Frontend (BFF) pattern. However, it led to complications and concerns around duplicated code. Moreover, business and authorization logic lived inside the BFFs themselves, which was risky.

As a result, they decided to use a Domain-driven design pattern and develop a new approach known as “Value Added Services (VAS).”

There are three service tiers in VAS. The first one is the Edge layer, which functions as an API gateway. The value-added layer is the second, which processes data from different services to provide a rich user experience. Lastly, the “Foundation,” which provides the domain’s building blocks, is the third layer.

Within the DDD pattern, SoundCloud employs VAS as an aggregate. VAS also enables the separation of concerns and provides a centralized orchestration. It can execute authorization and orchestrate calls to associated services for metadata synthesis.

Using the VAS approach, SoundCloud managed to decrease release cycles while improving team autonomy.

2. Have quicker responses with the Single Responsibility Principle (SRP)

SRP is a microservice design principle under which each module, class, or service does one thing and does it well. Each service has its own business logic, specific to the task at hand.

One of the major benefits of SRP is reduced dependencies. No single service is overloaded, because each one is designed to perform a specific task. It also reduces response time, since a service no longer has to wait for a chain of supporting services to execute before answering a user request.
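A minimal, hypothetical sketch of the idea (not code from any company mentioned here): each class does exactly one job, and a coordinating service composes them instead of doing their work itself.

```python
# Hypothetical sketch: one responsibility per class, so a change or failure
# in one does not ripple into the others.
class ProfileFetcher:
    def __init__(self, profiles):
        self._profiles = profiles          # stand-in for a profile store

    def fetch(self, user_id):
        return self._profiles[user_id]

class TokenIssuer:
    def issue(self, user_id):
        return f"token-{user_id}"          # stand-in for real token logic

class ChannelCreator:
    """Composes the single-purpose services instead of doing their work."""
    def __init__(self, fetcher, issuer):
        self._fetcher = fetcher
        self._issuer = issuer

    def create(self, user_id):
        profile = self._fetcher.fetch(user_id)
        token = self._issuer.issue(user_id)
        return {"user": profile["name"], "token": token}
```

Each collaborator can now be tested, replaced, or scaled on its own, which is exactly the reduced-dependency benefit described above.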

Example of how Gojek achieved higher reliability and lower response time using SRP

Gojek is an on-demand marketplace that connects motorcycle drivers and riders. One distinctive feature is a chat capability that lets users talk to drivers through the “Icebreaker” application.

Icebreaker’s heavy reliance on other services to build a communication channel between users and drivers was a hurdle for Gojek. To create a channel, for example, Icebreaker needed to:

  • Authorize API calls
  • Fetch customer’s profile
  • Fetch driver’s details
  • Verify whether the customer-driver profile matches the active order details
  • Create a communication channel

Icebreaker pattern

The problem with depending on several functions/services is that if any one of them fails, the entire chat function fails.

Icebreaker error response

Gojek’s teams applied the single responsibility principle, adding a service for each function, assigning one task to each and reducing the load on any single service.

  • API call function - they added the Kong API gateway for authentication functionality

API Gateway pattern

  • Profile retrieval - Icebreaker needed chat tokens derived from driver and user data stored in separate databases, so this data was copied into Icebreaker’s own data store. As a result, no further retrieval was needed to create the channel.
  • Active booking - Icebreaker needed to verify that an active booking existed when a user hit the channel-creation API. The Gojek team used a worker-server approach to remove this dependency.

Icebreaker worker pattern

So, each time an order is placed, a worker creates a communication channel and stores it in the Redis cache, which a server pushes as per need each time a user hits the channel API.
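The worker/server split can be sketched as follows. This is an illustrative sketch only, assuming a plain dict as a stand-in for the Redis cache; it is not Gojek's actual code.

```python
# Sketch of the worker/server pattern described above. A plain dict stands
# in for the shared Redis cache used in production.
cache = {}

def worker_on_order_placed(order_id, customer_id, driver_id):
    """Worker: runs when an order is created, ahead of any user request."""
    channel = {"order": order_id, "members": [customer_id, driver_id]}
    cache[f"channel:{order_id}"] = channel   # pre-computed, ready to serve

def server_get_channel(order_id):
    """Server: the channel API just reads the pre-built channel."""
    return cache.get(f"channel:{order_id}")
```

The expensive work happens once, at order time; the user-facing API becomes a cheap cache lookup, which is where the response-time reduction comes from.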

As a result, Gojek reduced its response time by 95% with the single responsibility principle approach.

3. Enable service autonomy with independent microservices

Making microservices independent takes service isolation a step further. Through independent microservice practices, three forms of independence can be obtained.

  1. Independent service evolution isolates feature development so each service can evolve as needed.
  2. Independent testing focuses tests on a single service’s evolution, which decreases the number of test failures caused by service dependencies.
  3. Independent deployment reduces the chance of downtime during service upgrades. It is especially beneficial if you have cyclic dependencies during the app’s deployment.

Amazon’s single-purpose functions and independent microservices’ management issues

Back in 2001, developers at Amazon found it difficult to maintain a deployment pipeline with a monolithic architecture. So, they chose to migrate to a microservice architecture, but the real challenge began after the migration.

Amazon teams pulled out single-purpose units and wrapped them with web interfaces. It was undoubtedly an efficient solution, but managing a large number of single-purpose functions became the issue.

Merging the services week after week for deployment became a massive challenge as their number grew. Hence, development teams at Amazon built “Apollo,” an automated deployment system for the decoupled services.

Further, they established a rule that all single-purpose functions must communicate through a web interface (API), and defined a set of decoupling rules that every function had to follow.

Lastly, they had to deal with manual handoffs, which resulted in “dead time.” Eventually, they redesigned the deployment pipeline sequence to reduce manual handoffs and improve the efficiency of the entire system.

4. Embrace parallelism with asynchronous communications

Without proper communication between services, the performance of your microservices can suffer severely. Two communication styles are popularly used for microservices:

  • Synchronous communication is a blocking style in which microservices form a chain of requests. It introduces a single point of failure and lags in performance because of the tight dependencies.
  • Asynchronous communication is a non-blocking protocol that follows event-driven architecture. It allows parallel execution of requests and provides better resilience.

The asynchronous communication protocol is the better option for enhanced communication between microservices. It reduces the coupling between services during the execution of user requests.
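The difference can be sketched with Python's asyncio (an illustrative example, not tied to any framework named in this article): three simulated service calls run concurrently instead of chaining one blocking request after another.

```python
import asyncio
import time

# Each coroutine simulates a downstream service call with a small delay.
async def call_service(name, delay):
    await asyncio.sleep(delay)
    return name

async def handle_request():
    # gather() fires all calls at once; total time is roughly the slowest
    # call, not the sum of all three as it would be in a synchronous chain.
    return await asyncio.gather(
        call_service("auth", 0.05),
        call_service("profile", 0.05),
        call_service("orders", 0.05),
    )

if __name__ == "__main__":
    start = time.perf_counter()
    results = asyncio.run(handle_request())
    print(results, round(time.perf_counter() - start, 2))
```

With a synchronous chain the three 50 ms calls would take about 150 ms in total; run concurrently they finish in roughly 50 ms.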

How Flywheel Sports powered real-time broadcasts with enhanced microservice communication

Flywheel Sports was launching “FlyAnywhere,” a platform for their fitness community that enhanced the bike riding experience through real-time broadcast. Flywheel’s engineers built the platform using a modular approach and microservice architecture.

The platform did, however, have communication challenges, which resulted in network failures and availability issues. To solve these issues, the team created a checklist of required features:

  • Service discovery
  • Messaging
  • Queuing
  • Logging
  • Load balancing

They built “Hydra,” an internal library that supports the features above on top of Redis clusters. They tied each microservice to a shared Redis cluster and used pub/sub (asynchronous communication) for inter-process communication.

Hydra module

Not just that, Hydra also helped them mitigate the single-dependency issues of microservices by enabling features like:

  • Inter-service communication
  • Service discovery
  • Load balancing and routing
  • Self-registration of services with zero configuration
  • Job queues

Overall, we can conclude that asynchronous communication helped Flywheel establish a solid foundation for delivering real-time broadcasts to its FlyAnywhere consumers via Hydra.

5. Separate microservice databases to reduce latency

Although microservices are loosely coupled, with a shared database they all retrieve data from the same datastore. The database then has to deal with many concurrent queries, and latency suffers. The answer is a distributed database setup in which each service has a data store of its own.

A separate database per microservice allows each service to store data locally, in effect caching it, which reduces latency. Security and resilience also improve, because a distinct data store removes the single point of failure.
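A minimal sketch of database-per-service, assuming in-memory SQLite as a stand-in for two physically separate databases (the service names are hypothetical): each service owns its store and schema, and neither can see the other's tables.

```python
import sqlite3

# Each service creates and owns its private datastore.
class OrderService:
    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE orders (id TEXT, total REAL)")

    def place(self, order_id, total):
        self.db.execute("INSERT INTO orders VALUES (?, ?)", (order_id, total))

class ProfileService:
    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE profiles (id TEXT, name TEXT)")

    def add(self, user_id, name):
        self.db.execute("INSERT INTO profiles VALUES (?, ?)", (user_id, name))

    def get_name(self, user_id):
        row = self.db.execute(
            "SELECT name FROM profiles WHERE id = ?", (user_id,)).fetchone()
        return row[0] if row else None
```

Because each schema is private, one service can change or scale its storage without coordinating with the other, which is the property the pattern buys.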

Twitter’s ability to process millions of QPS was improved through a dedicated microservice datastore.

Twitter migrated from monolithic software architecture to a microservices architecture in 2012. It utilized multiple different services, including Redis and Cassandra, to handle about 600 requests per second. However, as the platform scaled, it needed a database solution that was scalable, resilient, and capable of handling more queries per second.

Monorail architecture

Initially, Twitter was built on MySQL and moved from small database instances to larger ones. This led to large database clusters, across which moving data was a time-consuming effort.

As a solution to this problem, Twitter incorporated several changes. One of which is the introduction of Gizzard, a framework that helped them create distributed datastores. It works as a middleware networking service and handles failures.

Further, Twitter added Cassandra as a data storage solution. Though Gizzard and Cassandra helped Twitter handle data queries, the latency problem persisted. They needed millions of queries per second with low latency in a real-time environment.

So, they created an in-house distributed database called “Manhattan” to improve latency and handle millions of queries per second. Twitter improved reliability, availability, and latency, among other things, with this distributed system. Furthermore, Twitter migrated data from MySQL to Manhattan and adopted additional storage engines to serve different traffic patterns.

Another key aspect of Twitter’s dedicated database solution was the use of Twemcache and Redis. It helped them protect backing data stores from heavy read traffic.

A dedicated microservice datastore approach helped Twitter:

  • Run more than 20 production clusters
  • Operate more than 1,000 databases
  • Manage tens of thousands of nodes
  • Handle tens of millions of queries per second (QPS)

6. Containerize microservices to improve process efficiency

Containerization of microservices is one of the most effective best practices. Containers allow you to package only the bare minimum of program configurations, libraries, and binaries. As a result, a containerized service is lightweight and portable across environments.

Apart from this, containers share the host kernel and operating system, which removes the need for a separate OS per instance. Containerization provides many benefits:

  • Process isolation with minimal resources
  • Smaller memory footprint
  • Higher consistency across environments
  • Insulation from sudden changes in the outside environment
  • Optimized costs and quicker iterations
  • Rapid rollouts and rollbacks

Learn how Spotify migrated 150 services to Kubernetes to process 10 million QPS.

Spotify has been containerizing microservices since 2014; however, by 2017 it realized that its home-grown orchestration system, Helios, could not iterate quickly enough for 200 autonomous production teams.

For Spotify, Kubernetes was the solution to Helios’ limitations. To avoid putting all their eggs in one basket, Spotify engineers migrated some services to Kubernetes while it ran alongside Helios.

They used Kubernetes APIs and extensibility features for the integration after a thorough investigation of the core technical challenges. Spotify then accelerated the migration in 2019, focusing on stateless services, and migrated over 150 services to handle 10 million requests per second.

7. Increase native UI capabilities with micro frontend

Micro frontend architecture is a method of breaking down a monolithic frontend into smaller elements. It follows the microservice architecture and enables individual UI element upgrades. With this approach, you can make changes to individual components and test and deploy them.

Further, micro frontend architecture also helps in creating native experiences. For example, it enables the usage of simple browser events for communication that are easier to maintain than APIs. Micro frontends improve CI/CD pipelines by enabling faster feedback loops. So, you can build a frontend that is both scalable and agile.

How Facebook improved web page latencies with BigPipe

In 2009, Facebook was struggling with the frontend of its website and wanted to reduce loading times. The traditional frontend architecture could not overlap browser rendering with page generation, and that overlap was exactly the optimization needed to cut latency and response time.

That is why Facebook built a micro frontend solution called BigPipe, which breaks a web page into smaller components called “pagelets.” Using BigPipe, Facebook improved page latency across browsers.

Over time, modern micro frontend architectures have evolved beyond web pages to support use cases like web apps and mobile applications as well.

Improved latency by BigPipe

8. Secure microservices for data protection

Microservices communicate with external services or platforms through APIs, and securing that communication is essential. Without effective precautions, data can be compromised, and hackers can take control of core services and disrupt your app’s operations. To put the stakes in perspective, cybercrime is estimated to cost businesses more than $2.9 million every minute.

So, microservices security is critical to your business. There are many ways to secure microservices, such as:

  • SSL/TLS encryptions
  • Multi-factor authentications
  • Restricted data access
  • Web application firewalls
  • Vulnerability scanning
  • Penetration testing
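As one hedged illustration of the first item on that list, Python's standard `ssl` module can build a client-side TLS context with certificate and hostname verification left on, which is the baseline for service-to-service calls (a minimal sketch, not a complete security configuration):

```python
import ssl

def make_client_context():
    # create_default_context() enables certificate verification and
    # hostname checking by default; we additionally refuse legacy protocols.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

Such a context would then be passed to the HTTP client making the inter-service call, so that a service never silently accepts an unverified peer.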

OFX secured microservices with a middle-tier security tool

OFX is an Australian international financial transfer institution that processes more than $22 billion worth of transactions every year.
After migrating to the cloud, OFX needed a highly secure solution to increase visibility and protect against the cyber threats catalogued by the Open Web Application Security Project (OWASP).

OFX partners and external services communicate with the microservices through APIs in an internal network. Therefore, they needed to improve security and visibility to properly verify access requests from external platforms.

To tackle this, OFX deployed a security tool in the mid-tier environment with an agent on their web servers to have visibility of several aspects, including,

  • Detection of suspicious patterns
  • Monitoring of login attempts
  • Blocking malicious traffic
  • Extensive penetration testing

By adding a security tool in the middle tier, security teams and cloud architects can track every API interaction. It helped them detect anomalies and secure their microservices.

9. Simplify parallel programming with immutable APIs

Microservices and immutability both share the idea of parallelism, which also helps in applying the Pareto principle. In addition, parallelism allows you to accomplish more in less time.

Immutability is a concept where data or objects, once created, are not modified. Therefore, parallel programming is much easier, especially when using microservice architecture.

Understanding how immutable containers improve security, latency, and more

Let’s consider a use case like an eCommerce web app that needs integration of external payment gateways and third-party services.

Integrating external services requires APIs for microservice communication and data exchange. Traditionally, APIs are mutable, offering the power to create mutations as needed.

The problem with mutable APIs, however, is their susceptibility to cyber-attacks: attackers who gain shell or data access can inject malicious code.

Immutability with containerized microservices, on the other hand, improves security and data integrity. With immutable containers, you simply replace faulty containers instead of fixing or upgrading them in place. In other words, immutable APIs can help your eCommerce platform secure users’ data.

Another key advantage of immutable APIs is parallel programming. A major caveat of concurrent programs is that changes in one thread impact other threads, which forces programmers to figure out the context in which their thread executes.

Immutable APIs solve this problem by restricting the side effects of one thread on others. The state cannot change, no matter which version of an object a thread accesses; if a thread needs an altered version of an object, a new one is created in parallel. So you can execute multiple threads in parallel, improving programming efficiency.
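In Python, the idea can be sketched with a frozen dataclass (an illustrative example with hypothetical names): threads can never observe a half-updated object, because "changing" it always produces a new one.

```python
import dataclasses
from dataclasses import dataclass, replace
from concurrent.futures import ThreadPoolExecutor

# frozen=True makes instances immutable: assignment to a field raises
# FrozenInstanceError instead of mutating shared state.
@dataclass(frozen=True)
class Price:
    amount: int
    currency: str = "USD"

def apply_discount(price, pct):
    # replace() returns a NEW Price; the original is untouched, so many
    # threads can derive values from the same object without locks.
    return replace(price, amount=price.amount * (100 - pct) // 100)

base = Price(amount=200)
with ThreadPoolExecutor() as pool:
    discounted = list(pool.map(lambda p: apply_discount(base, p), [10, 25, 50]))
```

No synchronization is needed here precisely because no thread can alter `base`; each one gets its own derived copy.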

10. Increase delivery speeds with a DevOps culture

DevOps is a set of practices that breaks the siloed operational and development capabilities for enhanced interoperability. Adopting DevOps can help your organization with a cohesive strategy and efficient collaboration, among other benefits.

DocuSign reduced errors and improved CI/CD efficiency with DevOps.

DocuSign introduced the e-signing technology using the Agile approach for software development. However, they soon realized that the lack of collaboration between individual teams resulted in failures.

DocuSign’s business model, which involved contracts and signatures, needed continuous integration. In addition, the exchange of signatures and approvals needs to be error-free, as a single misattribution can lead to severe problems.

So, DocuSign adopted a DevOps culture to improve collaboration between operations and development team members. Despite the cultural shift that DevOps adoption brought, the CI/CD problem persisted, so they used application mocks for internal APIs to support continuous integration.

The application mock tool offers a mock endpoint and response. DocuSign combined it with incident management and tested the app before release through simulations. This helped them quickly build, test, and release applications through a cohesive strategy.

Simulations allowed them to test the app’s behavior in real-life scenarios, improve fault isolation and make quick changes. So, they could continuously test, integrate changes, and have continuous delivery cohesively.
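The mock-endpoint idea can be sketched with Python's standard library (a hypothetical stand-in, not DocuSign's actual tool): a tiny HTTP server returns a canned response so dependent services can be integration-tested before the real API exists.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical mock of a signature API: every GET returns a fixed payload.
class MockSignatureAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "signed", "envelope": "demo"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep test output quiet
        pass

def start_mock(port=0):
    """Start the mock on an ephemeral port in a background thread."""
    server = HTTPServer(("127.0.0.1", port), MockSignatureAPI)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A CI pipeline points the service under test at the mock's address, exercises the integration, and shuts the mock down afterwards.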

80/20 the Simform way!

The 80/20 principle is all about reducing effort and maximizing gains. These microservice best practices can help you achieve maximum gains, but which ones to choose remains use-case specific. Take the example of CrayPay, which needed an m-payment solution for retail payments.



May 13, 2017 - Ransomware Fuck Society Solved Or Not


Regarding the ransomware attack first detected in Spain (at the company Telefonica), with massive impact on Windows machines across multiple versions, which uses a version of the WannaCry malware, the following information is provided:


Attacks have been detected in 74 countries around the world, with Russia the most affected.

Source: Kaspersky Lab

Recommended High-Level Mitigations

Reactive Measures

  • Disconnect the machine from the network.

  • Apply current anti-ransomware tools (where available) released for already-known strains, for example: HidraCrypt, Petya, etc.

  • Report this type of crime to the cybercrime unit, to send the signal that these incidents are indeed crimes and that the criminal liability of those involved must be pursued, given the impact on public trust, institutional systems, and the privacy of citizens' data.

  • If the ransomware is identified while it is still encrypting the disk, remove the disk and look for a possible encryption key to reverse the process.

Preventive Measures:

  • Check whether company machines have Microsoft's MS17-010 update patch installed.

  • Stop the SMB service via GPO policies.

  • Detect new machines on the internal network.

  • Review firewall rules for communication to the internet or untrusted networks over port 445 (SMB). Block if suspicious.

  • Enable SNORT, IDS, and IPS rules based on the indicators in this document.
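As a hedged companion to the firewall review above, a short Python helper (an illustrative sketch, not an official tool) can report whether TCP port 445 (SMB) on a host accepts connections; reachable SMB from untrusted networks was the exposure this attack exploited.

```python
import socket

def port_open(host, port=445, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising.
        return s.connect_ex((host, port)) == 0
```

Run only against hosts you are authorized to audit; any host answering on 445 from an untrusted network segment should be reviewed against the firewall rules above.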

Behavior of the Cyberattack

Note: Since this is an attack in progress, there is no complete certainty about how it unfolds; what is described in this section is the product of analysis and of information shared between cybersecurity centers. Once the cyberattack has been mitigated, full forensic analysis will be performed to determine the origin and the flaws exploited.

The malware is believed to have infected companies through a vulnerability in the SMB service (port 445) of Windows machines which, once exploited, allows complete remote control of the machine and, in this case, downloading and executing the ransomware. This information is based on communications from Spain's CCN-CERT.

This vulnerability was patched by Microsoft on March 14, 2017 under the code MS17-010, around the time of the leak of the NSA's tools by the Shadow Brokers hacker group. That leak contained the exploits needed to take advantage of this vulnerability, even including a graphical interface for ease of use. Multiple guides on the internet explained step by step (with photos and videos) how to exploit it.

Exploiting the vulnerability is fairly simple and is done over the SMB protocol (port 445) of Windows machines using the EternalBlue technique with DoublePulsar. Once the vulnerability has been exploited and the backdoor installed, the ransomware is downloaded and the infection carried out.

According to Spain's CCN-CERT, the ransomware used is WannaCry, which, once it has infected the machine, encrypts all the files on the hard drive and demands a ransom, which must be paid in Bitcoin over the Tor network.

Affected Systems

The following Windows versions with the SMB service enabled may be affected:

  • Microsoft Windows Vista SP2
  • Windows Server 2008 SP2 and R2 SP1
  • Windows 7
  • Windows 8.1
  • Windows RT 8.1
  • Windows Server 2012 and R2
  • Windows 10
  • Windows Server 2016

Technical Context

The sections below describe the different technical aspects of the attack: vectors, exploited vulnerabilities, hashes, Snort rules, and so on.

Malware Hashes

The following table lists the signatures of the different versions of the malware used.

Type Hash
FileHash-SHA256 ed01ebfbc9eb5bbea545af4d01bf5f1071661840480439c6e5babe8e080e41aa
FileHash-SHA256 b9c5d4339809e0ad9a00d4d3dd26fdf44a32819a54abf846bb9b560d81391c25
FileHash-SHA256 2584e1521065e45ec3c17767c065429038fc6291c091097ea8b22c8a502c41dd
FileHash-SHA256 ed01ebfbc9eb5bbea545af4d01bf5f1071661840480439c6e5babe8e080e41aa
FileHash-SHA256 09a46b3e1be080745a6d8d88d6b5bd351b1c7586ae0dc94d0c238ee36421cafa
FileHash-SHA256 24d004a104d4d54034dbcffc2a4b19a11f39008a575aa614ea04703480b1022c
FileHash-SHA256 f8812f1deb8001f3b7672b6fc85640ecb123bc2304b563728e6235ccbe782d85
FileHash-MD5 509c41ec97bb81b0567b059aa2f50fe8
FileHash-MD5 7bf2b57f2a205768755c07f238fb32cc
FileHash-MD5 7f7ccaa16fb15eb1c7399d422f8363e8
FileHash-MD5 84c82835a5d21bbcf75a61706d8ab549
FileHash-MD5 db349b97c37d22f5ea1d1841e3c89eb4
FileHash-MD5 f107a717f76f4f910ae9cb4dc5290594
FileHash-SHA1 51e4307093f8ca8854359c0ac882ddca427a813c
FileHash-SHA1 87420a2791d18dad3f18be436045280a4cc16fc4
FileHash-SHA1 e889544aff85ffaf8b0d0da705105dee7c97fe26
FileHash-SHA1 45356a9dd616ed7161a3b9192e2f318d0ab5ad10
FileHash-SHA1 bd44d0ab543bf814d93b719c24e90d8dd7111234
FileHash-SHA256 2ca2d550e603d74dedda03156023135b38da3630cb014e3d00b1263358c5f00d
FileHash-SHA256 4a468603fdcb7a2eb5770705898cf9ef37aade532a7964642ecd705a74794b79
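Hashes like those above can be checked against suspicious files with a short script. This is an illustrative sketch; the set below holds two of the published SHA256 values from the table, and a real deployment would load the full IoC list.

```python
import hashlib

# Two of the published WannaCry SHA256 IoCs from the table above.
WANNACRY_SHA256 = {
    "ed01ebfbc9eb5bbea545af4d01bf5f1071661840480439c6e5babe8e080e41aa",
    "b9c5d4339809e0ad9a00d4d3dd26fdf44a32819a54abf846bb9b560d81391c25",
}

def sha256_of(path):
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_sample(path):
    return sha256_of(path) in WANNACRY_SHA256
```

Walking a directory tree and calling `is_known_sample` on each file gives a quick, if basic, IoC sweep; a match warrants immediate isolation of the machine.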

Security Patches

The following table lists the patches that mitigate the vulnerabilities exploited by this malware:

Name Vulnerability Patch
EternalBlue EternalSynergy EternalRomance EternalChampion MS17-010 msft-cve-2017-0143 msft-cve-2017-0144 msft-cve-2017-0145 msft-cve-2017-0146 msft-cve-2017-0147 msft-cve-2017-0148
EmeraldThread MS10-061 WINDOWS-HOTFIX-MS10-061
EducatedScholar MS09-050 WINDOWS-HOTFIX-MS09-050
EclipsedWing MS08-067 WINDOWS-HOTFIX-MS08-067

Available Exploits

The following table lists the exploits used by the malware to exploit the vulnerabilities.

Name Vulnerability Metasploit Module
EternalBlue MS17-010 auxiliary/scanner/smb/smb_ms17_010
EmeraldThread MS10-061 exploit/windows/smb/psexec
EternalChampion MS17-010 auxiliary/scanner/smb/smb_ms17_010
EternalRomance MS17-010 auxiliary/scanner/smb/smb_ms17_010
EducatedScholar MS09-050 auxiliary/dos/windows/smb/ms09_050_smb2_negotiate_pidhigh, auxiliary/dos/windows/smb/ms09_050_smb2_session_logoff, exploits/windows/smb/ms09_050_smb2_negotiate_func_index
EternalSynergy MS17-010 auxiliary/scanner/smb/smb_ms17_010
EclipsedWing MS08-067 auxiliary/scanner/smb/ms08_067_check exploits/windows/smb/ms08_067_netapi

Malware IP Addresses


Ransomware Images


URLs discovered in use by the malware

  • hxxtp://www[.]btcfrog[.]com/qr/bitcoinpng[.]php?address
  • hxxp://www[.]rentasyventas([.])com/incluir/rk/imagenes[.]html
  • hxxp://www[.]rentasyventas[.]com/incluir/rk/imagenes[.]html?retencion=081525418
  • hxxp://www[.]iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea[.]com

TOR nodes used:



Rules for detecting IoCs:

The following rules help rapidly detect an infection.

Modified files:



The following Snort rules help rapidly detect the infection.

alert tcp $EXTERNAL_NET any -> $HOME_NET 445 (msg:"OS-WINDOWS Microsoft Windows SMB remote code execution attempt"; flow:to_server,established; content:"|FF|SMB3|00 00 00 00|"; depth:9; offset:4; byte_extract:2,26,TotalDataCount,relative,little; byte_test:2,>,TotalDataCount,20,relative,little; metadata:policy balanced-ips drop, policy connectivity-ips drop, policy security-ips drop, ruleset community, service netbios-ssn; reference:cve,2017-0144; reference:cve,2017-0146; reference:url,; reference:url,; classtype:attempted-admin; sid:41978; rev:3;)
alert tcp any any -> $HOME_NET 445 (msg:"OS-WINDOWS Microsoft Windows SMB large NT RENAME transaction request information leak attempt"; flow:to_server,established; content:"|FF|SMB|A0 00 00 00 00|"; depth:9; offset:4; content:"|05 00|"; within:2; distance:60; byte_test:2,>,1024,0,relative,little; metadata:policy balanced-ips drop, policy security-ips drop, ruleset community, service netbios-ssn; reference:url,; reference:url,; classtype:attempted-recon; sid:42338; rev:1;)
alert tcp $HOME_NET 445 -> any any (msg:"OS-WINDOWS Microsoft Windows SMB possible leak of kernel heap memory"; flow:to_client,established; content:"Frag"; fast_pattern; content:"Free"; content:"|FA FF FF|"; content:"|F8 FF FF|"; within:3; distance:5; content:"|F8 FF FF|"; within:3; distance:5; metadata:policy balanced-ips alert, policy security-ips drop, ruleset community, service netbios-ssn; reference:cve,2017-0147; reference:url,; classtype:attempted-recon; sid:42339; rev:2;)
alert tcp any any -> $HOME_NET 445 (msg:"DOUBLEPULSAR SMB implant - Unimplemented Trans2 Session Setup Subcommand Request"; flow:to_server, established; content:"|FF|SMB|32|"; depth:5; offset:4; content:"|0E 00|"; distance:56; within:2; reference:url,; sid:1618009; classtype:attempted-user; rev:1;)
alert tcp $HOME_NET 445 -> any any (msg:"DOUBLEPULSAR SMB implant - Unimplemented Trans2 Session Setup Subcommand - 81 Response"; flow:to_client, established; content:"|FF|SMB|32|"; depth:5; offset:4; content:"|51 00|"; distance:25; within:2; reference:url,; sid:1618008; classtype:attempted-user; rev:1;)
alert tcp $HOME_NET 445 -> any any (msg:"DOUBLEPULSAR SMB implant - Unimplemented Trans2 Session Setup Subcommand - 82 Response"; flow:to_client, established; content:"|FF|SMB|32|"; depth:5; offset:4; content:"|52 00|"; distance:25; within:2; reference:url,; sid:1618010; classtype:attempted-user; rev:1;)

Affected Registry Keys

  • HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\IMM
  • HKEY_USERS\S-1-5-21-1547161642-507921405-839522115-1004\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers
  • HKEY_LOCAL_MACHINE\Software\Microsoft\CTF\SystemShared
  • HKEY_USERS\S-1-5-21-1547161642-507921405-839522115-1004
  • HKEY_LOCAL_MACHINE\Software\WanaCrypt0r
  • HKEY_CURRENT_USER\Software\WanaCrypt0r
  • HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-1547161642-507921405-839522115-1004
  • HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager
  • HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\currentVersion\Time Zones\W. Europe Standard Time
  • HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Time Zones\W. Europe Standard Time\Dynamic DST

Ransomware Hash Details

Field Value
FILE NAME WanaDecryptor.exe
FILE SIZE 245760 bytes
FILE TYPE PE32 executable (GUI) Intel 80386, for MS Windows
MD5 7bf2b57f2a205768755c07f238fb32cc
SHA1 45356a9dd616ed7161a3b9192e2f318d0ab5ad10
SHA256 b9c5d4339809e0ad9a00d4d3dd26fdf44a32819a54abf846bb9b560d81391c25
SHA512 91a39e919296cb5c6eccba710b780519d90035175aa460ec6dbe631324e5e5753bd8d87f395b5481bcd7e1ad623b31a34382d81faae06bef60ec28b49c3122a9
CRC32 4E6C168D
SSDEEP 3072:Rmrhd5U1eigWcR+uiUg6p4FLlG4tlL8z+mmCeHFZjoHEo3m:REd5+IZiZhLlG4AimmCo
YARA None matched
% 4a468603fdcb7a2eb5770705898cf9ef37aade532a7964642ecd705a74794b79
% 24d004a104d4d54034dbcffc2a4b19a11f39008a575aa614ea04703480b1022c
% b9c5d4339809e0ad9a00d4d3dd26fdf44a32819a54abf846bb9b560d81391c25
% ed01ebfbc9eb5bbea545af4d01bf5f1071661840480439c6e5babe8e080e41aa


Buenas Prácticas Generales:

  • Tener una declaración de activos críticos actualizada y políticas de protección ad hoc para la protección de dichos activos priorizados en base a los riesgos (probabilidad de materialización de una amenaza versus impacto de dicha materialización, por ejemplo).

  • Verify that critical assets are backed up, with restore tests performed at a frequency consistent with each asset's criticality, the acceptable data-loss window, and the level of confidence in the backup tools deployed.

  • Avoid routine use of administrator accounts, whether domain or local, for tasks that do not require elevated privileges. Day-to-day activities should generally be performed with a standard user profile.

  • Machines that are not running the latest operating system updates and current versions of programs such as Flash, Java, Adobe products, or Internet Explorer should not be connected to the Internet.

  • On end-user machines, to mitigate attack techniques that hide the real extension of files sent to users, force the operating system to display file extensions. Alongside this measure, educate users so they can recognize extensions and know which of them are potentially dangerous. Where possible, apply this control via Group Policy (GPO) to all machines; otherwise, for relevant individual cases, verify in Windows folder options that "Hide extensions for known file types" is disabled.

  • If the organization must keep legacy applications on operating systems that no longer receive security support from the vendor, it should consider not exposing those machines to the Internet, given their high vulnerability and likelihood of being hit by malware.
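For the file-extension point above, one way to apply the setting on a single machine (outside of a GPO) is the Explorer registry value HideFileExt; a per-user sketch for the Windows command prompt:

```shell
:: Show file extensions for the current user (Windows cmd).
:: HideFileExt = 0 means "do not hide extensions for known file types".
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced" ^
    /v HideFileExt /t REG_DWORD /d 0 /f
:: Restart Explorer (or log off and on) for the change to take effect.
```

In a domain, the equivalent GPO setting should be preferred so the control is enforced centrally.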
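The restore-test recommendation above can be automated in its simplest form. A sketch with placeholder paths (data/ stands in for a critical asset, backup.tar.gz for the backup artifact):

```shell
# Create sample data standing in for a critical asset
mkdir -p data && echo "important record" > data/file.txt
# Take the backup
tar -czf backup.tar.gz data
# Restore into a scratch directory and compare against the original
mkdir -p restore-test && tar -xzf backup.tar.gz -C restore-test
diff -r data restore-test/data && echo "restore OK"
```

A real restore test would of course run against production-grade backups on separate hardware, but even this minimal check catches silently corrupt or empty archives.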


Centro de Ciber Inteligencia de Entel


  • CCN-CERT alert:
  • Microsoft Security Bulletin:
  • Information on SMBv1, SMBv2, and SMBv3 in Windows Vista, Windows Server 2008, Windows 7, Windows Server 2008 R2, Windows 8, and Windows Server 2012

  • CCI-ENTEL internal laboratory


Mar 15, 2016 - Starting 2016 with Vagrant


Playing with Vagrant and starting to write something!

Vagrant has basically been one of the tools I have used for a long time to keep my computer clean: it makes it easy to create and use any development environment, although many companies and legacy systems in my country still have not adopted it. Docker, on the other hand, keeps drawing attention; even though the two work and are used in different kinds of scenarios, both can create a standardized development environment.

This post is specifically about Vagrant, since one of its advantages for creating local development environments is that it is cross-platform.

VirtualBox or VMware can be used as the virtualization provider. In my case I keep VMware for when I want a complete operating-system environment, and VirtualBox for running Docker Machine and Vagrant machines. Vagrant uses "boxes" to encapsulate the essentials of an operating system and, much like GitHub, there is a general repository where we can find countless box distributions for whatever purpose we want.

This is based on a post by @greyfocus, one of the ones I have liked the most, which I used, for example, to create a local environment to run Jekyll for this blog.

PS: I have translated it within reason; any corrections are welcome.


To install Jekyll using Vagrant, the following steps are required; this works on Windows, macOS, or Linux:

  1. Install Vagrant
  2. Install VirtualBox
  3. Install the Vagrant plugin: vagrant plugin install vagrant-vbguest
  4. Clone the repo, which has all the magic:
     git clone
  5. Start the Vagrant magic:
     vagrant up

Working with Vagrant

The concept is the same as having the environment on your local operating system. Use your favorite editor or IDE (or whatever you prefer for complicating your life) to modify the .md files, or simply vi or vim. The Vagrantfile in the repository we cloned contains a recipe for getting started, plus a call to a shell script for provisioning. This recipe declaratively tells Vagrant which box to use (in this case an Ubuntu box), the shared folder, and the port to forward. By default, Vagrant creates a shared folder between the directory where vagrant up is executed and the guest at /vagrant.
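As a sketch, a minimal Vagrantfile along those lines might look like the following; the box name, port, and provisioning script name are illustrative assumptions, not the exact contents of the cloned repo:

```ruby
# Minimal Vagrantfile sketch -- values are illustrative.
Vagrant.configure("2") do |config|
  # Base box (assumed Ubuntu, as described above)
  config.vm.box = "ubuntu/trusty64"
  # Forward Jekyll's default port from guest to host (assumed 4000)
  config.vm.network "forwarded_port", guest: 4000, host: 4000
  # Default shared folder: where `vagrant up` runs maps to /vagrant
  config.vm.synced_folder ".", "/vagrant"
  # Provisioning via a shell script (hypothetical name)
  config.vm.provision "shell", path: "provision.sh"
end
```

Running vagrant up in the directory containing this file downloads the box if needed, boots the VM, and runs the provisioning script.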

When it is ready... and you should not be impatient, because the delay is not only because your computer is a turtle, but rather because the "recipe" often installs libraries or runs updates. Anyway, to connect to the Vagrant machine over SSH from the console:

vagrant ssh

In our particular case, since we are using Jekyll, we run build and serve to get Jekyll running:

  jekyll build 
  jekyll serve --host

Since we did not configure any IP, we can see our Vagrant Jekyll machine running on localhost, that is:

Happy vagrant up!

Jun 27, 2015 - Install Drush on CentOS or any Unix-based system


Step by step

  1. Check if you have root access
  2. sudo yum install php-pear (RedHat based)
  3. pear channel-discover
  4. pear install drush/drush
  5. Check if Console table library is installed 5.1 if yes just use drush on you root path of drupal installtion 5.2 else install Console table: (more info)[] - sudo requiered - cd /usr/share/pear/drush/lib/ - wget - tar -zxvf Console_Table-1.1.3.tgz - rm -fr .tgz - run *drush, test and use =)

Jun 25, 2015 - MySQLi extension for a Linux VPS via WHM


To enable the MySQLi extension from your WHM panel (root access required), follow these steps:

  1. Log into the WHM with your root credentials.
  2. Go to the “EasyApache (Apache Update)” menu, located in the “Software” section or use the search box to find it.
  3. On the EasyApache page, make sure your Previously Saved (Default) configuration is selected and click on “Customize Profile”.
  4. Keep clicking “Next Step”, until you reach the “Short Options List” page and scroll to the bottom of the page.
  5. Click on the “Exhaustive Options List” button.
  6. On this page, scroll down to the PHP section and find MySQL “Improved” extension. You can use the page search option of your browser to locate the extension faster (Ctrl+F).
  7. Ensure the check box is filled in and scroll to the bottom.
  8. Click the “Save Only” button.
  9. On the next page, click the “Build profile I just saved” button.
  10. A pop-up box will appear asking you to recompile Apache and PHP; select "Yes" and "I understand", if prompted.
  11. Wait until the build output is complete; the MySQLi extension should then be installed/enabled. Please do not log out of WHM or interrupt the rebuild process; wait for it to finish.
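Once the rebuild finishes, you can confirm from a shell on the server that the extension is actually loaded (assuming CLI PHP uses the same build as Apache's):

```shell
# Lists loaded PHP modules; "mysqli" should appear when the extension is enabled
php -m | grep -i mysqli
# Or check programmatically from PHP itself
php -r 'var_dump(extension_loaded("mysqli"));'
```

If the CLI and web builds differ on your server, check a phpinfo() page served through Apache instead.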