A medical clinic in Germany has been left vulnerable after a staff member built a custom patient management system with an AI coding agent. The application was published to the internet without basic security measures, leaving patient data exposed and unencrypted. An investigation revealed that the database was hosted on a US server without the required data-processing agreement, and that voice recordings were being sent to major US-based AI companies without patient consent. The staff member who built the system was unaware of these risks and vulnerabilities, highlighting how easily non-technical individuals can now ship software using AI coding agents.
The system's architecture was rudimentary: a single HTML file containing the markup, styles, and all application logic as inline JavaScript. Access control was enforced only in the browser, never on the server, so anyone who called the backend directly could read the data. External AI APIs were used to transcribe and summarize audio recordings, widening the exposure further. The incident raises questions about the lack of technical expertise among those using AI coding agents and the consequences of building complex software without understanding it.
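The clinic's code has not been published, so the following is only a minimal hypothetical sketch of the pattern described above, contrasting a backend that trusts the client with one that verifies the session server-side. All names here (`patients`, `getPatient`, `session`, the owner field) are invented for illustration:

```javascript
// Illustrative data store; none of these names come from the clinic's system.
const patients = new Map([
  ["p1", { id: "p1", name: "Test Patient", owner: "dr-a" }],
]);

// Vulnerable pattern: the server returns any record it is asked for,
// assuming the client-side UI already hid the button from unauthorized users.
function getPatientVulnerable(patientId) {
  return patients.get(patientId) ?? null;
}

// Safer pattern: the server itself checks authentication and per-record
// authorization before releasing anything, regardless of what the UI shows.
function getPatient(session, patientId) {
  if (!session || !session.authenticated) {
    throw new Error("401 Unauthorized"); // no valid session at all
  }
  const record = patients.get(patientId);
  if (!record) return null;
  if (record.owner !== session.userId) {
    throw new Error("403 Forbidden"); // authenticated, but not this record's owner
  }
  return record;
}
```

The point of the sketch: hiding a link or a button in the HTML is not access control. Any visitor can replay the underlying request directly, so the check must live in the function that actually hands out the data.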
The clinic's patient data could be retrieved with just a few lines of code, raising serious concerns about data protection and compliance with regulations such as the nDSG and professional secrecy laws. The incident serves as a warning to founders and entrepreneurs tempted to use AI coding agents for quick development while lacking the technical expertise to secure the result.
The clinic's response was inadequate: the staff member claimed to have fixed the problem by adding basic authentication and rotating access keys. Neither measure addressed the underlying vulnerabilities, so patient data remained exposed. The incident underscores the need for proper training and education in software development, particularly when AI coding agents let people without technical expertise assemble complex systems.
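To see why rotating access keys cannot help in a single-file architecture, consider a hypothetical page that embeds its AI API key in the HTML it serves to every visitor. The key name, value, and endpoint below are invented for illustration; the mechanism, not the specifics, is the point:

```javascript
// Hypothetical served page: the "secret" ships inside the client-side code,
// so every browser (and every attacker) receives it in plain text.
const servedHtml = `
  <script>
    const AI_API_KEY = "sk-rotated-but-still-public"; // visible to anyone who views source
    fetch("https://api.example.com/transcribe", {
      headers: { Authorization: "Bearer " + AI_API_KEY },
    });
  </script>`;

// Extracting the rotated key takes one regex over the page source.
const leakedKey = servedHtml.match(/const AI_API_KEY = "([^"]+)"/)[1];
```

Rotation only swaps one public value for another; the fix is to keep the key on a server and have the backend call the AI API on the client's behalf.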