The spread of AI and cloud services has made work faster and more efficient than ever. At the same time, environments where tools integrate automatically heighten the risk of information leaking in unforeseen ways and of unauthorized access from outside.
Small oversights in cloud configuration, generative AI usage, or information handling can unexpectedly lead to data breaches.
Furthermore, as related regulations evolve, data breaches are no longer mere system failures: they have become management-level issues that bear directly on a company's legal liability.
This article organizes the patterns of information leaks most likely to occur in the AI and cloud era, and clearly explains the countermeasures companies should take from four perspectives: "People," "Rules," "Technology," and "Contracts."
To balance convenience and safety, it lays out the key points of security design that can be applied in day-to-day operations.
Why Information Leaks Occur More Frequently in the AI and Cloud Era
First, we’ll explain the factors that commonly lead to information leaks in real-world scenarios, categorized into three representative patterns.
① Information Leaks Due to Configuration Errors
While cloud services can be set up in just minutes, their sharing, permission, and external-integration settings are highly granular. Common configuration mistakes like the following can become direct entry points for information leaks (some can even be detected automatically, as the sketch at the end of this section shows):
Setting a shared link to “Anyone with the link can view”
Accidentally publishing unrelated documents when setting folder permissions
Leaving guest invitations enabled, allowing access to accounts belonging to former employees or external parties
Cloud incidents are characterized less by systems being "breached" and more by users inadvertently creating exposed access points themselves.
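Many of these misconfigurations can be caught mechanically. As a minimal sketch, the script below uses boto3 to flag AWS S3 buckets whose public access is not fully blocked; the same idea carries over to sharing settings in other cloud services, and the output format here is purely illustrative.

```python
# A minimal sketch: audit S3 buckets for public-access settings, assuming
# boto3 is installed and AWS credentials are already configured.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        fully_blocked = all(config.values())
    except ClientError as e:
        # Having no PublicAccessBlock configured at all is itself a finding.
        if e.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            fully_blocked = False
        else:
            raise
    if not fully_blocked:
        print(f"[WARN] {name}: public access is not fully blocked")
```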
② Information Input into Generative AI Can Leak
Most issues with generative AI stem not from specialized attacks, but from minor oversights in input during routine tasks. For example, text fed into AI may inadvertently contain information like:
Personal information such as customer names, addresses, and phone numbers
Confidential data like quotes, contracts, and unpublished materials
Sensitive information including inquiry histories and medical records
Technical information like source code, design documents, and API keys
Always remember that with generative AI, “input = sharing.” This is the fundamental difference from traditional tools.
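One way to make "input = sharing" concrete in day-to-day operations is a pre-submission filter that scans text for obvious personal data and secrets before it ever reaches an external AI service. The sketch below is illustrative only; its patterns are far from exhaustive, and a real deployment would need much broader coverage.

```python
# A minimal sketch of a pre-submission filter: block or mask obvious
# personal data and secrets before text reaches an external AI API.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone_jp": re.compile(r"0\d{1,4}-\d{1,4}-\d{3,4}"),
    "api_key": re.compile(r"(?:api[_-]?key|secret)\s*[:=]\s*\S+", re.IGNORECASE),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of patterns found; an empty list means no obvious hits."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

hits = check_prompt("Contact: tanaka@example.co.jp, api_key = sk-xxxx")
if hits:
    print(f"Blocked: possible sensitive data ({', '.join(hits)})")
```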
③ External Integrations Become Entry Points for Leaks
In cloud operations, internal measures alone are insufficient for protection. In practice, integrations with external services like these are increasing:
Account integrations with accounting, chat, storage, and other services
Addition of extensions or plugins
Outsourcing development or operations
Guest accounts for shared folders used by business partners
In this environment, lax management in even one area can become an entry point, allowing damage to spread rapidly. For example, if a business partner’s or external tool’s account is misused, access to your company’s systems and data could be gained through that integration.
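A small periodic check can surface forgotten integration points before they become entry routes. The sketch below assumes a CSV export of accounts with hypothetical columns account, type, and last_login (in ISO format), and flags guest accounts that have been inactive for more than 90 days.

```python
# A minimal sketch: flag stale guest accounts from an account export.
# The CSV file name and columns ("account", "type", "last_login") are
# hypothetical, not tied to any specific service.
import csv
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)
now = datetime.now()

with open("accounts.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        if row["type"] != "guest":
            continue
        inactive = now - datetime.fromisoformat(row["last_login"])
        if inactive > STALE_AFTER:
            print(f"[REVIEW] guest '{row['account']}' inactive for {inactive.days} days")
```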
Cybersecurity Laws Companies Must Comply With
With AI and cloud adoption now commonplace, cybersecurity is no longer an issue for the IT department alone; it bears on the responsibility of the company as a whole.
In cases of data breaches or unauthorized access, simply claiming “it was an attack, so it couldn’t be helped” is insufficient. The company, as the managing entity, faces liability under multiple laws.
Here, we outline key laws companies should particularly heed and their practical implications.
Act on the Protection of Personal Information
The Act on the Protection of Personal Information is the law under which companies are most directly liable in cybersecurity matters. This law imposes obligations on businesses handling personal information of customers, employees, and others, including the “duty to manage it securely,” the “duty to supervise contractors,” and the “duty to report leaks.”
This covers not only names, addresses, and phone numbers, but also medical information, inquiry histories, membership data, and more. If a leak occurs, a company may have to report to the Personal Information Protection Commission and notify the affected individuals, and it may also face claims for damages.
Act on Prohibition of Unauthorized Access to Computer Systems
While this law primarily penalizes intruders, it also expects companies, as administrators of their systems, to maintain management practices that make intrusion difficult.
This includes proper management of IDs and passwords, reviewing access permissions, and regularly verifying that security settings function correctly. Furthermore, if risks are deemed to be increasing, it is crucial to promptly strengthen countermeasures and prepare defenses against unauthorized access.
Moreover, if personal information leaks due to an unauthorized login, the incident can also constitute a violation of the Act on the Protection of Personal Information.
Basic Act on Cybersecurity
This law does not directly impose penalties on companies or the state. Instead, it establishes the “basic policy” for cybersecurity measures across Japan. It stipulates that the state, local governments, companies, and citizens each bear the responsibility to strive to ensure cybersecurity.
Under the Basic Act on Cybersecurity, this effort obligation applies to all businesses regardless of size. The notion that "small companies are exempt" therefore does not hold, making thorough implementation of countermeasures essential.
Basic Design for Information Leak Prevention
Security measures must be implemented by combining four elements: people, rules, technology, and contracts. If any one element is missing, that weakness can lead to unexpected incidents.
People (Education & Habits)
Most information leaks stem not from malice but from carelessness. That is why it is more effective to instill simple habits the front line can act on immediately than to pile up complex regulations. For example, simply organizing and communicating clear guidance on how to create shared links, what information may be taken outside the company, and what must never be input into generative AI can significantly reduce the probability of incidents.
Furthermore, holding regular short study sessions to share actual case studies is more effective for embedding awareness than annual group training sessions. It’s also crucial to foster an organizational culture where near-miss experiences (“close calls”) can be shared openly without blame. Collecting small observations early makes it easier to prevent major incidents.
Rules
Attempting to manage all data with the same strictness inevitably creates operational strain. A more realistic approach is to categorize data by importance. Some information, like personal data, medical records, or undisclosed financial data, must never be shared externally. Other data may be restricted to internal use under certain conditions, while some public information is safe to share externally.
Building on this classification, linking it to standards for what may be input into generative AI, what may be shared in the cloud, and what data may be provided to contractors spares frontline staff the hesitation of wondering "Is this okay?" Structuring rules so they eliminate uncertainty matters more than simply increasing their number. Such a classification can even be expressed directly in code, as sketched below.
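Once the classification exists on paper, it can also be encoded so that tools enforce it consistently. A minimal sketch, with illustrative tier names and destination labels:

```python
# A minimal sketch: encode the three-tier classification as data so that
# tools can enforce it. Tier names and destinations are illustrative.
from enum import Enum

class Tier(Enum):
    CONFIDENTIAL = "confidential"  # personal data, medical records, undisclosed financials
    INTERNAL = "internal"          # internal use only, under conditions
    PUBLIC = "public"              # safe to share externally

ALLOWED_DESTINATIONS = {
    Tier.CONFIDENTIAL: set(),                                     # never leaves the company
    Tier.INTERNAL: {"internal_cloud"},
    Tier.PUBLIC: {"internal_cloud", "generative_ai", "contractor"},
}

def may_send(tier: Tier, destination: str) -> bool:
    return destination in ALLOWED_DESTINATIONS[tier]

assert not may_send(Tier.CONFIDENTIAL, "generative_ai")
assert may_send(Tier.PUBLIC, "contractor")
```

Keeping the mapping as plain data means that when the rules change, only the table changes, not the enforcement logic.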
Technology
You don't need to perfect every security measure at once. The key is first to establish a solid minimum line of defense that reliably prevents incidents. In particular, simply requiring multi-factor authentication (an additional verification step beyond the password) for high-privilege accounts, such as those for administrators, accounting, HR, and sales, can prevent many account takeovers.
Furthermore, rigorously enforce the principle of “least privilege,” granting only the permissions necessary for specific tasks. When exceptions requiring stronger privileges are made, always set clear expiration dates and manage them strictly. Additionally, it’s vital to log actions like accessing critical data, creating shareable links, and modifying administrator privileges so they can be reviewed later.
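For the exceptions mentioned above, it helps to represent each grant as data with an explicit expiry rather than relying on someone remembering to revoke it. A minimal sketch; the Grant structure and role names are hypothetical, and a real system would tie this to the identity provider.

```python
# A minimal sketch: time-boxed exception grants for elevated privileges.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Grant:
    user: str
    role: str
    expires_at: datetime  # every exception carries an explicit expiry

def is_active(grant: Grant, now: datetime | None = None) -> bool:
    return (now or datetime.now()) < grant.expires_at

g = Grant("alice", "billing-admin", expires_at=datetime.now() + timedelta(days=7))
print(is_active(g))  # True for one week, after which access lapses automatically
```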
In addition, encrypting data both in transit and at rest, keeping multiple backups so that systems can be reliably restored even if a cyberattack takes them down, and conducting regular recovery drills are all effective countermeasures.
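Recovery drills are only meaningful if the backups themselves are intact. One simple, illustrative check is to record a SHA-256 digest when each backup is taken and verify it during the drill; the file name and stored digest below are placeholders.

```python
# A minimal sketch: verify backup integrity as part of a recovery drill.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

recorded_digest = "..."  # digest stored when the backup was taken (placeholder)
if sha256_of("backup_2024.tar.gz") != recorded_digest:
    print("[ALERT] backup does not match its recorded digest; do not rely on it")
```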
Contracts
When utilizing cloud services or outsourcing, it is essential to objectively clarify “who bears responsibility and to what extent” within contracts and terms of service. Key points to confirm in advance include: responsibility for data breaches, the permissibility of subcontracting, reporting obligations and response speed in case of incidents, access rights management methods, and the availability of log retention and audits.
The Personal Information Protection Commission also emphasizes that when using cloud services, it is crucial for businesses to understand the specifics of security measures and clearly define agreed terms in contracts and terms of service. Rather than assuming “famous companies or services are safe,” it is essential to recognize that security measures are only complete when the contract is finalized.
Internal Rules for Safe Use of Generative AI
To safely utilize generative AI, it is essential for companies to establish clear policies and operational rules rather than leaving decisions to individual judgment. The foundation for this approach is the existence of the national “AI Business Operator Guidelines.”
What the AI Business Operator Guidelines Require
The Ministry of Economy, Trade and Industry (METI) has established the AI Business Operator Guidelines, which outline common principles and guidelines for safe and secure AI utilization. The positioning of these guidelines is explained as follows:
“These guidelines present fundamental principles regarding necessary measures for AI development, provision, and utilization. Therefore, it is crucial that all businesses engaged in AI utilization independently promote specific measures, using these guidelines as one reference point in their actual AI development, provision, and utilization activities.”
Source: Ministry of Economy, Trade and Industry, AI Business Operator Guidelines (Version 1.1)
This makes clear that companies are expected to proactively establish clear policies and internal guidelines for the safe use of AI and demonstrate a commitment to implementing them at the operational level.
“OK/NG” Decision Criteria to Avoid Confusion on the Front Lines
No matter how well crafted an AI usage policy is, it won't function as a safeguard if frontline staff don't know how to act on it. When translating internal rules into on-site practice, it is therefore crucial to provide criteria that allow an immediate OK/NG judgment on whether AI may be used in a given situation, rather than relying on complex technical jargon.
The following table summarizes decision criteria to avoid confusion in practical operations.
| Category | Examples | Points to Note |
| --- | --- | --- |
| A: Prohibited | Personal information (names, addresses, phone numbers), confidential documents (quotes, contracts, unpublished materials), source code and API keys | Never input into AI under any circumstances. Any data leakage may lead directly to legal liability. |
| B: Conditional | Text in which proper nouns and figures have been anonymized | Proper nouns and numerical data must be anonymized so that individuals or companies cannot be identified. Use is limited to internal purposes only. |
| C: Allowed | General drafting and text based on publicly available information | Final checks must be conducted internally before use. The person in charge is responsible for management and approval. |
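These A/B/C criteria can likewise be turned into a simple gate in whatever tooling sits in front of an AI service. A minimal sketch, assuming the category of the text has already been determined upstream:

```python
# A minimal sketch: route text by its A/B/C category before it reaches
# an AI tool. Category assignment itself is assumed to happen upstream.
from enum import Enum

class Category(Enum):
    A_PROHIBITED = "A"
    B_CONDITIONAL = "B"
    C_ALLOWED = "C"

def gate(category: Category, anonymized: bool) -> str:
    if category is Category.A_PROHIBITED:
        return "reject: must never be input into AI"
    if category is Category.B_CONDITIONAL and not anonymized:
        return "reject: anonymize proper nouns and figures first"
    return "allow: remember the internal final check before use"

print(gate(Category.B_CONDITIONAL, anonymized=False))
```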
Basic Rules for Using Generative AI Tools
Equally important are the operational rules for the generative AI tools themselves. The fundamental principle is to limit business use to tools approved by the company and never to adopt external services on individual judgment. It is also essential to confirm in advance whether the AI stores input data, whether logs are retained, and whether inputs might be used for training.
Furthermore, outputs from generative AI must never be used as definitive answers without final human verification. This is especially critical in fields like legal, medical, and financial services, where errors directly translate to risk; the human decision-making process must remain intact. For highly confidential tasks, operations should be restricted to internal or dedicated environments to minimize the risk of information leakage.
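The "approved tools only" principle can be backed by a simple allowlist check in a proxy or client wrapper. A minimal sketch; the endpoints below are placeholders, not real services.

```python
# A minimal sketch: restrict business use to company-approved AI tools.
# The allowlist entries are hypothetical placeholders.
APPROVED_AI_TOOLS = {
    "https://ai.example-internal.co.jp",  # hypothetical internal deployment
}

def is_approved(endpoint: str) -> bool:
    return endpoint in APPROVED_AI_TOOLS

for url in ["https://ai.example-internal.co.jp", "https://random-free-ai.example.com"]:
    status = "allowed" if is_approved(url) else "blocked: not on the approved list"
    print(f"{url} -> {status}")
```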
Design for Safe, Continuous Use in the AI and Cloud Era
While AI and cloud technologies significantly enhance operational efficiency, they also bring constant risks like information leaks and legal liability. This is precisely why it’s crucial for companies to establish systems and processes focused not on “not using” these technologies, but on “how to use them safely and continuously.”
Designing operations that encompass human awareness, rules, technology, and contracts is essential. Particularly for generative AI, its inherent characteristic that "input = sharing" demands careful handling and a rigorous final verification process.
One approach to balancing this security with operational efficiency is to utilize specialized AI services like “DIP Ceph,” which are tailored for specific professional tasks and designed with careful information handling in mind. If you aim to reduce the burden on frontline staff while improving diagnostic quality and the clarity of explanations, exploring such specialized tools as part of your information gathering process is a worthwhile consideration.
For details on actual features and usage scenarios, please refer to the official introduction page.