AI Usage: Trust, Responsibility, and Security

Adopt


While we wait for the official publication of our AI Policy, here are the "rules of the road" for using AI right now: guidelines to help you in your discovery and trials.
The idea is simple: encourage innovation on a foundation of trust and individual accountability.

RED ZONE: Strictly Prohibited

Sensitive Data (PII / Financials / Commercial / Secrets): NEVER enter confidential data (client information or company secrets) into a public or personally owned AI tool. If it shouldn't leave the company, it doesn't go into the prompt.

Zero Autonomous Critical Actions (Human in the Loop): It is forbidden to let an AI make a final decision or execute a sensitive action (e.g., sending an external email, merging code, deleting data) without human validation. A human must always have the final say.
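The human-in-the-loop rule can be sketched as a simple gate: the AI may propose an action, but nothing runs until a person explicitly approves it. This is an illustrative pattern, not an official company helper; the function name and interface are made up for this example.

```python
from typing import Callable

def apply_if_approved(action_description: str, approved: bool,
                      execute: Callable[[], None]) -> str:
    """Run an AI-proposed action only if a human explicitly approved it.

    `approved` must come from a real human decision (a review UI,
    a confirmation prompt, a PR approval) -- never from the model itself.
    """
    if not approved:
        # The default is always "do nothing": no approval, no side effect.
        return f"rejected: {action_description}"
    execute()
    return f"executed: {action_description}"
```

The point of the pattern is that the side effect lives behind the gate: an agent can draft the email or prepare the merge, but the call that actually sends, merges, or deletes only happens on an explicit human "yes".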

GREEN ZONE: Individual Use (Personal Productivity)

What is it? You use AI for your own efficiency (summarizing, drafting, coding).
The Rule: You are the Pilot.
Don't forget: AI doesn't "know"; it predicts. It can make mistakes. You are 100% responsible for the quality and accuracy of your output, even if it was generated by AI.

GRAY ZONE: Creating & Sharing Tools (Gems, Agents, Bots)

What is it? You configure a "Gem," an agent, or a script, and you share it with the team.
The Rule: "You Build It, You Own It."
The moment you share a tool, you stop being a simple user and become a product owner.
Your Responsibility: You must guarantee that your tool will not leak data, even if a colleague uses it incorrectly.
The Test: If you cannot guarantee the security of the data passing through your agent, DO NOT SHARE IT. Keep it for yourself.

These common-sense principles apply immediately to protect us all.
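One concrete way a shared agent can protect against a colleague's incorrect use is to scrub prompts before they leave the company. The sketch below is a naive illustration only: the two regex patterns are examples, not a complete PII detector, and a real shared tool would need a proper screening layer.

```python
import re

# Illustrative-only guard for a shared agent: mask obvious sensitive
# patterns before a prompt is forwarded to an external model.
# These patterns are deliberately simple examples, NOT exhaustive.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub_prompt(prompt: str) -> str:
    """Replace matches with placeholders so the raw values never leave."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt
```

The design choice matters more than the patterns: the guard sits inside the tool you share, so it works even when the person using your agent forgets the rules.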

More info:

Platform: global-platform