
Tips for Safe AI Image Editing—Learning from Azure OpenAI Service


Introduction

As of October 2025, AI-related news appears daily, touting how impressive the latest model is or what it can now do.

Progress has been so rapid that even complex image edits can now be done easily with AI tools. As we take advantage of them, we should not lose sight of how to use them safely.

We tend to be captivated by performance: what stunning images can be created, how naturally they can be edited. It is equally important, however, to consider whether a generated image could harm someone.

Learning Responsible AI from Enterprise Services

Microsoft's Azure OpenAI Service (AOAI) is a very helpful reference from this perspective.

AOAI is designed for enterprises and government agencies. While it uses the same underlying technology as OpenAI, it incorporates stricter ethical safeguards (guardrails) based on the principles of Responsible AI.
OpenAI upholds similar principles, of course, but AOAI is operated more conservatively.

Why Does AOAI Have Stricter Policies?

For example, when a corporation uses OpenAI's API directly, the administrator must complete personal identity verification to prevent misuse. This approach places responsibility on the individuals who operate the API.

On the other hand, AOAI is a corporate cloud service, and its use is authorized through Azure contracts and authentication infrastructure (e.g., Microsoft Entra ID). It is not realistic for every employee to submit identity documents just to use an internal service. AOAI accounts for this and removes the need for individual identity verification. In short, trust is placed in the organization rather than the individual.
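
To make this concrete, below is a minimal sketch of how an application might authenticate to AOAI through Microsoft Entra ID instead of a personal API key. It assumes the openai and azure-identity Python packages; the endpoint and API version shown are hypothetical placeholders.

```python
# A minimal sketch of Entra ID-based authentication to Azure OpenAI.
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# DefaultAzureCredential resolves the caller's identity from the Azure
# environment (managed identity, Azure CLI login, etc.); no API key or
# personal identity documents are involved.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(),
    "https://cognitiveservices.azure.com/.default",
)

client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",  # placeholder
    azure_ad_token_provider=token_provider,
    api_version="2025-04-01-preview",  # assumption: any recent API version
)
```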

With this approach, AOAI, as an enterprise platform, has set up particularly stringent restrictions (guardrails) on its use.

Examples of Operations Blocked by AOAI

So, what specific restrictions are in place?
Based on my testing experience, the following operations were blocked by AOAI (for testing, I used the gpt-image-1 model); a sketch of how such a block surfaces in code follows the list.

  • Instructions to edit images of minors (including context-based judgments, like someone in a school uniform)
  • Including insulting or discriminatory words in prompts
  • Instructions to swap the face of a specific individual

These restrictions show that AOAI focuses on at least the following three points:

AOAI's Three Key Areas of Focus

  • Protecting minors who are in socially vulnerable positions
  • Protecting human dignity from verbal violence such as discrimination and insults
  • Preventing the spread of misinformation through deepfakes and similar technologies

To put it simply, this isn't a wall to restrict freedom but a framework to protect others and society.

Such a framework is extremely helpful for avoiding unintentionally hurting someone and for not accidentally becoming a perpetrator.

Transparency in the AI Era: Content Credentials

Recently, as image generation and editing have become easily accessible to individuals, the risk of unintentionally hurting others has also increased.

To address this challenge, images generated or edited by AI tools often carry Content Credentials: metadata embedded in the image itself.

This is a new system designed to ensure transparency in AI generation. It records information such as:

  • When it was created
  • What tools were used
  • Who edited it

It is like a digital version of a nutrition facts label.

The system makes it possible to detect tampering and accidental modification of content.
In other words, it allows people who use AI properly to publish content with confidence and verifiable authenticity.
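
As a small illustration, the sketch below inspects whatever Content Credentials an image carries. It assumes the open-source c2patool CLI from the Content Authenticity Initiative is installed and that it prints the manifest as JSON; the file name is a placeholder.

```python
# A minimal sketch: dump the C2PA manifest (Content Credentials)
# embedded in an image, assuming the c2patool CLI is installed.
import json
import subprocess

def read_content_credentials(path: str) -> dict | None:
    """Return the image's Content Credentials manifest, or None if absent."""
    result = subprocess.run(
        ["c2patool", path],  # c2patool reports the manifest as JSON
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None  # no manifest found, or the tool reported an error
    return json.loads(result.stdout)

manifest = read_content_credentials("edited_photo.png")  # placeholder file
if manifest:
    print(json.dumps(manifest, indent=2))  # when, which tool, who edited
```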

Summary

Instead of getting swept away by convenience, take a moment to ask yourself: "Could this content cause harm to others?"

Building up this kind of awareness is the first step toward using AI safely. I believe that, ultimately, this protects both you and your organization.

